U.S. officials at the federal and state level are assessing the possibility of “market manipulation” by short sellers behind big moves in banking share prices in recent days

Categories: Gamestop_, Issue 2023Q2

Credit goes to Reuters:

May 4 (Reuters) – U.S. officials at the federal and state level are assessing the possibility of “market manipulation” behind big moves in banking share prices in recent days, a source familiar with the matter said on Thursday.

Shares of regional banks resumed their slide this week after the collapse of First Republic Bank, the third U.S. mid-sized lender to fail in two months. Short sellers raked in $378.9 million in paper profits on Thursday alone from betting against certain regional banks, according to analytics firm Ortex.

Increased short-selling activity and share-price volatility have drawn growing scrutiny from federal and state officials and regulators in recent days, given strong fundamentals in the sector and sufficient capital levels, said the source, who was not authorized to speak publicly.

“State and federal regulators and officials are increasingly attentive to the possibility of market manipulation regarding banking equities,” the source said.

PacWest Bancorp (PACW.O) shares slid 57% on Thursday, dragging down other regional lenders, after the Los Angeles-based bank said it was in talks about strategic options.

Western Alliance Bancorp (WAL.N) denied a report from the Financial Times that said it was exploring a potential sale, and said it was exploring legal options. The report had sent the lender’s shares down as much as 61.5% before trading was halted.

Share price swings did not reflect the fact that many regional banks outperformed on first quarter earnings and had sound fundamentals, including stable deposits, sufficient capital, and decreased uninsured deposits, the source said.

“This week we have seen that regional banks remain well-capitalized,” the source said.

Short selling, in which investors sell borrowed securities and aim to buy them back at a lower price to pocket the difference, is not illegal and is considered part of a healthy market. But manipulating stock prices, which the SEC has defined as “intentional or willful conduct designed to deceive or defraud investors by controlling or artificially affecting” stock prices, is.
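The mechanics described above are easy to illustrate with made-up numbers (illustrative figures only, not from the article):

```python
# Hypothetical short sale, illustrative numbers only (not from the article).
shares = 1_000
sell_price = 30.00   # borrow 1,000 shares and sell them at $30
buy_price = 12.00    # later buy them back ("cover") at $12

profit = shares * (sell_price - buy_price)
print(profit)  # 18000.0 -- the short seller pockets the difference

# If the price rises instead, the same position loses money:
loss = shares * (sell_price - 45.00)
print(loss)    # -15000.0
```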

An official with the U.S. Securities and Exchange Commission told Reuters on Wednesday the agency was “not currently contemplating” a short-selling ban.

On Thursday the agency did not respond immediately to a Reuters request for comment.

But the source familiar with current events noted that the agency had warned in March, during a previous period of high market volatility surrounding the collapse of Silicon Valley Bank and Signature Bank, that it was carefully monitoring market stability and would prosecute any form of misconduct.

Google “We Have No Moat, And Neither Does OpenAI”

Categories: Issue 2023Q2, Site Updates_
Leaked Internal Google Document Claims Open Source AI Will Outcompete Google and OpenAI

Please consider supporting the source: https://www.semianalysis.com/p/google-we-have-no-moat-and-neither

We’ve done a lot of looking over our shoulders at OpenAI. Who will cross the next milestone? What will the next move be?

But the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.

I’m talking, of course, about open source. Plainly put, they are lapping us. Things we consider “major open problems” are solved and in people’s hands today.

While our models still hold a slight edge in terms of quality, the gap is closing astonishingly quickly. Open-source models are faster, more customizable, more private, and pound-for-pound more capable. They are doing things with $100 and 13B params that we struggle with at $10M and 540B. And they are doing so in weeks, not months. This has profound implications for us:

  • We have no secret sauce. Our best hope is to learn from and collaborate with what others are doing outside Google. We should prioritize enabling 3P integrations.
  • People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is.
  • Giant models are slowing us down. In the long run, the best models are the ones which can be iterated upon quickly. We should make small variants more than an afterthought, now that we know what is possible in the <20B parameter regime.

https://lmsys.org/blog/2023-03-30-vicuna/

What Happened

At the beginning of March the open source community got their hands on their first really capable foundation model, as Meta’s LLaMA was leaked to the public. It had no instruction or conversation tuning, and no RLHF. Nonetheless, the community immediately understood the significance of what they had been given.

A tremendous outpouring of innovation followed, with just days between major developments (see The Timeline for the full breakdown). Here we are, barely a month later, and there are variants with instruction tuning, quantization, quality improvements, human evals, multimodality, RLHF, etc., many of which build on each other.

Most importantly, they have solved the scaling problem to the extent that anyone can tinker. Many of the new ideas are from ordinary people. The barrier to entry for training and experimentation has dropped from the total output of a major research organization to one person, an evening, and a beefy laptop.

Why We Could Have Seen It Coming

In many ways, this shouldn’t be a surprise to anyone. The current renaissance in open source LLMs comes hot on the heels of a renaissance in image generation. The similarities are not lost on the community, with many calling this the “Stable Diffusion moment” for LLMs.

In both cases, low-cost public involvement was enabled by a vastly cheaper mechanism for fine tuning called low rank adaptation, or LoRA, combined with a significant breakthrough in scale (latent diffusion for image synthesis, Chinchilla for LLMs). In both cases, access to a sufficiently high-quality model kicked off a flurry of ideas and iteration from individuals and institutions around the world. In both cases, this quickly outpaced the large players.

These contributions were pivotal in the image generation space, setting Stable Diffusion on a different path from Dall-E. Having an open model led to product integrations, marketplaces, user interfaces, and innovations that didn’t happen for Dall-E.

The effect was palpable: rapid domination in terms of cultural impact vs the OpenAI solution, which became increasingly irrelevant. Whether the same thing will happen for LLMs remains to be seen, but the broad structural elements are the same.

What We Missed

The innovations that powered open source’s recent successes directly solve problems we’re still struggling with. Paying more attention to their work could help us to avoid reinventing the wheel.

LoRA is an incredibly powerful technique we should probably be paying more attention to

LoRA works by representing model updates as low-rank factorizations, which reduces the size of the update matrices by a factor of up to several thousand. This allows model fine-tuning at a fraction of the cost and time. Being able to personalize a language model in a few hours on consumer hardware is a big deal, particularly for aspirations that involve incorporating new and diverse knowledge in near real-time. The fact that this technology exists is underexploited inside Google, even though it directly impacts some of our most ambitious projects.
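The factorization idea can be sketched in a few lines of NumPy (toy shapes chosen for illustration; this is not Google's or the LoRA paper's implementation):

```python
import numpy as np

# LoRA sketch: instead of learning a full d x k update to a frozen weight
# matrix W, learn two low-rank factors B (d x r) and A (r x k), r << min(d, k).
d, k, r = 1024, 1024, 8
rng = np.random.default_rng(0)

W = rng.standard_normal((d, k))   # frozen pretrained weights
B = np.zeros((d, r))              # trainable factor, initialized to zero
A = rng.standard_normal((r, k))   # trainable factor

delta = B @ A                     # the low-rank update (zero before training)
W_adapted = W + delta             # effective weights at inference time

full_params = d * k               # parameters in a full-rank update
lora_params = d * r + r * k       # parameters LoRA actually trains
print(full_params // lora_params) # 64 -- 64x fewer trainable parameters here
```

With larger matrices and small ranks the same arithmetic gives the several-thousand-fold reductions mentioned above; only `B` and `A` ever see gradients, which is what makes consumer-hardware fine-tuning feasible.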

Retraining models from scratch is the hard path

Part of what makes LoRA so effective is that – like other forms of fine-tuning – it’s stackable. Improvements like instruction tuning can be applied and then leveraged as other contributors add on dialogue, or reasoning, or tool use. While the individual fine tunings are low rank, their sum need not be, allowing full-rank updates to the model to accumulate over time.
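The rank-accumulation claim can be checked on a toy example (illustrative NumPy, not from the memo): two independent rank-1 updates sum to a rank-2 update.

```python
import numpy as np

# Toy check: stacked low-rank fine-tunes accumulate rank beyond any single one.
rng = np.random.default_rng(0)
d, k, r = 16, 16, 1

def lora_update():
    B = rng.standard_normal((d, r))
    A = rng.standard_normal((r, k))
    return B @ A

u1, u2 = lora_update(), lora_update()
print(np.linalg.matrix_rank(u1))       # 1 -- each update is low rank
print(np.linalg.matrix_rank(u1 + u2))  # 2 -- but their sum need not be
```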

This means that as new and better datasets and tasks become available, the model can be cheaply kept up to date, without ever having to pay the cost of a full run.

By contrast, training giant models from scratch not only throws away the pretraining, but also any iterative improvements that have been made on top. In the open source world, it doesn’t take long before these improvements dominate, making a full retrain extremely costly.

We should be thoughtful about whether each new application or idea really needs a whole new model. If we really do have major architectural improvements that preclude directly reusing model weights, then we should invest in more aggressive forms of distillation that allow us to retain as much of the previous generation’s capabilities as possible.

Large models aren’t more capable in the long run if we can iterate faster on small models

LoRA updates are very cheap to produce (~$100) for the most popular model sizes. This means that almost anyone with an idea can generate one and distribute it. Training times under a day are the norm. At that pace, it doesn’t take long before the cumulative effect of all of these fine-tunings overcomes starting off at a size disadvantage. Indeed, in terms of engineer-hours, the pace of improvement from these models vastly outstrips what we can do with our largest variants, and the best are already largely indistinguishable from ChatGPT. Focusing on maintaining some of the largest models on the planet actually puts us at a disadvantage.

Data quality scales better than data size

Many of these projects are saving time by training on small, highly curated datasets. This suggests there is some flexibility in data scaling laws. The existence of such datasets follows from the line of thinking in Data Doesn’t Do What You Think, and they are rapidly becoming the standard way to do training outside Google. These datasets are built using synthetic methods (e.g. filtering the best responses from an existing model) and scavenging from other projects, neither of which is dominant at Google. Fortunately, these high quality datasets are open source, so they are free to use.

Directly Competing With Open Source Is a Losing Proposition

This recent progress has direct, immediate implications for our business strategy. Who would pay for a Google product with usage restrictions if there is a free, high quality alternative without them?

And we should not expect to be able to catch up. The modern internet runs on open source for a reason. Open source has some significant advantages that we cannot replicate.

We need them more than they need us

Keeping our technology secret was always a tenuous proposition. Google researchers are leaving for other companies on a regular cadence, so we can assume they know everything we know, and will continue to for as long as that pipeline is open.

But holding on to a competitive advantage in technology becomes even harder now that cutting edge research in LLMs is affordable. Research institutions all over the world are building on each other’s work, exploring the solution space in a breadth-first way that far outstrips our own capacity. We can try to hold tightly to our secrets while outside innovation dilutes their value, or we can try to learn from each other.

Individuals are not constrained by licenses to the same degree as corporations

Much of this innovation is happening on top of the leaked model weights from Meta. While this will inevitably change as truly open models get better, the point is that they don’t have to wait. The legal cover afforded by “personal use” and the impracticality of prosecuting individuals means that individuals are getting access to these technologies while they are hot.

Being your own customer means you understand the use case

Browsing through the models that people are creating in the image generation space, there is a vast outpouring of creativity, from anime generators to HDR landscapes. These models are used and created by people who are deeply immersed in their particular subgenre, lending a depth of knowledge and empathy we cannot hope to match.

Owning the Ecosystem: Letting Open Source Work for Us

Paradoxically, the one clear winner in all of this is Meta. Because the leaked model was theirs, they have effectively garnered an entire planet’s worth of free labor. Since most open source innovation is happening on top of their architecture, there is nothing stopping them from directly incorporating it into their products.

The value of owning the ecosystem cannot be overstated. Google itself has successfully used this paradigm in its open source offerings, like Chrome and Android. By owning the platform where innovation happens, Google cements itself as a thought leader and direction-setter, earning the ability to shape the narrative on ideas that are larger than itself.

The more tightly we control our models, the more attractive we make open alternatives. Google and OpenAI have both gravitated defensively toward release patterns that allow them to retain tight control over how their models are used. But this control is a fiction. Anyone seeking to use LLMs for unsanctioned purposes can simply take their pick of the freely available models.

Google should establish itself as a leader in the open source community, taking the lead by cooperating with, rather than ignoring, the broader conversation. This probably means taking some uncomfortable steps, like publishing the model weights for small ULM variants. This necessarily means relinquishing some control over our models. But this compromise is inevitable. We cannot hope to both drive innovation and control it.

Epilogue: What about OpenAI?

All this talk of open source can feel unfair given OpenAI’s current closed policy. Why do we have to share, if they won’t? But the fact of the matter is, we are already sharing everything with them in the form of the steady flow of poached senior researchers. Until we stem that tide, secrecy is a moot point.

And in the end, OpenAI doesn’t matter. They are making the same mistakes we are in their posture relative to open source, and their ability to maintain an edge is necessarily in question. Open source alternatives can and will eventually eclipse them unless they change their stance. In this respect, at least, we can make the first move.

Marisa Tomei

Categories: Issue 2023Q2, Movies to Watch

Marisa Tomei (born December 4, 1964) is an American actress. She was a cast member on The Cosby Show spin-off A Different World in 1987. For her role in the 1992 comedy My Cousin Vinny, she won the Academy Award for Best Supporting Actress. She has received two additional Oscar nominations, for In the Bedroom (2001) and The Wrestler (2008), with the latter also earning her nominations at the BAFTA and Golden Globe Awards.

Tomei has appeared in a number of notable films, including Chaplin (1992), The Paper (1994), What Women Want (2000), Before the Devil Knows You’re Dead (2007), The Ides of March (2011), Crazy, Stupid, Love (2011), Parental Guidance (2012), Love Is Strange (2014), and The Big Short (2015). She also portrayed May Parker in the Marvel Cinematic Universe (MCU), having appeared in Captain America: Civil War (2016), Spider-Man: Homecoming (2017), Avengers: Endgame (2019), Spider-Man: Far From Home (2019), and Spider-Man: No Way Home (2021).

Tomei was formerly involved with the Naked Angels Theater Company and appeared in Daughters (1986) before making her Broadway debut in Wait Until Dark (1998). She earned a nomination for the Drama Desk Award for Outstanding Featured Actress in a Play for her role in Top Girls (2008), and a special Drama Desk Award for The Realistic Joneses (2014). She returned to Broadway in the revival of The Rose Tattoo in 2019.

Did JP Morgan Chase just get a “not-a-bailout” bailout to make it a bigger systemic risk so that the global financial system must bail them out?

Categories: Gamestop_, Issue 2023Q2, Site Updates_

From u/WhatCanIMakeToday:

According to the list of global systemically important banks (Wikipedia; Financial Stability Board (FSB); FSB PDF), JP Morgan Chase is top dog as the only Tier 4 bank. (The higher the tier, the more systemic risk the bank poses to the financial system, so the required capital buffer is higher at each tier.)

A systemically important financial institution (SIFI) is a bank, insurance company, or other financial institution whose failure might trigger a financial crisis. They are colloquially referred to as “too big to fail“. [Wikipedia]

According to the Bank for International Settlements (BIS), which has a dashboard showing scores and components for Global Systemically Important Banks (GSIBs), JP Morgan Chase is by itself in Tier 4 with the highest overall risk rating as the most interconnected bank with the most complex banking relationships.

Global systemically important banks: assessment methodology and the additional loss absorbency requirement (Nov 2022)

Score Calculation Methodology [PDF]

The G-SIB assessment methodology – score calculation (BIS, Nov 2014)

If JP Morgan Chase were to fail, the financial system would be at high risk of a financial crisis. That puts JPM Chase in an interesting position: the global financial system is incentivized to keep JPM from failing, and if any institution is going to fail, putting the most complex and interconnected one at risk maximizes the likelihood of another bailout.

Some of you may remember from 2 years ago (April 15, 2021) that JP Morgan sold $13B in bonds in the largest bank deal ever at the time (SuperStonk; Bloomberg) to raise money. The next day, Bank of America took the lead by selling $15B worth of bonds (SuperStonk DD; Bloomberg, April 16, 2021).

So if JP Morgan needed to raise some serious money without getting a bailout, buying another bank in a sweetheart deal seems like another way to juice up JP Morgan’s balance sheet with some good PR. According to CNN Business,

First Republic … had assets of $229.1 billion as of April 13. As of the end of last year, it was the nation’s 14th-largest bank, according to a ranking by the Federal Reserve. JPMorgan Chase is the largest bank in the United States with total global assets of nearly $4 trillion as of March 31.

Now that JP Morgan picked up First Republic, their total assets increase by about $229B (about 5.7%). And, according to Reuters [Factbox], JP Morgan just got a pretty sweet deal with First Republic Bank:

  • JPMorgan Chase will pay $10.6 billion to the Federal Deposit Insurance Corp (FDIC)
  • Will not assume First Republic’s corporate debt or preferred stock
  • FDIC to provide loss share agreements with respect to most acquired loans

So JP Morgan paid $10.6B to pick up $229B (less than 5c on the dollar), passes on the corporate debt, and shares losses with the FDIC so that:

  • JPMorgan expects one-time gain of $2.6 billion post-tax at closing
  • Estimated to add roughly $500 million to net income and be accretive to tangible book value per share

That’s a pretty damn good deal. Let’s look more into what the FDIC says about shared loss agreements (SLA).

FDIC FAQ on Shared Loss Agreements

The FDIC absorbs a portion of the loss on assets sold when resolving a failed bank, “sharing the loss with the purchaser of the failed bank.” Sounds like the FDIC just took one for the team.

FDIC FAQ on Shared Loss Agreements

According to the FDIC, loss sharing is basically an 80/20 split (except after the 2008 Great Financial Crisis, when the split was 95/5; that arrangement has ended).

FDIC FAQ on Shared Loss Agreements

According to the FDIC, resolving a failed bank with loss sharing is supposed to be based on the least costly option (to the Deposit Insurance Fund). (We’ve seen this least costly option come up in resolving bank failures before, with the FDIC and Federal Reserve contemplating requiring Too Big To Fail banks to sell destined-to-fail bonds to absorb losses and reduce payouts by the FDIC Deposit Insurance Fund [SuperStonk].)

According to Investopedia (“JPMorgan To Pay FDIC $10.6 Billion For First Republic, This Is What It Gets”), resolving the FRC bank failure will cost the FDIC Deposit Insurance Fund $13B.

The FDIC will take a $13 billion hit to its fund and provide $50 billion in financing.

Wait, the FDIC is providing $50B to finance JPM buying FRC?!


The FDIC loaned JP Morgan $50B to buy the failed First Republic bank for $10.6B. $30B of that was used to repay a rescue deal from March (last month) backed by JP Morgan, Citigroup, Bank of America, and Wells Fargo. Which means JP Morgan gets their money back from the previous rescue plus an extra $9.4B out of this loan deal to buy $229B worth of assets from First Republic.

On top of that, JP Morgan splits losses with the FDIC 80/20 with the FDIC covering 80% of loan losses for the next 5-7 years (5 years for commercial loans and 7 years for residential mortgages).

Imagine if a bank loans you $9,400 to buy a $229,000 house. No down payment. Just “here’s $9,400 and the keys to that $229,000 house”. Oh, and the bank will cover 80% of the cost for anything that breaks in the house for the next 5-7 years. This is an insane deal.
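The arithmetic above can be checked directly from the figures quoted in the post (a back-of-the-envelope sketch, not an accounting of the actual deal):

```python
# Figures as quoted in the post (in $ billions).
assets_acquired = 229.0     # First Republic assets, per CNN
price_paid = 10.6           # paid by JPMorgan to the FDIC
fdic_financing = 50.0       # financing provided by the FDIC
march_rescue_repaid = 30.0  # repays the March big-bank rescue deposits

# Price paid relative to assets acquired:
cents_on_dollar = price_paid / assets_acquired * 100
print(round(cents_on_dollar, 1))  # 4.6 -- "less than 5c on the dollar"

# Financing left over after repaying the rescue and the purchase price:
net_extra = fdic_financing - march_rescue_repaid - price_paid
print(round(net_extra, 1))        # 9.4 -- the "extra $9.4B" in the post
```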

Which truly makes one wonder if this is a “not-a-bailout” bailout for JP Morgan, the only Tier 4 GSIB as the most interconnected bank with the most complex banking relationships and the highest overall systemic risk rating.

Are we going to see:

  1. Fat bonuses at JP Morgan?
  2. Followed by news about JP Morgan posing a systemic risk?
  3. Followed by calls to bail out JP Morgan to save pensions?

Wes Anderson – Selected Filmography

Categories: Issue 2023Q2, Movies to Watch

Anderson gained acclaim for his early work Bottle Rocket (1996) and Rushmore (1998). During this time, he often collaborated with Luke Wilson and Owen Wilson and founded his production company, American Empirical Pictures, which he currently runs. He then received a nomination for the Academy Award for Best Original Screenplay for The Royal Tenenbaums (2001). His next films included The Life Aquatic with Steve Zissou (2004), The Darjeeling Limited (2007), and his first stop-motion film, Fantastic Mr. Fox (2009), for which he received an Academy Award for Best Animated Feature nomination, followed by Moonrise Kingdom (2012), which earned him his second Academy Award for Best Original Screenplay nomination.

With The Grand Budapest Hotel (2014), Anderson received his first Academy Award nominations for Best Director and Best Picture, and won the Golden Globe Award for Best Motion Picture – Musical or Comedy and the BAFTA Award for Best Original Screenplay. His next films included his second stop-motion film, Isle of Dogs (2018), which earned him the Silver Bear for Best Director, and The French Dispatch (2021). His next film, Asteroid City, is slated for release in June 2023.

BBC’s 100 Greatest Films of the 21st Century

Categories: Issue 2023Q2, Movies to Watch
No. Title Director Country Year
1 Mulholland Drive David Lynch United States, France 2001
2 In the Mood for Love Wong Kar-wai Hong Kong, France 2000
3 There Will Be Blood Paul Thomas Anderson United States 2007
4 Spirited Away Hayao Miyazaki Japan 2001
5 Boyhood Richard Linklater United States 2014
6 Eternal Sunshine of the Spotless Mind Michel Gondry United States 2004
7 The Tree of Life Terrence Malick United States 2011
8 Yi Yi Edward Yang Taiwan, Japan 2000
9 A Separation Asghar Farhadi Iran 2011
10 No Country for Old Men Joel Coen and Ethan Coen United States 2007
11 Inside Llewyn Davis Joel Coen and Ethan Coen United States, France 2013
12 Zodiac David Fincher United States 2007
13 Children of Men Alfonso Cuarón United Kingdom, United States 2006
14 The Act of Killing Joshua Oppenheimer Norway, Denmark, United Kingdom 2012
15 4 Months, 3 Weeks and 2 Days Cristian Mungiu Romania 2007
16 Holy Motors Leos Carax France, Germany 2012
17 Pan’s Labyrinth Guillermo del Toro Spain, Mexico 2006
18 The White Ribbon Michael Haneke France, Austria, Germany, Italy 2009
19 Mad Max: Fury Road George Miller Australia 2015
20 Synecdoche, New York Charlie Kaufman United States 2008
21 The Grand Budapest Hotel Wes Anderson United States 2014
22 Lost in Translation Sofia Coppola United States 2003
23 Caché Michael Haneke France, Austria, Germany, Italy 2005
24 The Master Paul Thomas Anderson United States 2012
25 Memento Christopher Nolan United States 2001
26 25th Hour Spike Lee United States 2002
27 The Social Network David Fincher United States 2010
28 Talk to Her Pedro Almodóvar Spain 2002
29 WALL-E Andrew Stanton United States 2008
30 Oldboy Park Chan-wook South Korea 2003
31 Margaret Kenneth Lonergan United States 2011
32 The Lives of Others Florian Henckel von Donnersmarck Germany 2006
33 The Dark Knight Christopher Nolan United States 2008
34 Son of Saul László Nemes Hungary 2015
35 Crouching Tiger, Hidden Dragon Ang Lee Taiwan, China, Hong Kong, United States 2000
36 Timbuktu Abderrahmane Sissako Mauritania, France 2014
37 Uncle Boonmee Who Can Recall His Past Lives Apichatpong Weerasethakul Thailand 2010
38 City of God Fernando Meirelles and Kátia Lund Brazil 2002
39 The New World Terrence Malick United States 2005
40 Brokeback Mountain Ang Lee United States 2005
41 Inside Out Pete Docter United States 2015
42 Amour Michael Haneke France, Austria, Germany 2012
43 Melancholia Lars von Trier Denmark, Sweden, France, Germany 2011
44 12 Years a Slave Steve McQueen United States, United Kingdom 2013
45 Blue Is the Warmest Colour Abdellatif Kechiche France, Belgium, Spain 2013
46 Certified Copy Abbas Kiarostami Iran 2010
47 Leviathan Andrey Zvyagintsev Russia 2014
48 Brooklyn John Crowley United Kingdom, Canada, Ireland 2015
49 Goodbye to Language Jean-Luc Godard France, Switzerland 2014
50 The Assassin Hou Hsiao-hsien Taiwan, China, Hong Kong 2015
51 Inception Christopher Nolan United States, United Kingdom 2010
52 Tropical Malady Apichatpong Weerasethakul Thailand 2004
53 Moulin Rouge! Baz Luhrmann Australia 2001
54 Once Upon a Time in Anatolia Nuri Bilge Ceylan Turkey 2011
55 Ida Paweł Pawlikowski Poland, Denmark, France, United Kingdom 2013
56 Werckmeister Harmonies Béla Tarr and Ágnes Hranitzky Hungary 2000
57 Zero Dark Thirty Kathryn Bigelow United States 2012
58 Moolaadé Ousmane Sembène Senegal, France, Burkina Faso, Morocco, Tunisia 2004
59 A History of Violence David Cronenberg United States 2005
60 Syndromes and a Century Apichatpong Weerasethakul Thailand 2006
61 Under the Skin Jonathan Glazer United Kingdom, United States, Switzerland 2013
62 Inglourious Basterds Quentin Tarantino United States, Germany 2009
63 The Turin Horse Béla Tarr and Ágnes Hranitzky Hungary 2011
64 The Great Beauty Paolo Sorrentino Italy, France 2013
65 Fish Tank Andrea Arnold United Kingdom 2009
66 Spring, Summer, Fall, Winter… and Spring Kim Ki-duk South Korea, Germany 2003
67 The Hurt Locker Kathryn Bigelow United States 2008
68 The Royal Tenenbaums Wes Anderson United States 2001
69 Carol Todd Haynes United Kingdom, United States 2015
70 Stories We Tell Sarah Polley Canada 2012
71 Tabu Miguel Gomes Portugal, Germany, Brazil, France 2012
72 Only Lovers Left Alive Jim Jarmusch United Kingdom, Germany 2013
73 Before Sunset Richard Linklater United States 2004
74 Spring Breakers Harmony Korine United States 2012
75 Inherent Vice Paul Thomas Anderson United States 2014
76 Dogville Lars von Trier Denmark, United Kingdom, Sweden, France 2003
77 The Diving Bell and the Butterfly Julian Schnabel France, United States 2007
78 The Wolf of Wall Street Martin Scorsese United States 2013
79 Almost Famous Cameron Crowe United States 2000
80 The Return Andrey Zvyagintsev Russia 2003
81 Shame Steve McQueen United Kingdom 2011
82 A Serious Man Joel Coen and Ethan Coen United States 2009
83 A.I. Artificial Intelligence Steven Spielberg United States 2001
84 Her Spike Jonze United States 2013
85 A Prophet Jacques Audiard France, Italy 2009
86 Far from Heaven Todd Haynes United States 2002
87 Amélie Jean-Pierre Jeunet France 2001
88 Spotlight Tom McCarthy United States 2015
89 The Headless Woman Lucrecia Martel Argentina 2008
90 The Pianist Roman Polanski France, Germany, Poland, United Kingdom 2002
91 The Secret in Their Eyes Juan José Campanella Argentina 2009
92 The Assassination of Jesse James by the Coward Robert Ford Andrew Dominik United States 2007
93 Ratatouille Brad Bird United States 2007
94 Let the Right One In Tomas Alfredson Sweden 2008
95 Moonrise Kingdom Wes Anderson United States 2012
96 Finding Nemo Andrew Stanton United States 2003
97 White Material Claire Denis France 2009
98 Ten Abbas Kiarostami Iran 2002
99 The Gleaners and I Agnès Varda France 2000
100 Carlos Olivier Assayas France, Germany 2010
100 Requiem for a Dream Darren Aronofsky United States 2000
100 Toni Erdmann Maren Ade Germany, Austria 2016