

MIT's AI Conference

I recently attended MIT Technology Review’s signature AI conference, EmTech, as well as Future Compute, its “executive classroom for digital leadership,” in Cambridge, Massachusetts.

MIT’s Media Lab made for an inspiring venue!

As someone who pays close attention to AI advancements and their corresponding user experiences, I found that both conferences broadened my understanding in a few areas beyond the core technology and interaction design I’ve been primarily focused on. They included:

  • VC investing

  • Current litigation

  • Ethics, policy, and regulation

  • Hardware developments

  • Cybersecurity

  • And even future workforce-planning implications. (Yes, “Digital FTEs” are likely to be a thing.)

Amir Ghavi (an AI, tech transactions, and IP lawyer) walked us through copyright considerations, as well as helpful historical precedents from past technological advancements for context (e.g., Xerox copy machines, VCRs, and peer-to-peer file sharing via Napster).

In terms of where we’ve been and where we are today, broadly speaking, there was general consensus that AI will continue to improve rapidly and transform the way enterprises and consumers interact with it. The models will keep getting better and better, at least for the foreseeable future.

With a nod to the decades of foundational work that got us here, the recent history of the AI world was generally framed like this:

2022 — AI’s Big Bang, with the public release of ChatGPT

2023 — Lots of experimentation and proofs of concept

2024 — Infrastructure, production, and lots of agents!

This confluence is largely the result of three things: lots and lots of data, increased compute capacity, and much more sophisticated models.

👨 Ahead of GPT-4o’s summary of my conference note-taking below, here are a few macro themes that stood out to me:

  • If properly focused, the technology will ultimately allow humanity to flourish, mainly through human-AI collaboration. The emphasis seemed to be that humans won’t be replaced by AI, but they will be replaced by humans using AI. So: collaboration to get to the next level of intelligence and performance.

  • There was lots of discussion around the black box phenomenon: how can we understand, audit, and adjust what AI is doing? The framing was that governance should aim for “legible” model architecture rather than “explainable.” In other words, we won’t necessarily be able to trace root-cause issues down to the level of a faulty bolt, the way we can with airplane crashes, for example. That’s why we need not only a human in the loop, but an expert in the loop.

  • AI policy is complicated on many fronts. The technological stakes are high and, make no mistake about it, this is very much an AI arms race and a strategic, geopolitical issue. Governments may be more inclined to take a laissez-faire approach in order to preserve local advantages in the initial training of large models.

    Speaking of regulation, we need to consider how it factors in across the entire stack. (Existing safety protocols for physical products and medications across the supply chain were cited as reference points; and if it’s a consumer issue, it falls under the jurisdiction of the FTC here in the United States.) Overall, it’s much easier to build privacy into the model from the start. Considerations include:

    • What are we using in the training data? 

    • Where are models deployed?

    • How will we use the data? 

    • Privacy and trust-impact assessments

    • A model’s end-of-life consent

  • In terms of current AI performance, it’s been interesting to see creative tasks being ‘outsourced’ to AI more readily than analytical ones, which remain prone to high error rates. We now truly have an on-demand knowledge delivery system, and we’ll eventually have individual-level knowledge management as performance improves.

Srinivas Narayanan of OpenAI provided some remarkable GPT-4 benchmark and performance data.

  • The nature of productivity will also change. As a developer, for example, it won’t be about the amount of code you write alongside a copilot, but about which customer and business problems you solve. (Programming itself is also moving from instructional programming toward clearly expressing intent via prompt engineering, with the LLM then doing the work; see the short sketch after this list.) Some great questions were discussed, like: what if we, and companies by extension, all of a sudden become 50% more productive?

  • And perhaps the area that I’m most excited about… this technology has created an entirely new way to interface with computing. If done thoughtfully and intentionally, AI assistants and digital twins will allow us to level up our intelligence and hopefully prosper. Younger generations, like Generation Alpha, will grow up with AI as a natural part of their lives (i.e., AI-native).
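
To make the “intent over instructions” point from the developer bullet concrete, here’s a minimal, hypothetical sketch. The llm_complete() helper is a placeholder for whichever LLM API you actually use (it doesn’t reflect any specific vendor’s SDK); the contrast is simply hand-written logic versus a natural-language description of the desired outcome.

```python
# A minimal sketch of "instructional" vs. "intent-based" programming.
# llm_complete() is a hypothetical placeholder, not a real library call.

def llm_complete(prompt: str) -> str:
    """Placeholder: send the prompt to your LLM provider and return its reply."""
    raise NotImplementedError("Wire this up to the LLM API of your choice.")

# Instructional style: we spell out every processing step ourselves.
def summarize_feedback_instructional(comments: list[str]) -> str:
    themes: dict[str, int] = {}
    for comment in comments:
        for word in comment.lower().split():
            if len(word) > 5:  # crude keyword heuristic
                themes[word] = themes.get(word, 0) + 1
    top = sorted(themes, key=themes.get, reverse=True)[:3]
    return "Top themes: " + ", ".join(top)

# Intent style: we describe the outcome we want and let the model do the work.
def summarize_feedback_intent(comments: list[str]) -> str:
    prompt = (
        "Summarize the three most common themes in this customer feedback, "
        "one sentence each:\n" + "\n".join(f"- {c}" for c in comments)
    )
    return llm_complete(prompt)
```

The takeaway from the session wasn’t that the second version is always better; it’s that the developer’s job shifts toward framing the problem well and judging the result.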

🤖 And here are the themes that emerged from GPT-4o based on my raw conference notes:

Technological Advancements and AI Capabilities

AI models and technologies are rapidly evolving, with continuous updates and new releases. Deep learning, reinforcement learning, and generative models are making significant strides. AI can robustly see and interpret the world, with applications in defense, biotech, self-driving, and more.

Industry Impacts and Applications

AI is transforming industries like Hollywood, biotech, and defense by enhancing capabilities and efficiency. Generative models are being integrated into creative tools for design, video, and audio production. AI is being used for customer support, traffic routing, and various other applications.

Jingwan (Cynthia) Lu shared some progress on Adobe’s Firefly product for creative professionals.

Human-AI Collaboration and Ethics

Emphasis on human-AI collaboration, not replacement, with AI augmenting human capabilities. Ethical considerations, governance, and regulation are crucial for safe and responsible AI deployment. There are ongoing discussions about AI’s role in enhancing human flourishing and productivity.

AI and Regulation

Various countries and regions are developing AI regulations, with the EU leading in safety principles. The importance of legible model architectures and governance to ensure safe AI systems. Legal challenges related to AI, such as copyright and data privacy, are being actively addressed.

AI in Business and Workforce

Businesses are increasingly adopting AI to improve operations, customer service, and productivity. The workforce is evolving with AI, requiring new skills and approaches to integrate AI effectively. AI-powered tools and platforms are enabling businesses to innovate and streamline processes.

Future of AI and Technological Integration

AI is expected to become more integrated into everyday life, with advancements in multi-modal and generative interfaces. The future of AI includes applications in spatial computing, edge computing, and decentralized AI. Companies are focusing on sustainable and cost-effective AI solutions to drive innovation.

Peter Smart from Fantasy demoed some Generative UI concepts that were pretty rad! (I’ll probably do a separate post on that topic alone.)

Challenges and Considerations

The need for high-quality, clean data for effective AI training and deployment. Addressing biases in AI systems and ensuring ethical use of AI in various domains. The importance of cross-functional collaboration and strategic planning in AI implementation.

Influence and Societal Impact

AI’s impact on social media, misinformation, and elections, with efforts to ensure transparency and authenticity. The role of AI in enhancing customer experiences and personalizing interactions. The potential for AI to influence public opinion and societal norms, requiring careful consideration and management.

And... a lovely view of the MIT campus, the Charles River, and Fenway Park in the distance.

I think it’s safe to say that our AI future is still being written. Let’s co-author it together!

Marc