Generative AI is already changing the way we interact with and consume content on the Web, and it's plausible that in time it will systemically change the UI as we know it. As the organization responsible for inventing and/or maintaining many of the technologies that underpin the Web's GUI, what will our role be as the UI changes?
For most people it's easier to ask a question and listen to a response than it is to tap, type, or read content on-screen. Sometimes there will be a graphical or visual element to the response, but where the Web is currently predicated on a GUI, generative AI could well challenge that paradigm.
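As a rough illustration, the Web platform already has primitives for this kind of spoken exchange. Below is a minimal sketch using the Web Speech API (currently a W3C Community Group report rather than a Recommendation); the `answerFor()` helper is a hypothetical stand-in for whatever generative back end would produce the response.

```js
// A minimal sketch of a voice-and-conversation-first interaction using
// the Web Speech API. Assumes a browser that implements SpeechRecognition
// (prefixed in some engines) and speechSynthesis; answerFor() is a
// hypothetical placeholder, not part of any spec.
const Recognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new Recognition();
recognition.lang = 'en-US';

recognition.onresult = (event) => {
  const question = event.results[0][0].transcript;
  const reply = answerFor(question); // hypothetical generative back end
  // Speak the reply rather than rendering it on-screen.
  speechSynthesis.speak(new SpeechSynthesisUtterance(reply));
};

recognition.start(); // begin listening for a spoken question
```

Even in this small sketch the GUI is incidental: the page could render nothing at all and the interaction would still work.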
The technologies we use to create websites, web apps, and webviews are all currently built around a GUI - HTML, CSS, JavaScript, SVG, ARIA, and others - many of which W3C is, or has been, responsible for.
What would the Web "look" like if it weren't predicated on a GUI? What technologies might we need to give it form and structure? What would design mean if the UI were voice- and conversation-first rather than visual-first?
Machine-readable data is nothing new, nor is presenting it for consumption on the Web. But there are signs that generative AI tools and their kin will require us to pivot our thinking about what the UI is, and about the technologies we'll need to design and build UIs that don't depend on a visual aspect even though they may at times incorporate visual elements.
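To make that concrete, here is a sketch of how a non-visual agent might consume machine-readable data that pages can already publish: schema.org terms serialized as JSON-LD (a W3C Recommendation). The URL, the helper name, and the shape of the data are illustrative assumptions, not a prescribed format.

```js
// A sketch of a non-visual agent reading a page's machine-readable data
// from a JSON-LD script element. Illustrative only: assumes the page is
// same-origin (or CORS-enabled) and embeds a schema.org Event.
async function describeEvent(url) {
  const html = await (await fetch(url)).text();
  const doc = new DOMParser().parseFromString(html, 'text/html');
  const node = doc.querySelector('script[type="application/ld+json"]');
  if (!node) return null;
  const data = JSON.parse(node.textContent);
  // An assistant could answer "When is it?" from data.startDate
  // without ever rendering the page visually.
  return `${data.name} starts ${data.startDate}`;
}
```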