Daily Log
OVERVIEW
Native:
- react native
- ARKit & ARCore
- ViroReact
- based on 2 APIs (ARKit for iOS and ARCore for Android)
- open source
- Swift (strictly iOS)
- ARKit
- Unity + Vuforia
Web AR
- PWA?
- 8thWall
- A-frame
- AR js
- EasyAR (OpenCV)
Research
- Native vs Web
- camera access
- notifications
- download threshold
- memory usage
- offline use (local storage/ memory)
- differences per AR framework
- image tracking
- marker issues
- accuracy
- memory usage
Planning:
Week 1: Exploring Native app development (React Native, Swift) + PWA development
Week 2: Start exploring (free) AR frameworks for first demos
Week 3: Exploring paid AR frameworks (with free trial periods, ±14 days?)
Week 4: Summary, final demos, overview of findings
Questions:
- What is expected? How much?
- Meeting once a week?
- Deliverables: small demos + ‘tech summary’
- Personal issues
- Learning React Native
prerequisites:
- XCode (already installed)
- Homebrew (already installed)
- Expo
- https://docs.expo.dev/
- to build native JS/TS projects for all devices
- https://docs.expo.dev/get-started/installation/#requirements
- needed to use the Expo CLI:
- Node.js (LTS, even-numbered versions)
- Git
- Watchman (update brew first)
- what is Watchman? (https://developers.facebook.com/blog/post/2021/03/15/eli5-watchman-watching-changes-build-faster/)
- for incremental builds: it keeps track of which files have changed while you work on app development, so you don’t need to rebuild the whole project when only certain things have changed
- yarn (why not npm?)
- install via npm (https://classic.yarnpkg.com/en/docs/install#mac-stable)
- Visual Studio Code (already installed)
- Extension for Expo
Expo Tutorial
- https://docs.expo.dev/tutorial/introduction/
- creating universal apps
- Need Expo Go app on physical device
- Initialise expo app
- create-expo-app
- Install dependencies
- npx expo install react-dom react-native-web @expo/webpack-config
Getting started with React Native
- https://reactnative.dev/docs/getting-started
- Setting up your app
- https://reactnative.dev/docs/environment-setup
- Expo or React Native CLI
- npx create-expo-app
- connect on your device with the Expo Go app if you’re on the same network
COACH MEETING - 9/1/23, 13:00
- It might be useful to first make a list of which features or properties you want to compare in this project. That way you have a clear view of where you want to go with this project.
- You can look for inspiration in other comparative studies.
- E.g. dev.to, Medium
- React or vanilla JS can be okay for what you want to do. Start with frameworks you are familiar with first.
- Will there be any backend? Maybe just work with a querystring, so that you can have some customisation in your link without needing a real backend (e.g. name, message…)
- Not everything you want to compare needs to be in your end product (e.g. notifications…)
- You could look into:
- how long it took you to set up
- how is the documentation
- how big/helpful is the community
- (actual features can easily be compared via documentation)
- You can turn Notion into a blog (feather.so, simple.ink)
- It seems like a good idea to keep a daily blog (even just for yourself) and maybe make weekly summaries of what you learned and your findings. The final result will probably be a fully written article, you could post this on medium or dev.to.
- Next meeting: Friday 15h30
LOOKING INTO SOFTWARE COMPARISON ARTICLES
https://dev.to/software-comparisons
- First impression: comparing 2 things at a time
- Idea:
- I will probably have different comparison classes. → web vs native AR, comparisons within web AR, comparisons within native AR, general (pricing, documentation, level of difficulty …)
- Need to take into account my own background: I start with knowledge of React, no knowledge of native, ….
Examples:
- web3.js vs ethers.js: a Comparison of Web3 Libraries
This article compares 2 JS libraries with similar functionalities.
- Quantitative comparison:
- release date, GitHub stats (stars, contributors), bundle size
- API differences (methods, separation of roles)
- comparing actual functions that should deliver the same result (amount of code?) (side-by-side examples)
- support with other (open source) libraries/frameworks
- idea: maybe look how easily integrated in React? React wrappers?
- Article about comparing Python GUIs.
- Advantages vs differences
- Learning resources
- code flexibility
- Ease to learn vs ease to master (learning curve)
- dependencies
- React vs Vue vs Angular vs Svelte
Article comparing JS frontend frameworks.
- popularity
- google trends, NPM trends, and the Stackoverflow 2020 survey results
- =/= larger community!
- community/resources
- spectrum chat
- gitter chat
- discord
- stackoverflow
- tutorials (paid/free), recentness of the tutorials
- performance (how do you perform these tests?)
- speed test
- use a set task and compare the speed to execute
- table with actions + speed
- slowdown geometric mean?
- startup test
- memory test
- learning curve
- The way the author handled this factor seemed a bit subjective to me. The author kept estimating ‘probably a day to learn’, without any real ‘evidence’.
- real-world examples
- companies that use the framework
- open-source?
- release date
- who it’s developed by
Ideas for comparison classes:
GENERAL
- How recently/frequently updated?
- Github stats (how big is the community?)
- Bundle size of packages
- Side-by-side comparison of similar functions/features
- ease of implementation
- clarity of code
- length of code
- syntax?
- Support/integration in other frameworks (mostly react/vanilla js?)
- Keeping track of advantages/disadvantages I encounter.
- Differences between the frameworks (without making a judgement about them already, just factual statements of differences)
- Availability/clarity of Learning resources/documentation
- Learning curve: Ease/Difficulty to learn
- Learning dependencies
- Real world examples
- github
- open-source?
- popularity
- google trends, NPM trends, and the Stackoverflow 2020 survey results
- =/= larger community!
- community/resources
- spectrum chat
- gitter chat
- discord
- stackoverflow
- tutorials (paid/free), recentness of the tutorials
- performance
- speed
- startup
- memory
- Debugging?
(my own thoughts:)
- Price
- Device range
- Time I spent on learning it?
- Pre-required knowledge → can it be implemented in any/many frameworks?
- personal opinion of preference?
- download necessities?
SPECIFIC FOR AR
- Accuracy of image tracking
- Limitations/conditions for tracking image look
- Device range
- How far can you go?
- Possibilities/qualities of animations?
- Internet speed dependency
- Possibility/memory usage of local storage/offline use
- what happens in darker rooms?
- link to camera quality?
- Permissions camera access
CHARACTERISTICS
- name
- summary of goal/purpose
- price
- use cases?
- community size
- amount of libraries/tools on top of this framework
Note:
- Mention (my own) prerequisites/previous knowledge to have clear view for learning curve/ difficulty to implement in known frameworks…
- Sometimes it comes down to opinion, you could compare 2 things and note differences, without a ‘clear’ objective view of what is ‘better’.
XR, AR, VR, MR – what’s the difference?
- XR
- = extended reality, refers to all combined real and virtual environments and man-machine interactions
- umbrella term for AR, VR, …
- AR = augmented reality
- virtual information and objects are overlaid on the real world
- “This experience enriches the real world with digital details such as images, text and animations, which are accessed through AR glasses or via screens, tablets and smartphones. Users are not isolated from the real world, but can interact and see what is happening in front of them.”
- example: pokémon go, face filters (snapchat, instagram)
- VR = Virtual reality
- MR = Mixed reality
Examples:
- Ikea AR functionality
- Rolex AR app
→ both are representative forms of XR
What is augmented reality (AR)?
- integration of digital information with the user's environment in real time
- information overlaid on top of real-world environment
- via smartphone or glasses
- term coined by Thomas Caudell in 1990
- Requires hardware components:
- processor
- sensors:
- camera, GPS (for user location), accelerometers, solid-state compasses (device orientation)
- display
- input device
- Can require a lot of processing power for computationally intensive programs (data processing can be offloaded to a different machine)
- tie data to augmented reality markers in the real world
- “When a computing device's AR app or browser plugin receives digital information from a known marker, it begins to execute the marker's code and layer the correct image or images.”
- AR vs VR
- “The biggest difference between AR and VR is that augmented reality uses the existing real-world environment and puts virtual information on top of it, whereas VR completely immerses users in a virtually rendered environment. While VR puts the user in a new, simulated environment, AR places the user in a sort of mixed reality.”
- Examples:
- Retail apps: to show items in user’s environment
- Target app
- Tools and Measurement apps: use AR to measure different 3D points in the user's environment
- Apple Measure app
- Entertainment and games
- Snapchat face filters
- Pokémon go
- Military
- Architecture
- Navigation
- Archaeology
- logistics training
- Google glasses (glasses device for AR)
- ARKit: Apple’s mobile AR development tool set
- Improved Depth API
- e.g. Target, IKEA
- ARCore: Android equivalent
- uses geospatial API with data from Google Earth 3D models, Street View image data from Google Maps
- improved Depth API
What is augmented reality or AR?
- AR = enhanced, interactive version of a real-world environment achieved through digital visual elements, sounds, and other sensory stimuli via holographic technology
- 3 features:
- combination of digital and physical worlds
- real-time interactions
- accurate 3D identification of virtual and real objects
Types of virtual realities
- AR = Augmented reality
- overlay real-world views with digital elements
- limited interaction
- VR = Virtual reality
- immersive experiences, isolating user from real world
- via headset device
- MR = Mixed reality
- combining AR and VR elements
- digital objects can interact with the real world
- XR = Extended reality
- covers all types of technologies that enhance our senses
- includes AR, VR, MR
Types of AR:
Determines how you can display images and information.
- marker-based
- uses image recognition to identify objects already programmed into your AR device or application
- Placing objects in view as points of reference helps the AR device determine the position and orientation of the camera.
- This is done by switching the camera to greyscale and detecting a marker. The marker is then compared with all the other markers in its information bank. Once the device finds a match, it uses that data to mathematically determine the pose and can then place the AR image in the right spot.
- markerless
- more complex
- no point on which your device will focus
- So, device must recognize items as they appear in view.
- Via recognition algorithm, device looks for colors, patterns, features to determine object and will orient itself via time, accelerometer, GPS and compass information. Then use camera to overlay an image within real-world surroundings.
How does AR work?
- AI : most AR solutions need AI to work
- AR software: tools and apps used to access AR.
- Processing: usually using the device’s internal operating system
- Lenses: need lens or image platform to view content or images
- Sensors: AR system needs data about environment to align real and digital world. Camera captures info and sends it through software for processing.
From my research before the winter break, I already had some AR frameworks I wanted to look into.
NATIVE:
- ViroReact → for React native (based on ARKit and ARCore)
- ARKit (iOS)
- ARCore (Android)
- Vuforia (Unity) (iOS + Android)
WEB
- A-frame
- 8th Wall
- AR.js
- Zappar
But, I also asked some people at In The Pocket, where I plan to do my internship next semester, for some recommendations of AR related frameworks that are used by them.
I got the following recommendations:
- Babylon.js (for web based solutions)
- WebXR (https://github.com/immersive-web/webxr)
- Vulkan
- Unity + C#
- 8th wall (web), with a warning for pricing for commercial use
- AR.js (open source) (web)
- Blippar (web)
- Zappar (web)
- Onirix (web)
- AR Foundation (native: iOS + android via Unity)
- ARKit (native iOS)
- ARCore (native Android)
FINAL SELECTION
- 8th wall
- AR js
- Babylon.js + WebXR
- (Zappar)
- ARCore (react native)
- ARKit (swift + react native)
- unity
DEMO GOAL
Receiver:
- scan QR code (with info of message content)
- Use physical card as image marker to show message
- Maybe have animated figure?
Sender:
- Create secret message
- Add text/name
- add image (maybe animated)
- maybe choose from different possible marker images??
- Create QR code to send to someone
Lifelong learning: REACT NATIVE
Since I will be looking into some native AR frameworks as well (and leading up to my internship), I wanted to start learning React Native. I felt like this would be a good starting point for my first native coding project, since I already have some experience with React.
I started exploring React Native (via Expo) yesterday, so I will start by continuing this first tutorial.
https://docs.expo.dev/tutorial/introduction/
- add the right assets
- Installing dependencies (npx expo install react-dom react-native-web @expo/webpack-config)
- to run development mode: npx expo start
- open app on phone via expo go app (scan QR code)
- open on web
- open on iOS simulator (XCode)
BABYLON JS
(SWIFT)
(UNITY)
REACT NATIVE
https://docs.expo.dev/tutorial/build-a-screen/
- Built-in components: [Core Components](https://reactnative.dev/docs/components-and-apis)
Problem while working: create-expo-app creates a git repo in a new folder. But since I already had a git repo to keep track of all my code for my passion project, I needed to add this subfolder as a submodule to my git repo.
- styling: in JS
- via style prop (camelCasing)
- [Styling in React Native](https://reactnative.dev/docs/style)
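A minimal sketch of what styling in JS looks like, assuming a plain Expo component (the component and style names here are my own illustration):

```js
import { StyleSheet, Text, View } from 'react-native';

// styles live in JS objects; property names are camelCased (backgroundColor, not background-color)
const styles = StyleSheet.create({
  container: { flex: 1, backgroundColor: '#25292e', alignItems: 'center', justifyContent: 'center' },
  label: { color: '#fff', fontSize: 18 },
});

export default function Banner() {
  return (
    <View style={styles.container}>
      <Text style={styles.label}>Hello from React Native</Text>
    </View>
  );
}
```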
- add an image: use ‘require’ to add a static image from assets
- divide components into files
- components folder
- Pressable component
- touch event on phone
- different styling for different usage of the same component
- theme prop
- icons from expo: @expo/vector-icons
- use in-line styling → last defined styles are used
Adding functionalities
- pick picture from device
- Expo SDK library expo-image-picker
- use the picked image
- uri (Uniform Resource Identifier) of the image
- use a state variable
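A rough sketch of how these pieces fit together with expo-image-picker (the exact result shape may differ per SDK version; the component is my own minimal version):

```js
import { useState } from 'react';
import { Button, Image, View } from 'react-native';
import * as ImagePicker from 'expo-image-picker';

export default function ImageChooser() {
  // keep the uri of the picked image in a state variable
  const [imageUri, setImageUri] = useState(null);

  const pickImage = async () => {
    const result = await ImagePicker.launchImageLibraryAsync({ quality: 1 });
    if (!result.canceled) {
      setImageUri(result.assets[0].uri);
    }
  };

  return (
    <View>
      <Button title="Pick an image" onPress={pickImage} />
      {imageUri && <Image source={{ uri: imageUri }} style={{ width: 200, height: 200 }} />}
    </View>
  );
}
```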
Creating a modal
- presents content above the rest of your app
- pass transparent as a boolean prop, not transparent=’true’!!
Adding gestures
- react native gesture handler library
Take Screenshots
- react-native-view-shot and expo-media-library libraries
- user permissions
- usePermissions hook → might be important for me with camera access!
- permission status
- requestPermission method
- on first load: permission is null → trigger requestPermission if it is null
- import { captureRef } from 'react-native-view-shot';
- takes screenshot of a View and returns uri of picture
- put reference on the view you want to capture
- returns promise with uri
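A sketch of how these pieces combine (permission hook, captureRef on a view ref, saving via expo-media-library); the sizes and component name are my own illustration:

```js
import { useRef } from 'react';
import { Button, View } from 'react-native';
import * as MediaLibrary from 'expo-media-library';
import { captureRef } from 'react-native-view-shot';

export default function ScreenshotExample() {
  const viewRef = useRef(null);
  // status is null on first load, so trigger requestPermission in that case
  const [status, requestPermission] = MediaLibrary.usePermissions();
  if (status === null) {
    requestPermission();
  }

  const onSave = async () => {
    // captureRef screenshots the referenced View and resolves to the picture's uri
    const uri = await captureRef(viewRef, { height: 440, quality: 1 });
    await MediaLibrary.saveToLibraryAsync(uri);
  };

  return (
    <View>
      <View ref={viewRef} collapsable={false}>{/* content to capture */}</View>
      <Button title="Save screenshot" onPress={onSave} />
    </View>
  );
}
```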
Handle platform differences
- browser can’t take screenshot via react-native-view-shot library
- make exceptions to get same functionality on all platforms
- for web: dom-to-image library
- The Platform module of React Native gives access to info about the platform on which the app is running
- use Platform.OS to check
- Problem: I needed to install some packages to use the web option:
- npx expo install react-native-web@~0.18.9 react-dom@18.1.0 @expo/webpack-config@^0.17.2
- Problem: seems like there are some problems with Modal on web
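A sketch of the Platform.OS check, assuming dom-to-image as the web fallback (the helper name is mine):

```js
import { Platform } from 'react-native';
import domtoimage from 'dom-to-image';
import { captureRef } from 'react-native-view-shot';

// hypothetical helper: screenshot a view on native, fall back to dom-to-image on web
const saveSnapshot = async (viewRef) => {
  if (Platform.OS !== 'web') {
    // iOS/Android: react-native-view-shot works as usual
    return captureRef(viewRef, { quality: 1 });
  }
  // web: react-native-view-shot can't screenshot the browser, so render the DOM node instead
  return domtoimage.toJpeg(viewRef.current, { quality: 0.95 });
};
```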
status bar, splash screen, app icon
- status bar
- expo-status-bar library
- component
- change style of StatusBar component (light)
- splash screen
- loading screen
- app.json file with path defined in splash.image property
- white bar on Android → set background color for splash screen
- change this in app.json file
- prevent splash screen from disappearing very quickly → manually set this via the expo-splash-screen library (only use this for testing!)
```js
import * as SplashScreen from 'expo-splash-screen';

SplashScreen.preventAutoHideAsync();
setTimeout(SplashScreen.hideAsync, 5000);
```
- App icon
- same as splash image → path to icon.png in app.json (icon property)
Extra documentation: https://docs.expo.dev/tutorial/follow-up/
Plan for today: set up the ‘skeleton’ for my basic demo in React Native.
There are 2 parts of the app:
- Sender
- Choose tracking image
- Create visuals on top of tracking image
- name
- message
- (animated) figures
- test out the design (think about session storage; how not to lose what they are working on when the page is refreshed or something)
- Create a QR code with the used data to send to someone
- Receiver
- scan QR code with data
- use camera
Random thought: since a web option is available with React Native, it might also work to look at the web AR in react native for the web app version?
https://necolas.github.io/react-native-web/
I wanted to start making up the basic version (without the AR logic yet) of my demo app. So, to do this a bit more organised, I started with making some basic wireframes.
- background image?
- Navigation between screens
- https://reactnative.dev/docs/navigation
- via library
- npm install @react-navigation/native @react-navigation/native-stack
- https://reactnavigation.org/docs/getting-started/
- install dependencies:
- npx expo install react-native-screens react-native-safe-area-context
- wrap in navigator container
- don’t nest navigator containers, just use 1 at the root of your app
- React native doesn’t have built in global history stack (↔ web urls)
- native stack provides gestures from iOS and Android (↔ web)
- Install native stack navigator library
- npm install @react-navigation/native-stack
- depends on react-native-screens
- createNativeStackNavigator
- returns object with 2 properties (which are components)
- Screen
- Navigator
- Navigator contains Screen elements as its children, to define configuration for routes
- returns object with 2 properties (which are components)
- NavigationContainer
- component that manages navigation tree
- contains navigation state
- render at root of app (App.js)
- must wrap all navigators structure
- Use parameters when going to a route
- Can this be a good option to pass the info through the steps? (see the sketch below)
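A minimal sketch of passing data along as route params (the screen names and fields are placeholders for my sender/receiver flow):

```js
import { Button, Text, View } from 'react-native';
import { NavigationContainer } from '@react-navigation/native';
import { createNativeStackNavigator } from '@react-navigation/native-stack';

const Stack = createNativeStackNavigator();

function SenderScreen({ navigation }) {
  // pass the card data along as route params
  return (
    <Button
      title="Create card"
      onPress={() => navigation.navigate('Receiver', { message: 'hello', image: 'marker-1' })}
    />
  );
}

function ReceiverScreen({ route }) {
  // read the params back on the next screen
  const { message, image } = route.params;
  return (
    <View>
      <Text>{message} ({image})</Text>
    </View>
  );
}

export default function App() {
  return (
    <NavigationContainer>
      <Stack.Navigator>
        <Stack.Screen name="Sender" component={SenderScreen} />
        <Stack.Screen name="Receiver" component={ReceiverScreen} />
      </Stack.Navigator>
    </NavigationContainer>
  );
}
```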
- Horizontal scroll
- https://rossbulat.medium.com/react-native-carousels-with-horizontal-scroll-views-60b0587a670c
- ScrollView
- horizontal=true
I'm struggling a bit to work quicker with React Native. Right now it's going very slowly.
- Make a blog with short weekly overviews and clear description of the goal. (Will check if github wiki is okay).
- Don’t try to do too much, 1 framework for native and 1 for web will probably be enough work already
- Focus on the final article and research aspect.
- Github Student Developer Pack for free heroku credits
- Image selection for tracking image
- text input for message
- tracking the message and image via params that are passed through the navigation
- creating a QR code
- https://aboutreact.com/generation-of-qr-code-in-react-native
- react-native-svg and react-native-qrcode-svg packages
- !! some dependency issues !! use this instead: npm i -S react-native-svg react-native-qrcode-svg
- QRCode component
- https://www.npmjs.com/package/query-string
- to put params in querystring form (see the sketch below)
- what about urls in apps?
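A sketch of how the two packages could combine: encode the card data as a querystring and render it as a QR code (the URL and the shape of my card data are placeholders):

```js
import QRCode from 'react-native-qrcode-svg';
import queryString from 'query-string';

export default function CardQRCode({ message, image }) {
  // put the card data in querystring form so the receiver can parse it back out
  const payload = queryString.stringify({ message, image });
  return <QRCode value={`https://example.com/receive?${payload}`} size={200} />;
}
```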
- Saving result to pdf
- https://pspdfkit.com/guides/react-native/pdf-generation/from-html/
- https://docs.expo.dev/versions/latest/sdk/print/
- seems difficult
- idea: create a View component with the right images (qr code + tracking image) and take a screenshot and convert this to pdf?
Some findings:
- Extra steps are needed for scanning QR code:
- you need a URL that is linked to your app, but this only works when you already have the app installed.
Goals today:
- pdf download
- camera permissions
- QR code scanning
I think the best way will be using the expo-print package and converting HTML to PDF, but with added images for the card.
- https://docs.expo.dev/versions/latest/sdk/print/
- https://stackoverflow.com/questions/68081396/how-to-add-image-to-pdf-file-and-print-in-react-native-expo
- you need to make a string with your html
- for iOS you need to convert the images in your html to base64, can’t handle local assets
- need to take a screenshot of the QR code to put into the html file for the pdf
- https://docs.expo.dev/tutorial/screenshot/
- solution: put it directly in base64 form
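A sketch of the html-to-pdf route with expo-print (using expo-sharing to hand the file off is my own addition; the base64 parameter is assumed to come from the QR screenshot step above):

```js
import * as Print from 'expo-print';
import * as Sharing from 'expo-sharing';

// base64Qr: the QR code as a base64 png (iOS can't handle local assets in the html)
const printCardToPdf = async (base64Qr) => {
  const html = `
    <html>
      <body>
        <img src="data:image/png;base64,${base64Qr}" width="300" />
      </body>
    </html>`;
  // printToFileAsync renders the html to a pdf file and returns its uri
  const { uri } = await Print.printToFileAsync({ html });
  await Sharing.shareAsync(uri, { mimeType: 'application/pdf' });
};
```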
- user permissions?
- Right now there seems to be no problem downloading a pdf file, but don’t I need to ask for permissions?
- https://docs.expo.dev/versions/latest/sdk/media-library/
General notes:
It seems like a lot of the time iOS needs some custom development.
- https://www.npmjs.com/package/react-native-qrcode-scanner
- https://www.toptal.com/react-native/react-native-camera-tutorial
- RNCamera (React Native Camera)
- https://snack.expo.dev/@eseg/simple-qr-code-scanner
- https://javascript.plainenglish.io/qr-code-and-barcode-reader-app-using-react-native-expo-856ce6ce1df4
- https://docs.expo.dev/versions/latest/sdk/camera/
- this seems like the best option for me
- When testing it out, I noticed that the camera permissions are remembered when you give permission once.
- there is a built-in barcode method! (see the sketch below)
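A sketch of the expo-camera setup with the barcode callback (QR codes trigger onBarCodeScanned too); the component is my own minimal version:

```js
import { useState } from 'react';
import { Text } from 'react-native';
import { Camera } from 'expo-camera';

export default function QrScanner() {
  const [permission, requestPermission] = Camera.useCameraPermissions();
  const [data, setData] = useState(null);

  if (!permission) return null; // permissions still loading
  if (!permission.granted) {
    return <Text onPress={requestPermission}>Tap to grant camera access</Text>;
  }

  // the built-in barcode callback fires with the decoded content of the QR code
  return data ? (
    <Text>{data}</Text>
  ) : (
    <Camera style={{ flex: 1 }} onBarCodeScanned={({ data }) => setData(data)} />
  );
}
```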
- using react-native-permissions library
- https://dev.to/gautham495/asking-for-permissions-in-react-native-c87
- https://www.freecodecamp.org/news/how-to-create-a-camera-app-with-expo-and-react-native/
- Problem: when navigating back after using the camera, the camera turns black
- Actual solution: using the isFocused property from react-navigation (sketch below)
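A sketch of the isFocused fix, here via the useIsFocused hook from react-navigation:

```js
import { useIsFocused } from '@react-navigation/native';
import { Camera } from 'expo-camera';

export default function FocusAwareCamera() {
  // only mount the camera while this screen is focused, so it is
  // torn down and re-created when you navigate away and back
  const isFocused = useIsFocused();
  return isFocused ? <Camera style={{ flex: 1 }} /> : null;
}
```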
- I managed to add a pdf with the resulting QR code and chosen image, so that you can print the card. (It works without permissions, are they still needed to save the pdfs? I think there might be some stored permissions from another expo project, so I should revisit this maybe.)
- I used camera permissions to access the camera.
- I was able to scan the created QR code via the Camera object (expo camera library) and show the chosen message and image.
Full demo:
sender-demo-react-native.mp4
receiver-demo-react-native.mp4
- Camera permissions are fairly easy to get in React Native via the expo-camera library.
- The permissions are stored, so if you have given permission once, it is automatically remembered.
- Actually start implementing the AR functionality in my react native demo.
- Will it rely on the same camera functionality? Do you need separate permissions for the camera usage?
- Project overview is good. Maybe just a bit more explanation about marker-based AR. Also for your final article, so that people who don’t know anything about it can still understand.
- If you find a good resource or blog, you can also just link to it, instead of writing it all out yourself.
- Maybe put your repository on public, if you don’t mind your wiki being public.
- For the problem with React Native routing to specific page with querystring: see it as a ‘nice-to-have’, if you have time left.
- Maybe first make the web react demo skeleton, so that the ‘boring part’ is done and you can then just focus on AR itself for the rest of your time.
- Don’t try to plan on using too many AR frameworks. See how much time you have.
- If you can, add some choice for the AR design, not just text.
- Good that you thought about it already. Just see what is possible when you are trying it out.
- Next meeting: Friday. Next week: Wednesday online. Still need to check for final week with the Integration Juries of the first years.
- I seem to be on track.
I want to explore Next js for the web version of my app. Since I have some experience with Nuxt for Vue, it should be quite similar to work with and should normally make things like routing a lot easier.
- https://nextjs.org/learn/basics/create-nextjs-app
- Some properties
- Framework to build React applications.
- page-based intuitive routing
- pre-rendering (SSR, SSG)
- Discord community
- Setup
- Need Node.js 10.13 or later
- create-next-app
- Some properties
- https://www.freecodecamp.org/news/nextjs-tutorial/
- create-next-app app-name
- Sets up a new project with this structure:
- Pages and styles folder
- Pages and routing
- just make a new file in the pages folder
- no need for the react router library anymore
- dynamic pages
- wrap the file name in brackets
- Example: for filename [slug].js
- useRouter hook to access info about app location or history
- e.g. get query parameters
- Link component from ‘next/link’
- just use the ‘href’ property to link to pages
- you can add a query by passing an object to the href prop
- Push to routes via the .push method of the useRouter hook
- SEO
- use Head component from next/head
- to add meta data
- API
- api folder for backend
- e.g. for data fetching
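A sketch of a page reading query parameters via the useRouter hook (the page name and fields are placeholders for my receiver page):

```js
// pages/receive.js
import { useRouter } from 'next/router';

export default function Receive() {
  const router = useRouter();
  // router.query is only populated once the router is ready on the client
  if (!router.isReady) return null;
  const { message, image } = router.query;
  return <p>{message} ({image})</p>;
}
```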
Starting to create the demo skeleton in Next js
- babel error
- Without changing anything in the default created Next project, I get an error about babel.
- Solution:
- https://stackoverflow.com/questions/68163385/parsing-error-cannot-find-module-next-babel
- extend the eslint config with ["next/babel","next/core-web-vitals"]
- This removed the error
Adding same page structure as React Native version
-
working with css modules
- for local styling in clear file structure
-
Horizontal scroll for image picker:
- https://www.npmjs.com/package/react-horizontal-scrolling-menu
- npm i react-horizontal-scrolling-menu
-
use MUI icons
npm install @emotion/react npm install @emotion/styled
-
To pass image through routing parameter: use public folder for image assets and just use the path to the image
- be careful with router!! you need to wait until router object is ready, before you can access the query that you passed through.
- there seems to be some problem with the loading time of the query parameters
- maybe it’s better to work with global states, and only put everything in the querystring at the end
Creating QR code
- https://www.npmjs.com/package/react-qr-code
- https://www.npmjs.com/package/next-qrcode
- Need to get the current url
- doesn’t work via window.location
- https://stackoverflow.com/questions/58022046/get-url-pathname-in-nextjs
- get the path via the useRouter hook
- hydration problem:
- what is rendered on the server and the client does not match:
- https://stackoverflow.com/questions/55271855/react-material-ui-ssr-warning-prop-d-did-not-match-server-m-0-0-h-24-v-2
- I’m guessing this is because I get the url on the client side to pass to the QRCode, so this will differ on the initial server side render
- solution:
- add a useEffect (on mount) that changes a state ‘loaded’ to true, when the page is first rendered (adding empty array as dependency in useEffect)
- use conditional rendering of the QR code based on whether the page has loaded for the first time
- This solved the hydration issues
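A sketch of that fix (the component name is mine; react-qr-code as linked above):

```js
import { useEffect, useState } from 'react';
import QRCode from 'react-qr-code';

export default function CardQr() {
  // false during SSR and the first client render, so server and client markup match
  const [loaded, setLoaded] = useState(false);
  useEffect(() => setLoaded(true), []); // empty dependency array: runs once on mount

  if (!loaded) return null;
  // window is safe to use here, since this only renders after mount
  return <QRCode value={window.location.href} />;
}
```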
pdf download
- https://medium.com/knowsi/exporting-pdfs-with-next-js-714735f0a473
- https://udithajanadara.medium.com/export-react-component-as-a-pdf-5afba8ba02ee
- How should I ask for permission of the user to download a file?
- Sharing info via page routing parameters is a bit more convoluted in Next Js vs React Native (because of the server side rendering that needs to wait for the client side to be loaded, before being able to access the query).
- For web, we will be able to easily create a QR code that goes immediately to our web app on the right page and reads the info hidden in the querystring. (So, technically, a custom QR scanner in our app is not necessary if the user already has a built in QR scanner app. However, for a complete user flow, we will still add our own QR code scanner function on the receiver part of our app for the web version.).
- In React Native, ‘require(path)’ for an image source, was easily translated and transferred via the query params. For Next JS, this did not easily transfer via querystring, so instead, I opted to use my public folder to store my assets, to have static defined paths to my images, and just send the path string via the query to share the info between the pages.
- https://blog.logrocket.com/generating-pdfs-react/
- tells us how to create a pdf with the react-pdf library
- To download:
- https://stackoverflow.com/questions/51623836/how-do-i-download-a-pdf-file-onclick-with-react-pdf
- https://react-pdf.org/advanced#on-the-fly-rendering
- there’s a built in pdf download link
- browser support:
- https://www.npmjs.com/package/react-pdf
- Need v5 for older browsers
- v4 for internet explorer 11
- problem:
- put Qrcode in the pdf
- package to create base64 version of QR code
- https://www.npmjs.com/package/qrcode-base64
- problem: it says type ‘gif’
- Final solution:
- https://www.npmjs.com/package/qrcode
- qrcode package
- can convert url to base64 png version of QR code
```js
QRCode.toDataURL(
  'the url that needs to be converted to QR',
  { errorCorrectionLevel: 'H' },
  function (err, url) {
    console.log(url);
  }
);
```
- I can use this base64 result in both the pdf version and in my app itself.
- What about download permissions?
- Seems like I do not need to use any permissions to download a file in the browser
- The permissions are handled by the browser
- Example in Safari: if permission in the browser is given once, it is remembered.
- https://stackoverflow.com/questions/67062336/react-not-native-ask-camera-permission
- probably via navigator.getUserMedia
- Qr code reader package
- https://www.npmjs.com/package/react-qr-reader
- need extra package:
- npm i webrtc-adapter
- problem: this is still not working
- https://www.npmjs.com/package/react-camera-pro
- works with iOS, Android and webcam
- browser compatibility:
- problem:
- need extra package styled-components
- doesn’t detect camera device
- https://www.npmjs.com/package/react-qr-scanner
- with camera: only works via localhost or https!
- works, but still a problem:
- If you refresh the Receiver page (where the QR code scanner is used), we get this error. But only if you refresh, not if you go through the link in the main menu…
- Uncaught ReferenceError: document is not defined
- https://stackdiary.com/guides/referenceerror-document-is-not-defined/
- it’s probably again a hydration issue. The camera uses the document property, but this is a client-side property, so you can’t access it when the page is first rendered on the server.
- I will try with vercel, since next js makes this very easy. (Vercel automatically uses the ‘next build’ process when it detects a next js project)
- Question:
- I can link a github repo, but what should be deployed is in a subfolder of this repo… Can I do this?
- https://vercel.com/blog/advanced-project-settings
- It was actually very easy to do this, I just had to select a subfolder in the vercel set-up
- Only thing I needed to look at was my npm install command:
- I needed to override this with ‘npm install --legacy-peer-deps’ to avoid dependency tree issues with my packages
- The result:
- Chrome:
- browser automatically asks for camera permission
- If I refresh, it doesn’t need to ask again
- Firefox
- asks automatically:
- gives option to choose to remember this choice or not
- If you don’t remember, it still remembers for a while, so on instant refresh, you don’t need to ask permission again
- testing in incognito mode:
- If you close your window and open in a new tab, it asks for permission again
- In normal and incognito:
- if you close window and open again, it asks again for permission
- Maybe I need to add a message when permission is denied
- If you open in 2 tabs at the same time, you need to give permission twice.
- Safari
- Asks for permission automatically
- Ask immediately again on instant refresh
- My Android phone
- The built-in Facebook browser: asks for permission to use the camera, but can’t display the actual camera footage
- Chrome:
- asks for permission
- remembers on refresh
- can show camera, but shows the front-facing camera
- doesn’t show certain text:
- I think this is because of the fact that my phone is in dark mode and I need to explicitly say what colour my text should be.
- SOLVED: this was exactly the problem
- iOS
- chrome asks for permission automatically
- but every refresh it asks again
- same issue with front facing camera
- apparently react-qr-scanner has a bug when using the rear camera
- I need to use the package modern-react-qr-scanner instead
- The rear camera issue is now fixed, but there is again an issue with SSR:
- https://github.com/react-qr-reader/react-qr-reader/issues/91
- the package uses a ‘blob’, which is a web-only feature, so during the server side rendering, when the package is loaded, it gives an error
- The solution that is given:
- exclude the component from SSR
- https://blog.bitsrc.io/using-non-ssr-friendly-components-with-next-js-916f38e8992c
- idea: separate the QR scanner into its own component & make sure that component is not included in SSR (see the sketch below)
- This seems to work
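A sketch of excluding the component from SSR with next/dynamic (the import path is a placeholder):

```js
import dynamic from 'next/dynamic';

// load the scanner only in the browser; its 'blob' usage breaks during server side rendering
const QrScanner = dynamic(() => import('../components/QrScanner'), { ssr: false });

export default function Receiver() {
  return <QrScanner />;
}
```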
- Qr code generation/download seems to happen faster for native app
- No download permissions/file access needed to download final pdf
- On web: the camera permissions depend on your browser AND OS
- chrome: will remember once you allow, even after refresh, but not when you close the window
- Firefox: gives the option to remember the permission, otherwise it will remember on refresh, not on window close
- Safari: asks again on every refresh
- chrome on iOS: asks on every refresh
- chrome on Android: same as on desktop
- Chrome pdf download on iOS
- since each pdf gets the same name right now, it overwrites the downloaded file. So maybe, it’s better to add some sort of time stamp to it, so that you could create more than 1 card and download them all.
https://www.aircards.co/blog/markerless-vs-marker-based-ar-with-examples
- Augmented Reality needs a trigger. There are several options for this trigger:
- Marker-based
- Uses designated marker to activate AR experience (e.g. QR code, logo, image)
- Shapes need to be distinctive/recognisable for the camera to identify it in the environment.
- AR experience is tied to the marker: displays on top of it and moves along with it.
- Markerless
- doesn’t use a marker
- scans the real environment and places digital elements on recognisable feature
- e.g. flat surface
- not tied to a marker, but placement is based on geometry of objects.
- e.g. pokémon go, product placement apps
- Location-based
- = GPS-based = Geo-based
- depends on your physical location
- used in travel/tourist industries
- e.g. directional guidance, art installations in a city
- MindAR
https://github.com/hiukim/mind-ar-js
MindAR:
- Mentioned on AR js github as a new Open source AR library for the web, specifically for image tracking and face tracking.
| Features | MindAR |
|---|---|
| Open source | yes |
| Price | Free |
| First release | 4/10/2021 |
| Last release | 16/12/2022 |
| Github stars | 1.4k |
| Documentation | https://hiukim.github.io/mind-ar-js-doc/. The documentation doesn’t seem all that big yet, but since this open source project is run by people who really seem to believe in making AR accessible for free on the web, the documentation that is there is very well structured and clear to read. Not a lot of examples as of now. Udemy course available: https://www.udemy.com/course/introduction-to-web-ar-development/?referralCode=D2565F4CA6D767F30D61 (€34,99). No-code option for building face filters (MindAR Studio: https://studio.mindar.org/). Platform for creating and publishing image tracking AR (Pictarize: https://pictarize.com/). Info about picking good tracking images (https://www.mindar.org/how-to-choose-a-good-target-image-for-tracking-in-ar-part-1/). Tool for compiling your image beforehand, to reduce loading time: https://hiukim.github.io/mind-ar-js-doc/tools/compile/ |
| Community | Fairly limited. Stackoverflow (https://stackoverflow.com/questions/tagged/mindar?tab=Newest): 6 questions, 2/6 answered, max votes: 1, max views: 518 |
| Dependency on other frameworks | AFRAME |
| Integration in other software | three.js, AFRAME, plain html, React: https://github.com/hiukim/mind-ar-js-react |
| AR features | Image Tracking, Face Tracking |
| Package size | Image Tracking and Face Tracking are independently built, to minimise package size. three.js and AFRAME support are also built independently. |
| Download options | HTML script, npm (depends on three.js or AFRAME choice) |
| Language | Javascript |
| Underlying performance | WebGL (GPU) |
| Ease of use | No-code options; choice between AFRAME or three.js; pure html is possible; based on AFRAME, but no knowledge of AFRAME is needed to use it. React: does not work |
| Target Images | Pre-compile your images to reduce loading time; extract features; possible to use multiple target images |
| Pre-required knowledge | Very limited; in some cases some basic html knowledge is enough. Aframe knowledge is not necessary. |
NOTE: The Pictarize platform gives a very easy way to achieve a no code result of image tracking!
- I tried this out, but it doesn’t seem to show the content on my chosen tracking image yet. Might just need to find a better way of placing the content?
- You can use it for free, but with a water mark and the link you get for your example is not permanent.
Simple code example in plain html:
```html
<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/mind-ar@1.2.0/dist/mindar-image-aframe.prod.js"></script>
  </head>
  <body>
    <a-scene mindar-image="imageTargetSrc: https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/card-example/card.mind;" color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
      <a-assets>
        <img id="card" src="https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/card-example/card.png" />
        <a-asset-item id="avatarModel" src="https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/card-example/softmind/scene.gltf"></a-asset-item>
      </a-assets>
      <a-camera position="0 0 0" look-controls="enabled: false"></a-camera>
      <a-entity mindar-image-target="targetIndex: 0">
        <a-plane src="#card" position="0 0 0" height="0.552" width="1" rotation="0 0 0"></a-plane>
        <a-gltf-model rotation="0 0 0" position="0 0 0.1" scale="0.005 0.005 0.005" src="#avatarModel" animation="property: position; to: 0 0.1 0.1; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"></a-gltf-model>
      </a-entity>
    </a-scene>
  </body>
</html>
```
IDEA: create multiple target images, with bad to good features for image tracking. See how well the image tracking goes in the different frameworks.
Different print quality?
Idea: Clickable download/screenshot button on the AR image.
I wanted to see how the choice of tracking image will affect the quality of the image tracking. So, using the image compiler from mindAR ([compiler](https://hiukim.github.io/mind-ar-js-doc/tools/compile/)), I wanted to see how the features would be analysed from different versions of an image:
Seems like the color doesn’t really matter all that much. But, the border around the image makes a big difference. Without it, it doesn’t have any bounding markers.
- Download npm packages: npm i mind-ar --save, npm i aframe --save
- Step 1: Static example for React
- https://github.com/hiukim/mind-ar-js-react
- Error: ‘self is not defined’
- Looking for a solution:
- https://github.com/Splidejs/splide/issues/252
- Seems like the issue is that ‘self’ is not available in Node, but there is a node-self package
- https://www.npmjs.com/package/node-self?activeTab=readme
- npm i node-self
- New error:
- document is not defined.
- Does this mean it is a hydration issue again?
- Let’s try the dynamic loading again
- The example does not seem up to date with the current version of the library
- I can’t seem to solve the issue myself and the documentation/community is lacking as of now.
- Maybe I will try a pure html example later on.
- Final try: uninstall the npm mind-ar package and install the version that was used in the example (version 1.0.0)
- It gives the same issue, so this is no help
- I really tried to look into the structure of the elements that are given, but it seems there are a lot of behind-the-scenes issues that I have no control over. So, for now, I will not look into this library any further.
- AR.js
Since this is one of the free frameworks I wanted to try, I will start with this one, as I won’t have to worry about any trial period running out of time.
https://github.com/AR-js-org/AR.js
AR js:
- lightweight library for AR on the web
- Image tracking, Location based AR & Marker tracking
| Features | AR.js |
|---|---|
| Open source | yes |
| Price | Free |
| First release | |
| Last release | 29/12/2022 |
| Github stars | 4.3k on their new GitHub project (https://github.com/AR-js-org/AR.js), 15.7k on their old GitHub project (https://github.com/jeromeetienne/AR.js) |
| Documentation | Official documentation: https://ar-js-org.github.io/AR.js-Docs/; GitHub: https://github.com/AR-js-org/AR.js |
| Community | Stackoverflow (https://stackoverflow.com/search?q=AR.js): 500 questions, max 78 votes, max 39k views. Codesandbox examples (specifically react-three-arjs): https://codesandbox.io/examples/package/@artcom/react-three-arjs |
| Dependency on other frameworks | AFRAME, three.js |
| Integration in other software | pure html, React, Vue, Next. React: wrapper for React based on react-three-fiber: https://github.com/artcom/react-three-arjs |
| AR features | Image Tracking, Face Tracking, Location based, Marker tracking |
| Package size | Different build per option (three.js or AFRAME + type of AR tracking) |
| Download options | npm, cdn |
| Language | Javascript |
| Underlying performance | WebGL, WebRTC |
| Ease of use | |
| Target images | https://github.com/Carnaux/NFT-Marker-Creator/wiki/Creating-good-markers. Visual complexity (more features to recognise gives a better result); resolution; physical markers: distance to camera, well-printed colours on opaque paper; on screens: consider the luminosity of the screen, the resolution of the camera and the luminosity of the environment |
| Pre-required knowledge | Aframe, three.js |

REQUIREMENTS/RESTRICTIONS OF AR.JS (https://ar-js-org.github.io/AR.js-Docs/)
Some requirements and known restrictions are listed below:
- It works on every phone with webgl and webrtc.
- Marker based tracking is very lightweight, while Image Tracking is more CPU consuming
- Location-based AR will not work correctly on Firefox, due to the inability to obtain absolute device orientation (compass bearing)
- On device with multi-cameras, Chrome may have problems on detecting the right one. Please use Firefox if you find that AR.js opens on the wrong camera. There is an open issue for this.
- To work with Location Based feature, your phone needs to have GPS sensors
- Please read carefully any suggestions that AR.js pops up (as alerts) for Location Based on iOS, as iOS requires user actions to activate geoposition. Access to the phone camera or to GPS sensors, due to major browser restrictions, can only be done on https websites.
Experiment with AR.js
- npm install: npm install @ar-js-org/ar.js
- wrapper for React:
- https://github.com/artcom/react-three-arjs
- npm i @artcom/react-three-arjs
(dependency warnings again)
- problem: module not found
- Extra packages needed:
- https://codesandbox.io/s/jolly-hodgkin-ssu33?file=/package.json:407-412
- @ar-js-org/ar.js
- @react-three/fiber
- core-js
- three
- Again the same error as before: needs to access client-only properties → use a dynamic wrapper
- New error: again seems to be an error behind the scenes in the library
- Maybe a camera_para.dat file missing, as mentioned on the github
- I downloaded the files from the sandbox example
- No error anymore, but it doesn’t show anything at the moment…
- UPDATE: the codesandbox worked on my phone, scanning the screen, but not with my laptop using the webcam to scan the image on my phone.
- So, maybe the problem was with using my phone to show the tracking image.
- I will test by deploying to vercel, whether it works with my phone.
- Problem with z-index of video:
- the video with the camera element doesn’t show on my screen, since there is an automatic z-index of -2, which puts the video view behind my body, which has a background color
FIRST BREAKTHROUGH!!
- the react AR js example works on my Android phone on a hosted version on vercel!
- This uses the built in example with the Hiro Marker
NEXT:
- test with printed out version
- test my own tracking image → how to create the pattern?
Making my own image markers for AR.js?:
- https://carnaux.github.io/NFT-Marker-Creator/#/
- results in .fset/.iset files → fit for Aframe on the web, not for my React library wrapper
- https://jeromeetienne.github.io/AR.js/three.js/examples/marker-training/examples/generator.html
- → needs a thick border around it
- Attempt 1: FAILED
- Attempt 2:
- Attempt 3:
None of these worked by just replacing the .patt file. So maybe something else is also needed? Or are the images just not good enough?
MARKER BASED vs NFT (Natural Feature tracking)
- I noticed when trying to create my own markers that there is a difference between marker-based and NFT image tracking
- Marker based is much more restricted
- you need a very thick border, very limited
- NFT should allow any type of picture
- but I haven’t found a way to include it via the react-three-arjs library
CONCLUSION OF TODAY:
This hasn’t been the most encouraging day to say the least. I have not really achieved anything I imagined.
- The first library (mindAR) I tried, did not have any working result in the end.
- Ar js, via the react-three-arjs library was finally working, after a lot of trial and error. But only with the example given.
- I tried making my own markers (.patt files) to replace in the react-three-arjs example, but none of them worked. I also realised that there is a difference between marker-based and NFT image tracking, and so far I only had an example for marker-based, which is much more limiting in the images you can choose (for example, it needs a big border…).
- There were a lot of errors behind the scenes in the libraries itself.
- Doesn’t work with my webcam + marker on phone.
- Works on phone with marker on laptop.
- React support seems very limited.
PLAN:
- Try out the libraries directly in React, without the marker.
- Try out plain html examples, without react.
- Look into A-frame, as all of the examples so far rely on it.
Let’s start fresh today.
What I want to do today:
- Try an example with Aframe in React
- Try the examples of yesterday in plain html
- Try to use the nft instead of the marker images
Yesterday, I tried out the AR.js library in my React project. This didn’t go as planned, and I only found an example with marker images, which is very limited in its design options (thick border, …).
So, I want to see if it is possible to go the NFT (Natural Feature Tracking) route. (https://ar-js-org.github.io/AR.js-Docs/image-tracking/) (https://github.com/Carnaux/NFT-Marker-Creator/wiki/Creating-good-markers)
To try this, I will see if I can use the AR.js with Aframe in React.
- Creating an nft image
- Test 1 of nft in react:
- Packages
- @ar-js-org/ar.js
- aframe
- Error
- ‘Assertion failed: console.assert’
- https://stackoverflow.com/questions/74465462/ar-js-es6-ar-js-org-ar-js-npm-gives-assertion-failed-console-assert-error
- I have tried many ways to use this example https://github.com/FollowTheDarkside/arjs-image-tracking-sample, but nothing worked for me.
- The built-in Hiro example works. However, the image tracking is very inconsistent. It seems that if it loses the image, you need to ‘retrigger’ finding it again, for example by placing something in front of your camera and then removing it again to see the marker image. So, it feels like it needs an extra push to register the marker again.
- The example with the given Hiro marker worked, but I still haven’t gotten my own custom marker to work with the example.
- Simply replacing the patternUrl with my own url is not enough
- I found this example within an example of a React wrapper for Aframe
- https://codesandbox.io/s/react-ar-js-forked-q5xd3x?file=/src/App.js:336-487
```html
<a-marker-camera preset="custom" type="pattern" url="patterns/mypattern.patt"></a-marker-camera>
```
- Replacing ARMarker with this gives an error:
- let’s try the suggested solution of ‘extending’ components
Note: I found it very annoying that I could only test with my phone if I deployed my code, since WebRTC needs https to run. So, I looked for a way to run my localhost on https, so I can access it via my IP address on my phone over https.
I followed the steps from [this medium article](https://medium.com/@greg.farrow1/nextjs-https-for-a-local-dev-server-98bb441eabd7) and it worked!
- Since so many open source AR libraries seem to be based on Aframe, I feel like I need to get more familiar with some of the basics first before I can move further in this project.
Coach meeting
- Maybe try basic vanilla js examples
- Try AR.js without the wrapper
- Maybe Aframe instead of AR.js
- pure html
- A-frame
- mindar in html
- AR js html
- Aframe (custom markers)
- Aframe vs AR js
- look for basic ar.js tutorial on youtube
https://hiukim.github.io/mind-ar-js-doc/quick-start/overview
- The basic html example works!
- Using a different gltf model works as well!
- Trying to pick the model via the query string!
- First problem: I want to select the model tag via ‘getElementById’, but it isn’t loaded yet when ‘getElementById’ gets called, so we get null.
- Fix: DOMContentLoaded event on window!
- Final:
- Preload all your model assets via the <a-assets> tag from the mindAR library
- Pick the used model src (linked to one of the assets via id) based on the query string (see the sketch below)
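A sketch of that final setup (the query parameter and asset ids are placeholders for my own naming):

```js
// runs after the DOM is parsed, so getElementById can find the model tag
window.addEventListener('DOMContentLoaded', () => {
  const params = new URLSearchParams(window.location.search);
  const model = params.get('model') ?? '0';
  // all models are preloaded in <a-assets>; we only swap which asset is referenced
  document.getElementById('avatarModel').setAttribute('src', `#model-${model}`);
});
```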
Deployment test:
- I tried deploying the result on filezilla, but when I went to this site, I got the weird error ‘failed to launch’, saying my device isn’t compatible, and to use chrome for Android, but that is exactly what I was using to check it.
- In the console I read : ‘getGamepad will now require Secure Context’
- Solution:
- apparently it automatically went to http instead of https, over https it works fine!
CONCLUSION: FIRST WORKING EXAMPLE!!!!
- Using own tracking images
- The documentation of mindAR provides info about pre-compiling your tracking images, to diminish loading time.
- https://hiukim.github.io/mind-ar-js-doc/quick-start/compile
- They even have their own tool to do this: https://hiukim.github.io/mind-ar-js-doc/tools/compile/
- I tried out 2 images:
The accuracy for both is quite good.
- Color vs black/white doesn’t matter
- shape doesn’t matter
- Easy change to custom tracking image:
```html
<a-scene mindar-image="imageTargetSrc: assets/custom/lego.mind" color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
```
Only thing I needed to change was the imageTargetSrc to the path to my own compiled images.
- Selecting the tracking images via the query
- Works again! Very similar to how we did the 3D model selection via query.
- Tracking multiple images at once? ([https://hiukim.github.io/mind-ar-js-doc/examples/multi-tracks](https://hiukim.github.io/mind-ar-js-doc/examples/multi-tracks)) ([https://hiukim.github.io/mind-ar-js-doc/examples/multi-targets](https://hiukim.github.io/mind-ar-js-doc/examples/multi-targets))
- I think the previous image selection can be done as well by using the targetIndex. If you compile multiple images at once, it should all be in the same file?
- This works!
- If you compile multiple images at once via the image compiler of MindAR, you can switch between which target image is chosen by using the targetIndex property.
```html
<a-entity id="target" mindar-image-target="targetIndex: 0">
<a-plane src="#card" position="0 0 0" height="0.552" width="1" rotation="0 0 0"></a-plane>
<a-gltf-model id="avatarModel" rotation="0 0 0 " position="0 0 0.1" scale="0.005 0.005 0.005" src="#avatarModel" animation="property: position; to: 0 0.1 0.1; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate">
</a-entity>
```
- It is also possible to use multiple images at once and show all effects at the same time, via the ‘maxTrack’ property
```html
<a-scene mindar-image="imageTargetSrc: https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/band-example/band.mind; maxTrack: 2" color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
```
Adding own text message overlay in 3D.
- Since I allow the user to customise which text message is shown on the mystery mail, I want to see if there is an easy way to add a textual element based on their input.
- I can’t find an example of a text tag on their documentation site directly, but I know it should be possible, since they use it in their no-code Pictarize example.
- Found an example:
- https://hiukim.github.io/mind-ar-js-doc/samples/advanced.html
- They use an <a-text> tag:
```html
<a-text value="Portfolio" color="black" align="center" width="2" position="0 0.4 0" text=""></a-text>
```
- value is just the text you want to display?
- I think this is an A-frame element, so let’s look for some more info there.
-
https://github.com/aframevr/aframe/blob/master/docs/primitives/a-text.md
-
A simple <a-text> tag works!
-
Can we add a background?
- On the previous source, I found an example that uses an <a-entity> with a text component linked to a geometry (the colored plane background) and makes sure the geometry fits the text size, so a longer text gives a bigger geometry and the background always fits the text.
<a-entity id="text" geometry="primitive: plane; height: auto; width: auto" material="color: blue" position="0.4 .4 0.4" text="width: 1; value: Choose your message; wrapCount: 20; align: center"></a-entity>
-
https://www.npmjs.com/package/aframe-text-geometry-component
- Doesn't work; it might only work with an older version of A-frame.
-
Adding some UI
- Tracking image preview to guide the user
- You can customise the UI of the scanning phase
- How?
-
You basically define an element with an id and reference that id in the uiScanning property:
```html
<img id="example-image" class="hidden" src="https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/card-example/card.png" />
<a-scene id="tracking-image" mindar-image="imageTargetSrc: assets/custom/multi.mind; uiScanning: #example-image" color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
```
-
NOTE: You do need to add a class 'hidden' with display set to none; mindAR then knows what to do with the scanning UI (it hides the preview when it recognises the tracking image and puts your 3D assets on top of it).
```html
<style>
  #example-image { width: 70vw; height: auto; opacity: .5; position: absolute; left: 50vw; top: 50vh; transform: translate(-50%, -50%); }
  #example-image.hidden { display: none; }
</style>
```
-
- Adjusting the overlay images
-
I made sure the image that gets overlaid in AR is the image you had chosen as tracking image and that it keeps its own aspect ratio.
```js
// using multitracking to select the target through the index
const $target = document.getElementById('target');
$target.setAttribute('mindar-image-target', `targetIndex: ${trackingImage}`);

const $exampleImage = document.getElementById('example-image');
$exampleImage.setAttribute('src', `assets/img/${trackingImages[trackingImage]}.png`);

const $image = document.getElementById(`image-${trackingImage}`);
const ratio = $image.height / $image.width;

const $overlayImage = document.getElementById('overlay-image');
$overlayImage.setAttribute('src', `#image-${trackingImage}`);
$overlayImage.setAttribute('height', `${ratio}`);
```
-
(Note: I needed to fix some image loading issues, by adding an extra window load event)
-
- Result
https://ava-mc.be/mindar-example/?model=1&image=0&message=this+is+my+message
mindar-example.mp4
Things to keep an eye out for
- Which type of images work best?
- How much battery is used.
Since I have a working html example again, it should be possible to put this in react. So, let’s try this again.
- create-react-app
- To try this step-by-step again, I will start by putting my working example in a new create-react-app project, to not have any issues with SSR yet.
-
https://www.npmjs.com/package/mind-ar
- npm install mind-ar
- trying the first example again
- I get a lot of warnings, but no errors
- The camera is strangely cut-off
- no AR effect happens when scanning the image, but also no errors
- Test: installing the same version of a-frame as my working example (1.3.0)
- Result: no difference
- I really can’t seem to figure it out.
- There is an example of mindAR in View, maybe I can find some help there.
- Starting over:
- Found a working sandbox example!
- https://codesandbox.io/s/mind-ar-react-tt9wgq?file=/src/App.js
- Download folder and just try to make it work on my computer.
- It does, so why? Why didn’t my example work…
- Copying the files to my own project breaks it again?
- Is it because of the package versions?
- I broke my project trying to go back to older packages
- Just changing version of aframe and mind-ar was not enough, so it must be something else. Maybe react version or node version?
SOME FINDINGS SO FAR:
- mindAR is a great library for simple plain html apps
- including it in react is really not easy. There is 1 example of it on the documentation site of mindAR, but this is outdated and does not work with the current version of react/aframe/mindAR.
The only working example I found on codesandbox (https://codesandbox.io/s/mind-ar-react-tt9wgq) was made with older versions of the packages. And before I just start working from the existing example in the set up project, I want to try a last time to set up my own react project, with an old react version, and then incorporate the mindAR example I found.
https://stackoverflow.com/questions/46566830/how-to-use-create-react-app-with-an-older-react-version
I found what might have been the issue with the earlier project I tried to set up myself with older versions of React and the other packages: importing from react-dom instead of react-dom/client.
- No errors so far for the normal react project with older version of react
- Now I want to use the older versions of aframe and mindAR to test the example from before.
- aframe@1.2.0
- mindar@1.0.0
- No errors, but doesn’t show the AR effect yet. Some issue with THREE js:
- THREE.WebGLRenderer: EXT_texture_filter_anisotropic extension not supported.
- But this error only shows on mobile.
- Try again with exactly the same dependencies as in the package.json file of the example
- No difference
- Final try: copying package.json and package-lock.json file directly from the codesandbox example
- Still no AR
- Adding the yarn.lock file as well (even though I use npm?) and reinstalling node_modules
- Still nothing!
- Trying to customise the exact code-sandbox example
- It works, but I do not know why
- What is the difference?
- Time to test out the working html example I had before!
- IT WORKS!
- I added the querystring logic to pick your message, model and target image, via react-router-dom
- I added my assets in the public folder
- I made a component where I could customise the model, text and target image of my AR message, by just passing it through the properties of my AR component.
- Made sure the ratio of the overlay image is adjusted, based on the image you choose.
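Roughly, the glue between the query string and the component looks like this (a sketch; the component and parameter names are simplified from my project):
```jsx
// a minimal sketch, assuming an <ARMessage> component that wraps the mindAR scene
import { useSearchParams } from 'react-router-dom';
import ARMessage from './components/ARMessage';

const ARPage = () => {
  const [searchParams] = useSearchParams();
  return (
    <ARMessage
      model={searchParams.get('model') ?? '0'}
      image={searchParams.get('image') ?? '0'}
      message={searchParams.get('message') ?? 'Choose your message'}
    />
  );
};

export default ARPage;
```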
Just like I did with mindAR, I would like to start over with a simple AR.js example in plain html. So, let’s go back to the documentation of AR.js.
https://ar-js-org.github.io/AR.js-Docs/image-tracking/
- NFT = natural feature tracking; gives the option to use full images instead of (Hiro) markers/barcodes.
-
https://carnaux.github.io/NFT-Marker-Creator/#/
- link to create NFT markers from your own images
- restrictions: image needs to be square
- generates .fset, .fset3, .iset files
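- For reference, this is roughly how those generated files get referenced in the A-frame version of AR.js (a sketch from the docs; the url is the common basename of the .fset/.fset3/.iset files, without extension):
```html
<a-nft type="nft" url="assets/markers/lego" smooth="true">
  <a-gltf-model src="#avatarModel"></a-gltf-model>
</a-nft>
```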
- Basic html example from documentation does not work
- device:error WebXR session support error: Cannot read properties of null (reading 'hasLoaded')
- The CodeSandbox examples also do not work for me.
https://ar-js-org.github.io/AR.js-Docs/marker-based/
The nft image tracking does not seem to work for me, but since I had 1 working example with the hiro marker, I will see if this is still an option in plain html.
-
The marker example does work (https://github.com/AR-js-org/AR.js#-marker-based-example) with the preset hiro marker and my own gltf model.
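The core of that marker example is something like this (a sketch; the model id is my own):
```html
<a-scene embedded arjs>
  <!-- the preset hiro marker -->
  <a-marker preset="hiro">
    <a-gltf-model src="#avatarModel"></a-gltf-model>
  </a-marker>
  <!-- a custom marker would point to a generated .patt file instead: -->
  <!-- <a-marker type="pattern" url="assets/my-marker.patt"> ... </a-marker> -->
  <a-entity camera></a-entity>
</a-scene>
```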
-
Does it work this time with my own marker?
- Let’s try with my own custom markers. I used the AR.js marker generator for this, from the documentation.
- I made multiple markers to test out. These ones did not work:
These ones worked:
It seems the thickness of the border plays the biggest role in whether the marker is recognised or not: a border-to-image ratio of about .5 is needed.
But, the contrast also seems to matter. These markers did not work, despite the .5 border ratio:
NOTE: I did notice a small syntax difference in AR.js vs mindAR: the default rotation settings differ with 90 degrees. I had to rotate my model and text 90 degrees to get the parallel front-view that was default in mindAR.
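For example (a sketch; the exact sign of the rotation may depend on your model):
```html
<!-- mindAR: this gave the parallel front-view by default -->
<a-gltf-model src="#avatarModel" rotation="0 0 0"></a-gltf-model>

<!-- AR.js: the same front-view needed an extra 90 degrees around the x-axis -->
<a-gltf-model src="#avatarModel" rotation="-90 0 0"></a-gltf-model>
```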
As I got the plain html version to work with the marker tracking, I want to try again to get it to work in react (and Next.js).
-
Separate create-react-app test
First I will try to get the AR.js example I made in html into a new, separate create-react-app file and then I will try to implement it in my existing Next.js app.
-
Trying by myself
-
Import npm package
npm install @ar-js-org/ar.js
-
Also need aframe package
npm install aframe
-
Using the html example from before.
-
Error:
- I tried using different versions of the packages, based on the small sample of examples I found, but the problem seems to be with the package itself: the error occurs even before using any of the components in my app. It throws at the import statement itself…
-
Looking for some AR.js examples
- I needed to click through an issue mentioned on the AR.js github Readme and there click through again to find this github with some react examples with AR.js (https://github.com/kalwalt/react-AR-experiments/)
-
- they use an extra package ‘aframe-react’ to wrap the aframe components in.
- https://www.npmjs.com/package/aframe-react
- Trying to use their version of arjs:
-
"arjs": "https://github.com/AR-js-org/AR.js.git#595a5c1c7020d3dd46f7082f63b5b7cca6d376e3"
-
Still an error:
-
-
NOTE: you need to put all your assets in your public folder
-
Going back to the react wrapper I found previously
Since working out my html AR.js example, I noticed that the custom markers need to be constructed in the right way in order for them to work. So, I thought it would be a good idea to go back to one of the first examples I tried with the react wrapper package for AR.js (https://github.com/artcom/react-three-arjs), with a custom marker that I knew worked with my plain html example.
And… It worked!
But now, I need to see if I can still easily add the custom text and models, like from my example.
- Adding gltf models
-
I tried to add gltf models from the aframe package, but since the wrapper is based on three js, I get the following error:
- Since I am now using a react wrapper for ar.js that works with the THREE js version of ar.js, I need to find the alternative in THREE js:
- How can I add a gltf model via three js?
- I looked into loading a model via react-three-fiber, as this library is based on it. I found this link: https://docs.pmnd.rs/react-three-fiber/tutorials/loading-models
-
This seems to work:
```jsx
import { useLoader } from '@react-three/fiber';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader';

const gltf = useLoader(GLTFLoader, "assets/models/Buggy0.gltf");
// ...
<primitive object={gltf.scene} />
```
-
Now I just need to scale and position it a bit
<primitive object={gltf.scene} rotation={[-90, 0, 0]} position={[-0.2, 0, 0.1]} scale={[0.005, 0.005, 0.005]} />
-
The animation is also not there yet
-
https://github.com/pmndrs/react-three-fiber/issues/195
- This is when your animation is built into your model
- Can I make the animation myself in THREE js, like I had before in aframe? (animating the position)
- https://threejs.org/docs/#manual/en/introduction/Animation-system
- It should be possible, but requires some THREE js background
-
- Adding a text element
- 3D text:
-
https://www.tutorialspoint.com/creating-3d-text-using-react-three-fiber
-
We need to extend:
```jsx
import { extend } from "@react-three/fiber";
import { TextGeometry } from "three/examples/jsm/geometries/TextGeometry";
```
-
I need a font
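Put together, it looks roughly like this (a sketch based on that tutorial; the font path is an assumption, three.js ships example typeface.json fonts):
```jsx
import { extend, useLoader } from '@react-three/fiber';
import { TextGeometry } from 'three/examples/jsm/geometries/TextGeometry';
import { FontLoader } from 'three/examples/jsm/loaders/FontLoader';

// extend makes <textGeometry> available as a JSX element
extend({ TextGeometry });

const Message = ({ text }) => {
  // TextGeometry needs a typeface.json font
  const font = useLoader(FontLoader, 'assets/fonts/helvetiker_regular.typeface.json');
  return (
    <mesh>
      <textGeometry args={[text, { font, size: 0.2, height: 0.02 }]} />
      <meshStandardMaterial color="black" />
    </mesh>
  );
};
```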
-
- Customising via query
-
I made sure that this version of the web-app, with marker tracking via AR.js through the react-three-arjs wrapper, is done. So, you can choose a tracking image, your message text and a 3D model.
-
The result can be found here:
-
COACH MEETING
- Don't lose too much time with AR.js anymore.
- Blog is good. In the end, make a summary blogpost with all your conclusions, but you can reference your more detailed blogposts, demos, and more background.
- Don't lose sight of your research question.
- You can mention the change in marker vs nft
- Try to do native option as soon as possible
- Make a planning for the coming days: you need to have enough time for your conclusion and presentation.
- For presentation: go through your process step by step, mention most important findings, conclusions in short. (15min + 5min Q&A)
- showcase video: Apparently there is also a showcase video. This is not a walkthrough, but more of a promo video.
- Most important thing in the end: a nice blog with a summary of my findings, maybe post on medium
- Focus on native vs web.
- Next meeting: Thursday
- Last meeting: very quick on Wednesday to go over presentation.
There are 2 important SDK’s for AR in native context:
- ARKit for iOS
- https://developer.apple.com/augmented-reality/arkit/
- Documentation:
- Problem: I don’t have an iOS device to test with, and a simulator does not have camera access.
- Device limitations:
- ARKit requires iOS 11.0 or later and an iOS device with an A9 or later processor
- you need to configure iOS privacy controls so the user can permit camera access for your app.
- Consent & privacy:
- ARKit automatically asks the user for permission for camera usage the first time your app runs an AR session.
- You need to display a message of why you are using the camera.
- For face tracking specifically: you need to specify what their face data will be used for.
-
- ARCore for Android
-
https://developers.google.com/ar
- Devices:
-
https://developers.google.com/ar/devices
- the device must be running Android 7.0 or newer
- They provide a list with all the supported devices. My own Android is not on the list. They do mention that simulators can run it, only with the rear camera.
- It even depends on where you are from; for example, Android devices in China are different.
- Some services are available for iOS devices (Only Cloud anchors and Augmented Faces)
- The devices need to be ARKit compatible with iOS 11.0 or later
-
I found this library to work with React Native. However, I am using Expo to build my React Native project, and apparently there are some limitations:
Something weird: when looking at articles that reference Viro React, the links do not work anymore. The main website of viromedia is not being used anymore…
START FROM SCRATCH
-
https://arvrjourney.com/augmented-reality-with-react-native-15219f36e3f2
- Need the following:
- Homebrew
- Node
- Watchman
- React Native CLI
- Viro React CLI
- Create Viro project
- react-viro init myFirstARApp → Name of app needs to be alphanumeric apparently
- Issues:
- Viro Media App:
- When trying out one of the built-in projects of the Viro Media app, I get the message that my device isn't compatible, so I will not be able to use it to test the AR. I will need a simulator for Android.
- I don't get an ngrok url, which is needed to test my app.
-
https://github.com/viromedia/viro/issues/432
- They suggest just closing the session and trying again: this does not work
- They suggest checking the ngrok status → it stays on 'reconnecting - x509: certificate signed by unknown authority'
- It changed to 'reconnecting - resolved tunnel.us.ngrok.com has no records'
-
https://github.com/inconshreveable/ngrok/issues/611
- They think it might be linked to region: -region=eu
-
- It seems that there is a firewall blocking ngrok
First I will make sure I can at least test it:
https://reactnative.dev/docs/0.68/environment-setup
-
Install Android Studio
Already a problem: The Android Virtual Device is unavailable
Trying this solution: https://github.com/flutter/flutter/issues/118502#issuecomment-1383215722
- Still unavailable
https://www.reddit.com/r/AndroidStudio/comments/108axki/in_android_studio_setup_wizard_it_says_android/
- Trying to just continue and see if I can install it later
- Seems like the Android Emulator is installed, so let’s see if I can continue
- Install the right SDK Tools and Platforms
- Open your android folder in your React Native folder
- Create a new Android Device
- Pick a phone that should support ARCore
- Try to run the viro app I created on the Digital Android Device
- Immediately get an error:
- trying again: First start the app in my terminal and then run it again in the Android Studio
Something weird I already notice: the documentation page that articles often link to is no longer in use.
-
https://alexandermgabriel.medium.com/developing-with-the-mysterious-viroreact-b977de3b5451
- Someone else described the process of using ViroReact and it does not seem that promising
-
https://github.com/ViroCommunity/viro
- On the Github of Viro, we find some more info. Hopefully, I can easily set up their starting project.
- There is a starter kit project.
- Finally found the new documentation!
- https://viro-community.readme.io/docs/overview
- Quick Start guide for Mac:
I finally found the quickstart guide I was looking for. Let’s try it out!
https://viro-community.readme.io/docs/quick-start-maclinux
- Homebrew, Node, Watchman
- React native cli, react viro cli
- The same step as before, asking to use the Viro Media App, but I don’t have that type of device. So, I want to use my Digital Android Device.
-
I get an error again:
‘Could not compile settings file '/Users/avamc/OneDrive - Hogeschool West-Vlaanderen/personal-passion-project/ViroSample/android/settings.gradle’
https://stackoverflow.com/questions/58293436/could-not-compile-settings-gradle-react-native
- They say it might be because of multiple versions of JDK (Java Development Kit)
- But I checked and there is only 1 version: jdk-13.jdk
-
UPDATE: https://viro-community.readme.io/docs/set-up-android-studio-with-viroreact
According to the documentation, it is not currently possible to use an Android Emulator with Viro React
Let’s see if there is still an issue here.
- It seems to be working. I needed to open Android Studio manually for it to work. It downloaded the Expo Go app on the virtual device. But it is running very slowly. My computer can’t really handle it…
The Digital Android Device route seemed to be too heavy for my computer. Luckily, I was able to borrow a phone from a friend that is on the list of ARCore-supported devices (OnePlus 6T).
But the device has not been used in a while, so I will need to wait until the battery has charged enough.
Let’s try to continue with the Viro React Example now that I have an Android Device that should support ARCore.
- Download the Viro Media App
- Now I do get a ngrok link, but I get an error on my Android device when trying to open it in the Viro Media App
- I am guessing it has to do with the restrictions of eduroam
- I tried with my hotspot, still the same issue
- Error in log references to this issue:
-
https://github.com/facebook/react-native/issues/4968
-
It suggests this:
To resolve try the following:
1. Clear watchman watches: watchman watch-del-all
2. Delete the node_modules folder: rm -rf node_modules && npm install
3. Reset Metro Bundler cache: rm -rf /tmp/metro-bundler-cache-* or npm start -- --reset-cache
4. Remove haste cache: rm -rf /tmp/haste-map-react-native-packager-*
-
couldn’t remove the cache…
-
Looking up the issue specifically with this package: ‘react/jsx-runtime’
- https://stackoverflow.com/questions/70485451/unable-to-resolve-module-react-jsx-runtime
- They suggest updating React version to the latest version
- This seemed to have solved the issue!
-
Now that we finally get a working example with Viro React, let’s try to create our own AR functionalities with it!
https://viro-community.readme.io/docs/tutorial-ar
https://viro-community.readme.io/docs/image-recognition
- According to them, you can use any png or jpg for your image recognition
- https://viro-community.readme.io/docs/viroartrackingtargets
- What I noticed when looking at the App.js file: they are using the class component structure… This is not what I am used to. Will I be able to easily modify it? Maybe it is a better idea to start with a project of my own and then add Viro to it afterwards, so I can use the function component structure.
I found an interesting link: maybe expo is still an option:
https://viro-community.readme.io/docs/integrating-with-expo
-
Copy my react native demo app
-
Install viro package
npm install --save @viro-community/react-viro
→ needed --legacy-peer-deps again
-
Add plugins section to app.json:
"plugins": ["@viro-community/react-viro"]
Specifically for AR on Android:
"plugins": [ [ "@viro-community/react-viro", { "androidXrMode": "AR" } ] ]
-
Configure AndroidManifest.xml
- Pre-build the android folder
- Change the manifest file
-
To test: need Android studio to connect with a hardware device to automatically run my expo app on android
-
Try to run android build → gradle error
- Do I still need some more settings?
- https://github.com/ViroCommunity/viro/blob/main/readmes/INSTALL_ANDROID.md#for-vr
- After checking, everything should be okay
- Looking up the error:
- https://stackoverflow.com/questions/58952564/error-unable-to-determine-the-current-character-it-is-not-a-string-number-ar
- Trying to delete node_modules and npm install again
- It’s taking a while, but it seems to be doing something at least…
- Nope still an error:
- but a different one at least
- I see now that I apparently skipped over an important step on an expo link:
- https://docs.expo.dev/workflow/customizing/
- We need to use development builds
- npm install -g eas-cli
- npx expo install expo-dev-client
- Create and install EAS builds
- eas build
- Need Expo account
- Select a platform: I will choose Android
- I’m waiting in the ‘Free Tier Queue’, which takes longer than the premium one…
- build failed…
- It happened at the install dependencies point
- My guess: the dependencies of the packages → can I do the build with '--legacy-peer-deps'?
- https://github.com/expo/eas-cli/issues/1545
- Same people with this issue. I’m trying this suggestion:
- You can add `.npmrc` with `legacy-peer-deps=true`
- It already brings the build process further than it was before.
- It is taking a really long time
- A really long time….
- Is that a good sign?
- Need to build again in the .apk format!!!!
- So, let’s wait for this again…
- It finally built, and I could finally install it on Android. But, there is an error:
- https://stackoverflow.com/questions/50530889/gradle-sync-failed-cause-compilesdkversion-is-not-specified
I got stuck trying to get ViroReact working in my existing expo project. So, I decided to go back to the ViroSample project and see where I would end up.
The project is written in a class component structure. However, to kind of circumvent this, I made my own AR component, and rendered that one inside their main component.
The image tracking is starting to work. I even noticed that the target images can be plain png or jpg files, which gives the added option of having the user upload their own target image for tracking purposes. No pre-compiling is needed.
GLTF is not possible; converting 3D models to GLB works.
A positive note: when the image is partly out of frame, the things you put on it are still shown; the tracking continues.
Configuring the text was more difficult than it should have been.
I added some customisation for my message in the Viro project. It took a while, but this was partly because of the class component structure I was not used to, the ViroSceneNavigator object which was new to me, and some React Native specifics that differed a bit from Expo.
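A rough sketch of what my AR component ended up looking like (names and asset paths are simplified; treat this as an outline, not the exact code):
```jsx
import React from 'react';
import {
  ViroARScene,
  ViroARImageMarker,
  ViroARTrackingTargets,
  ViroAmbientLight,
  Viro3DObject,
  ViroText,
} from '@viro-community/react-viro';

// a plain png/jpg works as target here, no pre-compiling needed
ViroARTrackingTargets.createTargets({
  legoTarget: {
    source: require('./res/lego.png'),
    orientation: 'Up',
    physicalWidth: 0.1, // real-world width in meters, seems to help the tracking
  },
});

export default ({ message }) => (
  <ViroARScene>
    <ViroAmbientLight color="#ffffff" />
    <ViroARImageMarker target="legoTarget">
      <ViroText text={message} scale={[0.1, 0.1, 0.1]} position={[0, 0.1, 0]} />
      {/* gltf did not work for me, glb does */}
      <Viro3DObject source={require('./res/avatar.glb')} type="GLB" scale={[0.005, 0.005, 0.005]} />
    </ViroARImageMarker>
  </ViroARScene>
);
```
This scene component then gets passed to the scene navigator in their main (class) component.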
Viro-React-Experiment.mp4
When I compare it to my previous webAR results, I do find that the tracking goes more smoothly and I noticed that the AR effects 'stay in place' even when the tracking image is only partly in view, or not in view of the camera at all. If the image is not in view yet, but the AR effects would in reality still be visible, due to their size, you would still see this. This provides a more immersive feeling of the AR.
comparison-arjs-markers.mp4
If you look at the tracking here, it is rather instant as well, but the AR effects disappear once the markers are half out of view.
comparison-mindAR.mp4
Looking back at mindAR, I feel like the tracking is still happening when the image is partly in view, but due to the lagging, it does not look smooth. All effects are also lost when the image is completely out of view.
Today was a frustrating day, to say the least. I want to note the following about my first experience with native AR:
-
Developer experience:
The overall experience as a developer working on a native AR application is a lot more restrictive. One of the biggest things I noticed was the limitation on testing. Native AR is very dependent on your type of device, and sadly the Android device I own, for example, is not fit for ARCore. I tried setting up an Android Emulator, which was in itself a rather complicated task. Furthermore, the Viro React project I wanted to run could not even work with an emulator, as mentioned in their documentation.
The iOS simulator does not allow camera usage, so iOS testing without a physical device is also out of the question for me.
Eventually, I managed to get my hands on an Android Device that does support ARCore.
-
Viro React specifics
-
Once I finally had a supported Android device, I was able to run the starting Viro React project via their Viro Media App. This has a built-in VR and AR project. However, when I wanted to start customising it, the project structure was very overwhelming and it was written in the class component structure of React, which I am not experienced with.
-
I found in the documentation that it should be possible to add Viro React to an existing Expo project. I am still in the process of figuring it out, but in general, it required a lot of extra steps. It also requires you to make development builds, outside of the expo set-up. This takes a long time to just be able to test it out on your device.
-
Finally got some results starting from the Initial project from viro react
What I noticed is that, for the first time, no pre-processing of the image targets is needed. You can just use a png or jpg as image tracker.
-
Tracking continues out of frame!
- This is nice when you are still in close proximity to your image, but can behave weirdly, continuing to show the assets, even though the target image is nowhere to be seen anymore.
-
Category: Development process
- testability → for native: very difficult
- for web: very easy
There is immediately a dependency tree conflict when installing Viro React in a new expo project.
- I will use an older version of react-native, to match the version in viro-react.
- I also needed to put a lower version of react
- Now the viro react package can install without dependency issues
- Now I do get a warning of expo that it might not work anymore, and indeed, the app is not viewable anymore in my Expo Go app.
- So, let’s go back
- Install viro with --legacy-peer-deps
- Now let’s try to go to a development build of expo again, like mentioned in the docs of viro
- via eas
- Download the .apk file on android phone to see if it works.
- Same error as yesterday: ‘couldn’t find DSO to load: libhermes.so’
- Tried adding hermes as engine:
- https://github.com/expo/expo/issues/18275
- https://docs.expo.dev/guides/using-hermes/
- Let’s try ‘eas update’ after adding
- Let’s try building again?
- Still doesn't work, and I am too inexperienced with app development to figure it out
Wrote this out on the Project Overview page.
- Fewer options for the type of 3D model (only .obj & .glb)
I just wanted to see if I could find some explanations of why it might be a better AR performance on native:
https://www.agora.io/en/blog/comparing-web-ar-vs-native-ar/
→ why it is probably smoother: native has access to an 'AR camera' → it handles the augmentation at the Operating System level, while web renders it on top of the OS, so you have some computational lag
→ for apps you can beforehand limit who can install it → if device isn’t compatible, you can make sure they can’t download it, while with web you need to ‘disappoint’ them
→ native is optimised with the OS
basic testing app was +- 200MB to download…
https://www.framepush.com/2021/09/native-ar-versus-web-ar-which-is-for-me/
- native runs on CPU
- has access to all GPU
- specific hardware functionalities from specific platforms
https://www.softwaretestinghelp.com/webar-vs-native-ar/
- Game engines (like Unity and Unreal) play big part in native AR
- Unity: C#
Since I have some time and space left, I want to try out an example in Unity, as most articles about AR development mention that it started with the 3D game engines, like Unity. There seem to be 2 big players: Vuforia and AR Foundation.
-
https://www.codingninjas.com/codestudio/library/arcore-vs-arkit-vs-vuforia-vs-ar-foundation
- Vuforia Engine = Software development Kit for making AR apps.
- Even works on some phones that do not support ARCore or ARKit; when they are not supported, it uses its own platform → would my phone work?
- core capabilities: object and image tracking
- AR Foundation = cross-platform framework that lets you write AR functionality once and then build it for both iOS and Android
- More of an interface to use, no AR functionality within itself
Tutorial on youtube by Playful Technology
https://www.youtube.com/watch?v=gpaq5bAjya8
- Tutorial for ARFoundation
- Self contained
- Not dependent on Vuforia (3rd party) anymore
- using native AR functionality provided by the manufacturer
- The tutorial works with the 2022 version of unity, so I will install 2022 version of Unity
- For Android: include Android build support
- Downloading takes a while and a lot of storage
- Create a new AR core project
- Can’t open it
- Maybe it’s because of the Unity Hub version, there’s a new version available.
- Let’s try again and hope it works…
- Nope…
- People with the same issue:
- https://forum.unity.com/threads/fail-to-open-project-from-unity-hub.812067/
-
I will just try again. I signed into Unity now, maybe that was the problem…
- It’s doing something more already
- It opens!
-
Check and modify settings
- Edit > Project Settings
- XR-Plugin management
- In our case: targeting Android build
-
Apparently I made the new project with the 2020 version, where I could not select Android
- Try to convert to 2022 version… Maybe that was the issue before… Hopefully not…
-
It doesn’t work with converting, stays in 2020 version. So let’s try… again…
-
Let’s try with the 2020 version… And add the Android Build options…
- Let’s see where we get with this version
- Project Settings:
- Android: ArCore
- Initialise on startup
- ARCore specific:
-
Require → when an app relies on AR, set it to Required; if AR is just an optional/additional part → Optional
-
-
Player settings:
- company name
- app name
- icons
- Graphics:
- remove OpenGLES2 → not supported in the future
- Minimum API Level 24
- Needed for ARCore
- Scripting backend: IL2CPP
- 64 bit build:
- ARMv7 and ARM64 (Required for Google Play)
-
Check packages
- Window > Package manager
-
Unity crashed….
-
Trying again
- Update packages to most current version
It seems like it will take longer than I thought, so I will leave it for tomorrow.
Every time I search something about AR, even in native context, the first thing I see is an advertised Link for 8th wall, so I do still want to try it out and see what all the fuss is about…
-
Starting a free trial
- I do need to add my card info…
-
“8th Wall enables developers to create, collaborate and publish WebAR experiences that run directly in a web browser.”
- Javascript + WebGL
- Simultaneous Localization and Mapping (SLAM) engine, hyper-optimized for real-time Web AR on browsers
- World Tracking, Image Targets, and Face Effects
- 8th Wall Cloud Editor
- Built in hosting
- Can be integrated in
- Three js
- Aframe
- PlayCanvas
- Babylon js
- Requirements:
- WebGL (canvas.getContext('webgl') || canvas.getContext('webgl2'))
- getUserMedia (navigator.mediaDevices.getUserMedia)
- deviceorientation (window.DeviceOrientationEvent - only needed if SLAM is enabled)
- Web-Assembly/WASM (window.WebAssembly)
- https → for camera access
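Those checks can be combined into a quick capability test (my own sketch of the requirements above, not 8th Wall code):
```js
// rough feature detection for the 8th Wall requirements listed above
const canvas = document.createElement('canvas');
const meetsRequirements =
  !!(canvas.getContext('webgl') || canvas.getContext('webgl2')) &&
  !!(navigator.mediaDevices && navigator.mediaDevices.getUserMedia) &&
  !!window.DeviceOrientationEvent && // only needed if SLAM is enabled
  !!window.WebAssembly &&
  window.location.protocol === 'https:';
console.log(meetsRequirements ? 'should be able to run 8th Wall' : 'not supported');
```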
<img width="1071" alt="image" src="https://user-images.githubusercontent.com/91590248/214915602-eeac3a87-2802-4cca-a0db-99110f8c9852.png">
https://www.8thwall.com/docs/web/#quick-start-guide
- Creating a work space
- Activate public profile
- start a new project
- Unlimited Demo projects possible
- commercial: need commercial license
- I will try the image target museum 8th wall template
- clone the project
- Runs in a built in editor on 8th wall site (Cloud Editor)
- I have the feeling there is not enough storage on my laptop to run the example…
-
Can I incorporate 8th wall in an existing project?
- There is a React option: https://www.8thwall.com/8thwall/react-app
- But I feel like everything is in their own contained platform
-
Github page with examples:
-
Documentation:
- very extensive
- a lot of templates and examples
- Own Cloud Editor
- Their own youtube video tutorials
- Very clear starting guide
Idea → Add devices range category to my current tables
-
Already in the setup:
- Light
- AR origin
- To map real world objects and virtual objects in the scene together
- other scripts:
- Plane manager, anchor manager, Raycast manager, anchor creator
- plane manager: detects horizontal or vertical surfaces → for markerless AR
- Raycast: determine intersection of those planes at certain distance
- Anchor: physical point in space tracked by app
- AR session
- script: manages overall lifecycle of AR application
- Needs to be attached to every object in your AR app
- script: manages overall lifecycle of AR application
-
For our app, we don’t need the plane, raycast, anchor, so we remove them. Only AR Session script.
-
Add new component:
- AR Tracked image manager:
-
Needs library of images to track
-
Create in assets
-
Add images to it (just jpg, png…)
-
You can specify the size to look for in the real world, which will make detection a bit better
-
Drag image library to AR Image Tracker Manager
-
Can add multiple images
-
- AR Tracked image manager:
-
Add script to add something when image is tracked
- Add component (in AR Session Origin)
- New script: PlaceTrackeImages
- Open in Visual Code
- Use AR Foundation engine and ARSubsystems engine
- Global variables
- reference to AR tracked image manager
- List of Gameobjects
- Game object: bundled assets: model + maybe scripts that affect its behaviour
- can be 2D texture to a quad, can be animation, 3D model, animated model
- each element in array corresponds to one of the images in the reference library that is being tracked
- give them the same name as the image being tracked
- Dictionary:
- keyed array
- of all of the prefabs created
- Functions
- Awake:
- happens once when code first starts running
- Difference with start?
- https://docs.unity3d.com/ScriptReference/MonoBehaviour.Start.html
- Like the Awake function, Start is called exactly once in the lifetime of the script. However, Awake is called when the script object is initialised, regardless of whether or not the script is enabled
- OnEnable
- event listener for tracked image change event (on tracked image manager)
- when new image from references is detected in the scene
- or old one has left the scene
- or moved
- when event is enabled, we will attach our function that handles the change event (OnTrackedImageChanged)
- OnDisable
- remove the event handler again
- OnTrackedImageChanged
- the event handler itself
- When new image is detected:
- loop through ‘added’ array from event arguments
- look for corresponding Game object and if it has not been created (instantiated) yet, attach it to the tracked image and add it to our array of instantiated objects
- When image is updated
- set the prefab to the right tracking state
- When item is removed, not able to be tracked anymore → left the scene completely
- destroy the prefab and remove it from our array
-
Adding prefabs
- 1 per tracked image
- Assets > create > Prefab
- Double click to edit
- Add all kinds of content you want
- 3D model:
- 3D object
- set the size it should have in the real world
- 3D object
- drag to array of prefabs
-
Deploy to phone and test it
- File > Build settings
- Android
- Add our scene
- Switch platform
- If connected with USB: build and run
-
Error while building…
- A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade See the Console for details.
- https://forum.unity.com/threads/a-failure-occurred-while-executing-com-android-build-gradle-internal-tasks-workers-actionfacade.958112/
- Let’s start with trying turning it on and off again
https://stackoverflow.com/questions/69776130/how-to-fix-gradle-build-failed-on-unity
Could be that I do not have ‘gradle’ installed on my computer, whatever it is…
I’m wondering if that could have been the problem before with Viro React as well?
References from Playful Technology:
-
https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@5.0/manual/index.html
- About ARFoundation
-
https://gist.github.com/alastaira/92d790ed09330ea7a45e7c3a2a4d26e1
- github with the code of his project
- https://sourceforge.net/software/product/AR-Foundation/
- https://developers.google.com/ar/develop/unity/android-11-build
Trying again with another version of unity
While I am having some troubles with building for Unity and waiting for another version to download, I will look back at 8th wall.
I noticed that chrome wouldn’t load the template in 8thWall’s Cloud Editor, so I tried it with Firefox and this works. I’ve had this issue before with in-browser web-editor elements in chrome not loading correctly (for example with expo). I’m not sure why.
Looking into some of the templates of 8th wall, you can choose A-frame or three.js, and some others as well.
Since I have made some experiments with the A-frame syntax and like the clarity of it, I will stick to this version. But it is nice to know I have some options.
There is a template specifically for React, but I want to have a bit more control. I would like to work on my own project, and have it on my github, instead of working inside their editor.
On their github, they try to explain how:
https://github.com/8thwall/web
Another article (about babylon.js, but I’m guessing the principles will be the same)
https://medium.com/8th-wall/babylon-js-8th-wall-integration-the-full-tutorial-7ed6a56fa168
It might be possible to add your own packages in the Cloud Editor:
https://www.8thwall.com/blog/post/89540744369/introducing-8th-wall-modules
https://forum.unity.com/threads/could-not-find-upm-executable-at-path.974862/
- Trying a last time
- At least the new version opens again
- Now let’s see if it will finally build…
- Still same error…
- A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade
Might be a problem with the debug keystore
It finally worked when creating my own keystore…
FINALLY
Conclusions:
- Tracking is rather good, no lagging. But, when image is done tracking, the 3D elements disappear as well.
- Let’s see if this is because of my script?
- I tried 3 versions that could influence this, according to me:
  1. Inside the 'removed' tracked images event: destroy the prefab
  2. Do nothing
  3. Set prefab to inactive
- All options had the same result: the 3D elements disappear completely when the image is not being tracked anymore.
- Option to specify real world dimensions
Viro also mentions real-world dimensions → might have made for better tracking
unity-ar-experiment.mp4
Today, I want to try the following things with 8th wall:
- Basic image tracking set-up
- Use the React template in their Cloud Editor
- Use 8th wall in my own local example
- Basic image tracking with 8th wall
- There are a lot of templates ready to use
- The Cloud Editor makes the project a self-contained whole.
- Updating a template:
- just change body.html for the main content
- Starting tutorial from the documentation:
- https://www.youtube.com/watch?time_continue=3&v=-iAhNh_qD9I&embeds_euri=https%3A%2F%2Fwww.8thwall.com%2F&feature=emb_logo
- Logs from testing on mobile device are visible on console of Editor
- Can preview 3D models in the editor and see scale
- head.html, app.js, body.html → loaded in this order
- hot reload connected devices
- built-in 'git' system → land changes, see what has changed and add a comment
- When published: QR code → points to short link that stays the same, but the URL they redirect to can be modified at any point, so you can already share the QR code and make changes later
- still access to landed code after free trial ended
- slack community
Let’s try out a simple image tracking example
-
Started from the ‘Endless image Targets template’ in A-frame
-
Basic structure
- head.html → add necessary packages
```html
<meta name="8thwall:renderer" content="aframe:1.1.0">
<meta name="8thwall:package" content="@8thwall.xrextras">
<meta name="8thwall:package" content="@8thwall.landing-page">
```
- app.js
→ register A-frame components
- body.html
→ actual content
-
there’s an image target section, where you can add your own targets
- Adding
- Easy viewing for testing
- Options for adding cans or cones as objects to track
- Gives tips about which image targets work best
- Can’t really choose the dimension ratio
- You can test them on the spot to make changes if needed
8thwall-target-testing.mp4
There's an option to 'auto' use targets: then you don't need to write anything in your code. You can just use the name. If you need more than 5 targets at once: you will need to add them explicitly in your code. (https://www.8thwall.com/docs/web/#changing-active-image-targets):
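The explicit variant looks roughly like this (a sketch going from that docs page; the target name is my own):
```html
<!-- body.html: attach content to an uploaded image target by its name -->
<a-entity xrextras-named-image-target="name: question-mark">
  <a-gltf-model src="#avatarModel"></a-gltf-model>
</a-entity>
```
Choosing which targets are active then happens via XR8.XrController.configure({imageTargets: [...]}) in app.js, according to that same page.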
-
Let’s try a more basic example of image-tracking
-
Adding gltf models:
- Easy preview!
-
creates a bundle out of the model's assets that belong together
-
-
Test: When trying out the same tracking images I used with mindAR, I notice a difference between different Android phones. Surprisingly, the issues were with the newer phone.
- The mindAR example is also more difficult on the newer Android; however, it still tracks in some instances, while the 8th wall example doesn't for some reason. Seems to me like a bit of an autofocus problem.
-
Older Android:
- can find question mark target in the mindAR example: mindAR-can-find-target-phone-2.mp4
- can find question mark target in 8th wall example: 8thwall-can-find-target-phone-2.mp4
-
Newer Android:
- can find question mark target in the mindAR example, but rather sketchy: mindAR-sporadically-find-target.mp4
- can't find question mark target in 8th wall example: 8thwall-cant-find-target.mp4
-
One annoying thing: there are a lot of arbitrary ESLint rules, so if you copy from another JS file, you will get a lot of warnings (e.g. no semicolons).
-
All about the image targets:
-
Had some trouble with the customisation via query string:
- Finally found this example: https://www.8thwall.com/playground/url-params
- comes down to creating the elements, instead of accessing them afterwards…
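In short, something like this (a minimal sketch, assuming the target entity from before and a ?message=… parameter):
```js
// build the text entity from the query string instead of looking it up afterwards
const params = new URLSearchParams(window.location.search);
const message = params.get('message') ?? 'Choose your message';

const textEntity = document.createElement('a-entity');
textEntity.setAttribute('text', `value: ${message}; align: center; width: 1`);
textEntity.setAttribute('position', '0 0.4 0');
document.getElementById('target').appendChild(textEntity);
```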
https://avamc.8thwall.app/image-tracking-basics/?message=this+will+be+your+message&model=3&image=0
Tracking image to test:
8thwall-full-demo.mp4
- Image tracking
- Image tracking itself is rather accurate
- There is a very easy image target adding UI in their cloud editor
- jpg and png uploads
- 8th wall processes them for you
- Some limits to amount of tracking images
- The ratio is fixed, so it will take part of an image; you can only choose portrait or landscape mode.
- The images themselves need to be of sufficient quality. For example, compared to mindAR, the question mark image worked less well. On another phone, it did not track at all.
- Tracking stops when the image is out of frame and the objects disappear. No immersive effect like with ViroReact.
- Prerequisite knowledge:
- Very simple templates can be used without too much pre-required knowledge
- You need some basic a-frame and/or three.js or babylon.js knowledge to really start creating your own thing
devices: does not seem to work on desktop
8th wall is a very self-contained framework
→ can be nice, but sometimes it took a while if you wanted more custom functionalities
- For very basic use, you need only very limited a-frame knowledge; for more custom features, you need to rely on more knowledge.
- Very extensive documentation; however, if you are stuck with a specific problem, you won't find much on the known public platforms, such as Stack Overflow…
- hosting via their own platform
Documentation
- Extensive documentation page
- youtube channel with a lot of tutorials
- A lot of starting templates to show the different possibilities
- Cloud Editor
Ease of use
- tutorials
- examples
- cloud editor
- Easy interface and structure, very customer minded
Price:
- Starts at 12$ a month
8th wall pros:
- Very extensive documentation and very customer focused framework
- pro/con: self-contained platform
- Cloud Editor has a lot of simplifying features, especially for adding image targets or previewing 3D models, with scale reference!
8th wall cons:
- Very dependent on their system
- Price
PROS:
- Best image tracking, no lagging
- Immersive feeling: digital elements stay in environment, even when tracking image out of view.
- image targets can be regular jpg/pngs
- Free preview/testing app
- Easy starting project
CONS
- Very limited devices
- Hard to integrate in your own project
- Build problems
- No auto-linking for React Native
- Difficult testing process as developer
- Some outdated documentation links, contradictory info in referencing articles
PROS:
- simplest structure in plain html
- easy and clear assisting scanning UI (preview image to track,…)
- free
- Almost no pre-required knowledge needed
CONS:
- precompiling of image targets
- most lagging, least performant image tracking
- Hard to integrate in other frameworks, such as react
- Limited documentation, very few examples
- Small community
- Very bad google-ability of problems
PROS
- free
- only one I successfully integrated in React project
- large open-source community
CONS:
- most limited image tracking
- tracking already stops when the image is half out of view
- only markers worked for me → very strict image format
- preprocessing of images needed
- Bad documentation, non-working example demos
- Confusing
- very limited
PROS
- Easy tutorials
- Large community
- Owned by Unity itself
- Google-ability of problems
CONS:
- limited devices
- C#
- Some build problems with Unity itself
PROS
- Self-contained customer-aimed platform
- Very extensive documentation
- Lots of tutorials
- templates
- Very limited lagging
- Easy to use editor
- Image target processor
- 3D model previewer
CONS
- Self-contained platform:
- more steps required to add to own set-up
- For more customisation, you need to understand more about their underlying structure and A-frame
- Price
- Target images more restricted than mindAR
I believe the easiest way will be to use an iframe to include another 8th wall based page in my existing project.
https://www.8thwall.com/8thwall/inline-ar
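Presumably that comes down to something like this (a sketch; the allow attributes are my assumption of what the camera access needs):
```html
<!-- embed the published 8th Wall experience inside the existing project -->
<iframe
  src="https://avamc.8thwall.app/image-tracking-basics/?message=hello"
  allow="camera; gyroscope; accelerometer; magnetometer; xr-spatial-tracking"
  style="border: 0; width: 100%; height: 100vh;">
</iframe>
```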
Local projects:
https://github.com/8thwall/web/tree/master/serve
Disappointment: https://www.8thwall.com/docs/web/#start-a-new-project
- Select Hosting Type (Pro/Enterprise plans only): Decide up front if the project will be hosted by 8th Wall and developed using the 8th Wall Cloud Editor, or if you'll be self-hosting. This setting cannot be changed later. Self-hosting is only available to paid Pro/Enterprise workspaces. Self-hosting is not available to workspaces on Starter or Plus plans, or workspaces on the Pro plan during the free trial period.
I can’t use self-hosting to try out my project locally.
Today I want to close off the research part and write the final article that I could possibly put on Medium.
Things I will try to do today:
- Write out the article
- Add links
- Write out some explanation on my github README to make the examples clear
- Add demo links with target image
- Add videos in structured way to blog
Planning for today:
- Finish Medium Article
- Prepare presentation
- Finish up deliverables
- Print out example cards
- Coach meeting
- Creating showcase video
Coach meeting
- Don’t undersell yourself in your Medium article.
- Good structure of the article, good to add some humour sometimes.
- Maybe add link to Devine site or your own portfolio.
- Presentation is well-structured and clear. Maybe leave out live demos. Add them as a link in the end during Q&A.
- Project has ended as a clear whole.
Video is finished:
avaMirzaeeCheshmeh_personalpassionproject_showcasevideo.mp4
The final day. Today I will just finalise my files to hand in and prepare my presentation for tomorrow.