Daily Log

Day 1 - 9/1/23

OVERVIEW

Native:

  • React Native
    • ARKit & ARCore
    • ViroReact
      • based on 2 APIs (ARKit for iOS and ARCore for Android)
      • open source
  • Swift (strictly iOS)
    • ARKit
  • Unity + Vuforia

Web AR

  • PWA?
    • 8thWall
    • A-frame
    • AR js
    • EasyAR (OpenCV)

Research

  • Native vs Web
    • camera access
    • notifications
    • download threshold
    • memory usage
    • offline use (local storage/ memory)
  • differences per AR framework
    • image tracking
    • marker issues
    • accuracy
    • memory usage

Planning:

Week 1: Exploring Native app development (React Native, Swift) + PWA development

Week 2: Start exploring (free) AR frameworks for first demos

Week 3: Exploring paid AR frameworks (with free trial periods of ±14 days?)

Week 4: Summary, final demos, overview of findings

Questions:

  • What is expected? How much?
  • Meeting once a week?
  • Deliverables: small demos + ‘tech summary’
  • Personal issues
  1. Learning React Native

COACH MEETING - 9/1/23 13u

  • It might be useful to first make a list of which features or properties you want to compare in this project. That way you have a clear view of where you want to go with this project.
    • You can look for inspiration in other comparative studies.
    • e.g. dev.to, Medium
  • React or vanilla JS can be okay for what you want to do. Start with frameworks you are familiar with first.
  • Will there be any backend? Maybe just work with a querystring, so that you can have some customisation in your link without needing a real backend (e.g. name, message…); see the sketch after this list.
  • Not everything you want to compare needs to be in your end product (e.g. notifications…)
  • You could look into:
    • how long it took you to set up
    • how is the documentation
    • how big/helpful is the community
    • (actual features can easily be compared via documentation)
  • You can turn Notion into a blog (feather.so, simple.ink)
  • It seems like a good idea to keep a daily blog (even just for yourself) and maybe make weekly summaries of what you learned and your findings. The final result will probably be a fully written article, you could post this on medium or dev.to.
  • Next meeting: Friday 15h30
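
A minimal sketch of that querystring idea (plain JS; the parameter names are just examples for this project):

```js
// Sender: build a shareable link that carries the card data, no backend needed.
const params = new URLSearchParams({ name: 'Ava', message: 'Happy birthday!' });
const shareUrl = `https://my-demo.example/receiver?${params.toString()}`;

// Receiver: read the same data back out of the current URL.
const query = new URLSearchParams(window.location.search);
console.log(query.get('name'), query.get('message'));
```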

LOOKING INTO SOFTWARE COMPARISON ARTICLES

https://dev.to/software-comparisons

  • First impression: comparing 2 things at a time
  • Idea:
    • I will probably have different comparison classes. → web vs native AR, comparisons within web AR, comparisons within native AR, general (pricing, documentation, level of difficulty …)
    • Need to take into account my own background: I start with knowledge of React, no knowledge of native, ….

Day 2 - 10/1/23

EXPLORING SOFTWARE COMPARISON ARTICLES

https://dev.to/software-comparisons


Examples:

  1. web3.js vs ethers.js: a Comparison of Web3 Libraries

    This article compares 2 JS libraries with similar functionalities.

    • Quantitative comparison:
      • release date, github stats (stars, contributors), bundle size
    • API differences (methods, separation of roles)
    • comparing actual functions that should deliver the same result (amount of code?) (side-by-side examples)
    • support with other (open source) libraries/frameworks
      • idea: maybe look how easily integrated in React? React wrappers?
  2. Python GUI: PyQt vs Tkinter

    Article about comparing Python GUIs.

    • Advantages vs differences
      • Learning resources
      • code flexibility
      • Ease to learn vs ease to master (learning curve)
      • dependencies
  3. React vs Vue vs Angular vs Svelte

    Article comparing JS frontend frameworks.

    • popularity
      • google trends, NPM trends, and the Stackoverflow 2020 survey results
      • ≠ a larger community!
    • community/resources
      • spectrum chat
      • gitter chat
      • discord
      • stackoverflow
      • tutorials (paid/free), recentness of the tutorials
    • performance (how do you perform these tests?)
      • speed test
        • use a set task and compare the speed to execute
        • table with actions + speed
        • slowdown geometric mean?
      • startup test
      • memory test
    • learning curve
      • The way the author handled this factor seemed a bit subjective to me. The author kept estimating ‘probably a day to learn’, without any real ‘evidence’.
    • real-world examples
      • companies that use the framework
    • open-source?
    • release date
    • who it’s developed by
  4. JSX.Element vs ReactElement vs ReactNode

Ideas for comparison classes:

GENERAL

  • How recently/frequently updated?
  • Github stats (how big is the community?)
  • Bundle size of packages
  • Side-by-side comparison of similar functions/features
    • ease of implementation
    • clarity of code
    • length of code
    • syntax?
  • Support/integration in other frameworks (mostly react/vanilla js?)
  • Keeping track of advantages/disadvantages I encounter.
  • Differences between the frameworks (without making a judgement about them already, just factual statements of differences)
  • Availability/clarity of Learning resources/documentation
  • Learning curve: Ease/Difficulty to learn
    • Learning dependencies
  • Real world examples
  • github
  • open-source?
  • popularity
    • google trends, NPM trends, and the Stackoverflow 2020 survey results
    • ≠ a larger community!
  • community/resources
    • spectrum chat
    • gitter chat
    • discord
    • stackoverflow
    • tutorials (paid/free), recentness of the tutorials
  • performance
    • speed
    • startup
    • memory
  • Debugging?

(my own thoughts:)

  • Price
  • Device range
  • Time I spent on learning it?
  • Pre-required knowledge → can it be implemented in any/many frameworks?
  • personal opinion of preference?
  • download necessities?

SPECIFIC FOR AR

  • Accuracy of image tracking
  • Limitations/conditions on how the tracking image should look
  • Device range
  • How far can you go?
  • Possibilities/qualities of animations?
  • Internet speed dependency
  • Possibility/memory usage of local storage/offline use
  • what happens in darker rooms?
  • link to camera quality?
  • Permissions camera access

CHARACTERISTICS

  • name
  • summary of goal/purpose
  • price
  • use cases?
  • community size
  • amount of libraries/tools on top of this framework

Note:

  • Mention (my own) prerequisites/previous knowledge to have clear view for learning curve/ difficulty to implement in known frameworks…
  • Sometimes it comes down to opinion, you could compare 2 things and note differences, without a ‘clear’ objective view of what is ‘better’.

EXPLORING AR

  1. XR, AR, VR, MR – what’s the difference?

    • XR
      • = extended reality, refers to all combined real and virtual environments and man-machine interactions

      • umbrella term for AR, VR, …

        • AR = augmented reality
          • virtual information and objects are overlaid on the real world
          • “This experience enriches the real world with digital details such as images, text and animations, which are accessed through AR glasses or via screens, tablets and smartphones. Users are not isolated from the real world, but can interact and see what is happening in front of them.”
          • example: pokémon go, face filters (snapchat, instagram)
        • VR = Virtual reality
        • MR = Mixed reality
      • Examples:

        • Ikea AR functionality
        • Rolex AR app

        → both are representative forms of XR

  2. What is augmented reality (AR)?

    • integration of digital information with the user's environment in real time
    • information overlaid on top of real-world environment
    • via smartphone or glasses
    • term coined by Thomas Caudell in 1990
    • Requires hardware components:
      • processor
      • sensors:
        • camera, GPS (for user location), accelerometers, solid-state compasses (device orientation)
      • display
      • input device
    • Can require a lot of processing power depending on how computationally intensive the program is (data processing can be offloaded to a different machine)
    • tie data to augmented reality markers in the real world
    • “When a computing device's AR app or browser plugin receives digital information from a known marker, it begins to execute the marker's code and layer the correct image or images.”
    • AR vs VR
      • “The biggest difference between AR and VR is that augmented reality uses the existing real-world environment and puts virtual information on top of it, whereas VR completely immerses users in a virtually rendered environment. While VR puts the user in a new, simulated environment, AR places the user in a sort of mixed reality.”
    • Examples:
      • Retail apps: to show items in user’s environment
        • Target app
      • Tools and Measurement apps: use AR to measure different 3D points in the user's environment
        • Apple Measure app
      • Entertainment and games
        • Snapchat face filters
        • Pokémon go
      • Military
      • Architecture
      • Navigation
      • Archaeology
      • logistics training
        • Google glasses (glasses device for AR)
    • ARKit: Apple’s mobile AR development tool set (iOS)
      • Improved Depth API
      • e.g. Target, Ikea
    • ARCore: Android equivalent
      • uses geospatial API with data from Google Earth 3D models, Street View image data from Google Maps
      • improved Depth API
  3. What is augmented reality or AR?

    • AR = enhanced, interactive version of a real-world environment achieved through digital visual elements, sounds, and other sensory stimuli via holographic technology

    • 3 features:

      • combination of digital and physical worlds
      • real-time interactions
      • accurate 3D identification of virtual and real objects
    • Types of virtual realities

      • AR = Augmented reality
        • overlay real-world views with digital elements
        • limited interaction
      • VR = Virtual reality
        • immersive experiences, isolating user from real world
        • via headset device
      • MR = Mixed reality
        • combining AR and VR elements
        • digital objects can interact with the real world
      • XR = Extended reality
        • covers all types of technologies that enhance our senses
        • includes AR, VR, MR
    • Types of AR:

      Determines how you can display images and information.

      • marker-based
        • uses image recognition to identify objects already programmed into your AR device or application
        • Placing objects in view as points of reference helps AR device determine position and orientation of the camera.
          • This is done by switching the camera to greyscale and detecting a marker. This marker is then compared with all the other markers in its information bank. Once the device finds a match, it uses that data to mathematically determine the pose and can then place the AR image in the right spot.
      • markerless
        • more complex
        • no point on which your device will focus
        • So, the device must recognize items as they appear in view.
        • Via a recognition algorithm, the device looks for colors, patterns and features to determine the object, and will orient itself via time, accelerometer, GPS and compass information. It then uses the camera to overlay an image within the real-world surroundings.
    • How does AR work?

      • AI: most AR solutions need AI to work
      • AR software: tools and apps used to access AR
      • Processing: usually using the device’s internal operating system
      • Lenses: you need a lens or image platform to view content or images
      • Sensors: an AR system needs data about the environment to align the real and digital worlds. A camera captures info and sends it through software for processing.

CONSIDERING AR FRAMEWORKS

From my research before the winter break, I already had some AR frameworks I wanted to look into.

NATIVE:

  • ViroReact → for React native (based on ARKit and ARCore)
  • ARKit (iOS)
  • ARCore (Android)
  • Vuforia (Unity) (iOS + Android)

WEB

  • A-frame
  • 8th Wall
  • AR.js
  • Zappar

But I also asked some people at In The Pocket, where I plan to do my internship next semester, for recommendations of AR-related frameworks that they use.

I got the following recommendations:

  • Babylon.js (for web based solutions)
  • WebXR (https://github.com/immersive-web/webxr)
  • Vulkan
  • Unity + C#
  • 8th wall (web), with a warning for pricing for commercial use
  • AR.js (open source) (web)
  • Blippar (web)
  • Zappar (web)
  • Onirix (web)
  • AR Foundation (native: iOS + android via Unity)
  • ARKit (native iOS)
  • ARCore (native Android)

FINAL SELECTION

Web

  • 8th wall
  • AR js
  • Babylon.js + WebXR
  • (Zappar)

Native

  • ARCore (React Native)
  • ARKit (Swift + React Native)
  • Unity

DEMO GOAL

Receiver:

  • scan QR code (with info of message content)
  • Use physical card as image marker to show message
  • Maybe have animated figure?

Sender:

  • Create secret message
    • Add text/name
    • add image (maybe animated)
    • maybe choose from different possible marker images??
  • Create QR code to send to someone

Some preliminary learning

Life long learning: REACT NATIVE

Since I will be looking into some native AR frameworks as well (and leading up to my internship), I wanted to start learning React Native. I felt like this would be a good starting point to get into my first native coding project, since I have some experience with React already.

I started exploring React Native (via Expo) yesterday, so I will start by continuing this first tutorial.

https://docs.expo.dev/tutorial/introduction/

  • add the right assets

  • Installing dependencies (npx expo install react-dom react-native-web @expo/webpack-config)

  • to run development mode: npx expo start

    • open app on phone via expo go app (scan QR code)
    • open on web
    • open on iOS simulator (XCode)

    image

BABYLON JS

(SWIFT)

(UNITY)

Day 3 - 11/1/23

Some preliminary learning

REACT NATIVE

https://docs.expo.dev/tutorial/build-a-screen/

Problem while working: create-expo-app creates a git repo in a new folder. But since I already had a git repo to keep track of all the code for my passion project, I needed to add this subfolder as a submodule to my git repo.

  • styling: in JS

    image

  • add an image: use ‘require’ to add a static image from assets (see the sketch after this list)

    image

  • divide components into files

    • components folder
  • Pressable component

    • touch event on phone
  • different styling for different usage of same component

    • theme prop
    • icons from expo: @expo/vector-icons
    • use in-line styling → last defined styles are used
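
A small sketch of those pieces together (the asset path and prop names are my own placeholders):

```jsx
import { Image, Pressable, Text, StyleSheet } from 'react-native';

// Static assets are bundled via require(); the path must be a string literal.
const placeholder = require('./assets/images/background-image.png');

export default function Card({ label, theme, onPress }) {
  return (
    <Pressable onPress={onPress}>
      <Image source={placeholder} style={styles.image} />
      {/* the inline style comes last, so it overrides the base label style */}
      <Text style={[styles.label, theme === 'dark' && { color: '#fff' }]}>{label}</Text>
    </Pressable>
  );
}

const styles = StyleSheet.create({
  image: { width: 320, height: 440, borderRadius: 18 },
  label: { color: '#25292e', fontSize: 16 },
});
```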

Adding functionalities

  • pick picture from device
    • Expo SDK library expo-image-picker
  • use the picked image
    • uri (Uniform Resource Identifier) of the image
    • use a state variable (see the sketch below)
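
Roughly how the picker fits together (a sketch; the result shape differs slightly between Expo SDK versions):

```jsx
import { useState } from 'react';
import { Button, Image } from 'react-native';
import * as ImagePicker from 'expo-image-picker';

export default function Picker() {
  const [imageUri, setImageUri] = useState(null);

  const pickImageAsync = async () => {
    const result = await ImagePicker.launchImageLibraryAsync({
      allowsEditing: true,
      quality: 1,
    });
    // Newer SDKs return `canceled` + `assets`; older ones `cancelled` + `uri`.
    if (!result.canceled) setImageUri(result.assets[0].uri);
  };

  return (
    <>
      <Button title="Choose a photo" onPress={pickImageAsync} />
      {imageUri && (
        <Image source={{ uri: imageUri }} style={{ width: 200, height: 200 }} />
      )}
    </>
  );
}
```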

Creating a modal

  • presents content above the rest of your app
  • pass transparent as a boolean JSX prop (transparent, not transparent='true')!!

Adding gestures

Take Screenshots

  • react-native-view-shot and expo-media-library libraries
  • user permissions
    • usePermissions hook → might be important for me with camera access!
      • permission status
      • requestPermission method
      • on first load: permission is null → trigger requestPermission if it is null
  • import { captureRef } from 'react-native-view-shot';
    • takes screenshot of a View and returns uri of picture
    • put reference on the view you want to capture
    • returns a promise with the uri (see the sketch below)
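
A condensed sketch of the capture flow (the size options are arbitrary):

```jsx
import { useRef } from 'react';
import { View, Button } from 'react-native';
import { captureRef } from 'react-native-view-shot';
import * as MediaLibrary from 'expo-media-library';

export default function Capture({ children }) {
  const viewRef = useRef(null);

  const onSaveAsync = async () => {
    // Screenshots the referenced View and resolves with a file uri.
    const uri = await captureRef(viewRef, { height: 440, quality: 1 });
    // Needs the media-library permission from the usePermissions hook.
    await MediaLibrary.saveToLibraryAsync(uri);
  };

  return (
    <View>
      <View ref={viewRef} collapsable={false}>{children}</View>
      <Button title="Save" onPress={onSaveAsync} />
    </View>
  );
}
```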

Handle platform differences

  • browser can’t take screenshot via react-native-view-shot library
  • make exceptions to get same functionality on all platforms
  • for web: dom-to-image library
  • The Platform module of React Native gives access to info about the platform on which the app is running
    • use Platform.OS to check (sketch below)
  • Problem: I needed to install some packages to use the web option:
    • npx expo install react-native-web@~0.18.9 react-dom@18.1.0 @expo/webpack-config@^0.17.2
  • Problem: seems like there are some problems with Modal on web
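
A sketch of the branching (assuming the same viewRef as above; dom-to-image's toJpeg resolves with a data URL):

```jsx
import { Platform } from 'react-native';
import { captureRef } from 'react-native-view-shot';
import * as MediaLibrary from 'expo-media-library';
import domtoimage from 'dom-to-image';

async function saveScreenshotAsync(viewRef) {
  if (Platform.OS !== 'web') {
    const uri = await captureRef(viewRef, { quality: 1 });
    await MediaLibrary.saveToLibraryAsync(uri);
  } else {
    // react-native-view-shot doesn't work in the browser, so fall back
    // to rendering the DOM node to an image and downloading it.
    const dataUrl = await domtoimage.toJpeg(viewRef.current, { quality: 0.95 });
    const link = document.createElement('a');
    link.download = 'capture.jpeg';
    link.href = dataUrl;
    link.click();
  }
}
```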

status bar, splash screen, app icon

  • status bar
    • expo-status-bar library
    • component
    • change style of StatusBar component (light)
  • splash screen
    • loading screen

    • app.json file with path defined in splash.image property

    • white bar on android → set background color for splash screen

      • change this in app.json file

      image

    • prevent splash screen from disappearing very quickly → manually set this via the expo-splash-screen library (only use this for testing!)

      ```js
      import * as SplashScreen from 'expo-splash-screen';

      SplashScreen.preventAutoHideAsync();
      setTimeout(SplashScreen.hideAsync, 5000);
      ```

  • App icon
    • same as splash image → path to icon.png in app.json (icon property)

Extra documentation: https://docs.expo.dev/tutorial/follow-up/

Day 4 - 12/1/23

Plan for today: set up the ‘skeleton’ for my basic demo in React Native.

Reminder of functionalities:

There are 2 parts of the app:

  • Sender
    • Choose tracking image
    • Create visuals on top of tracking image
      • name
      • message
      • (animated) figures
    • test out the design (think about session storage; how to not lose what they are working on when the page is refreshed or something)
    • Create a QR code with the used data to send to someone
  • Receiver
    • scan QR code with data
    • use camera

Random thought: since a web option is available with React Native, it might also work to look at web AR in React Native for the web app version?

https://necolas.github.io/react-native-web/

Wireframes

I wanted to start making the basic version (without the AR logic yet) of my demo app. So, to approach this in a more organised way, I started by making some basic wireframes.

image

Trying to make the basic demo app in React Native

  • background image?
  • Navigation between screens
    • https://reactnative.dev/docs/navigation
    • via library
      • npm install @react-navigation/native @react-navigation/native-stack
    • https://reactnavigation.org/docs/getting-started/
      • install dependencies:
        • npx expo install react-native-screens react-native-safe-area-context
      • wrap in navigator container
        • don’t nest navigator containers, just use 1 at the root of your app
      • React Native doesn't have a built-in global history stack (↔ web URLs)
      • native stack provides gestures from iOS and Android (↔ web)
      • Install native stack navigator library
        • npm install @react-navigation/native-stack
        • depends on react-native-screens
      • createNativeStackNavigator
        • returns object with 2 properties (which are components)
          • Screen
          • Navigator
            • Navigator contains Screen elements as its children, to define configuration for routes
      • NavigationContainer
        • component that manages navigation tree
        • contains navigation state
        • render at root of app (App.js)
          • must wrap all navigators structure
      • Use parameters when going to a route
        • Can this be a good option to pass the info through the steps? (See the sketch after this list.)
  • Horizontal scroll
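
A minimal sketch of how route params could carry the card info between screens (the screen names are mine):

```jsx
import { Button, Text } from 'react-native';
import { NavigationContainer } from '@react-navigation/native';
import { createNativeStackNavigator } from '@react-navigation/native-stack';

const Stack = createNativeStackNavigator();

function SenderScreen({ navigation }) {
  // Pass the card info along as route params when navigating.
  return (
    <Button
      title="Send"
      onPress={() => navigation.navigate('Receiver', { message: 'hello' })}
    />
  );
}

function ReceiverScreen({ route }) {
  // Read the params back out on the target screen.
  return <Text>{route.params.message}</Text>;
}

export default function App() {
  return (
    <NavigationContainer>
      <Stack.Navigator initialRouteName="Sender">
        <Stack.Screen name="Sender" component={SenderScreen} />
        <Stack.Screen name="Receiver" component={ReceiverScreen} />
      </Stack.Navigator>
    </NavigationContainer>
  );
}
```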

I'm struggling a bit to work quicker with React Native. Right now it's going very slowly.

Day 5 - 13/1/23

Coach meeting

  • Make a blog with short weekly overviews and clear description of the goal. (Will check if github wiki is okay).
  • Don’t try to do too much, 1 framework for native and 1 for web will probably be enough work already
  • Focus on the final article and research aspect.
  • GitHub Student Developer Pack for free Heroku credits

Setting up weekly blogpost + project overview

Working on React Native demo app

Some findings:

  • Extra steps are needed for scanning QR code:
    • you need a URL that is linked to your app, but this only works when you already have the app installed.

Day 6 - 14/1/23

Still working on React Native demo version of my app

Goals today:

  • pdf download
  • camera permissions
  • QR code scanning

pdf

I think the best way will be to use the expo-print package and its HTML-to-pdf conversion, but with added images for the card.
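
A rough sketch of that idea (assuming expo-print and expo-sharing; the HTML template is made up):

```js
import * as Print from 'expo-print';
import { shareAsync } from 'expo-sharing';

// Render the chosen image + message as HTML, convert to a pdf file, share it.
async function downloadCardAsync(imageUrl, message) {
  const html = `
    <html>
      <body style="text-align: center;">
        <img src="${imageUrl}" style="width: 80%;" />
        <p>${message}</p>
      </body>
    </html>`;
  const { uri } = await Print.printToFileAsync({ html });
  await shareAsync(uri, { UTI: '.pdf', mimeType: 'application/pdf' });
}
```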

General notes:

It seems like a lot of the time iOS needs some custom development.

Making a QR code scanner in react native

Camera permissions

What I achieved today:

  • I managed to add a pdf with the resulting QR code and chosen image, so that you can print the card. (It works without permissions; are they still needed to save the pdfs? I think there might be some stored permissions from another expo project, so I should revisit this maybe.)
  • I used camera permissions to access the camera.
  • I was able to scan the created QR code via the Camera object (expo-camera library) and show the chosen message and image. (A simplified sketch of this part below.)
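
A simplified sketch of the scanning part (the onScanned prop is my own; exact props depend on the expo-camera version):

```jsx
import { useEffect } from 'react';
import { Text } from 'react-native';
import { Camera } from 'expo-camera';

export default function Scanner({ onScanned }) {
  const [permission, requestPermission] = Camera.useCameraPermissions();

  // Permission starts out null on first load, so trigger the request then.
  useEffect(() => {
    if (!permission) requestPermission();
  }, [permission]);

  if (!permission?.granted) return <Text>No camera access</Text>;

  return (
    <Camera
      style={{ flex: 1 }}
      // data holds the payload encoded in the QR code
      onBarCodeScanned={({ data }) => onScanned(data)}
    />
  );
}
```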

Full demo:

sender-demo-react-native.mp4
receiver-demo-react-native.mp4

Some first findings:

Next steps:

  • Actually start implementing the AR functionality in my react native demo.
    • Will it rely on the same camera functionality? Do you need separate permissions for the camera usage?

Day 9 - 17/1/23

Coach meeting

  • Project overview is good. Maybe just a bit more explanation about marker-based AR. Also for your final article, so that people who don’t know anything about it can still understand.
    • If you find a good resource or blog, you can also just link to it, instead of writing it all out yourself.
  • Maybe put your repository on public, if you don’t mind your wiki being public.
  • For the problem with React Native routing to specific page with querystring: see it as a ‘nice-to-have’, if you have time left.
  • Maybe first make the web react demo skeleton, so that the ‘boring part’ is done and you can then just focus on AR itself for the rest of your time.
  • Don’t try to plan on using too many AR frameworks. See how much time you have.
  • If you can, add some choice for the AR design, not just text.
    • Good that you thought about it already. Just see what is possible when you are trying it out.
  • Next meeting: Friday. Next week: Wednesday online. Still need to check for final week with the Integration Juries of the first years.
  • I seem to be on track.

Life long learning: Next JS

I want to explore Next.js for the web version of my app. Since I have some experience with Nuxt for Vue, it should be quite similar to work with and should make things like routing a lot easier.

https://nextjs.org/learn/foundations/about-nextjs?utm_source=next-site&utm_medium=homepage-cta&utm_campaign=next-website

  • https://nextjs.org/learn/basics/create-nextjs-app

    • Some properties
      • Framework to build React applications.
      • page-based intuitive routing
      • pre-rendering (SSR, SSG)
      • Discord community
    • Setup
      • Need Node.js 10.13 or later
      • create-next-app
  • https://www.freecodecamp.org/news/nextjs-tutorial/

    • create-next-app app-name
    • Sets up a new project with this structure:

    image

    • Pages and styles folder

      image

    • Pages and routing

      • just make new file in pages folder

      • no need for react router library anymore

      • dynamic pages

        • wrap file name in brackets
        • Example: for filename [slug].js

        image

        • useRouter hook to access info about app location or history
          • e.g. get query parameters
      • Link component from 'next/link'

        • just use ‘href’ property to link to pages

          image

        • you can add query by passing object to href prop

          image

        • Push to routes via .push method of useRouter hook

      • SEO

        • use Head component from next/head
        • to add meta data
      • API

        • api folder for backend
        • e.g. for data fetching

Starting to create the demo skeleton in Next.js

Some first findings about web vs native:

  • Sharing info via page routing parameters is a bit more convoluted in Next.js than in React Native, because of the server-side rendering: you have to wait until the client side has loaded before you can access the query (see the sketch after this list).
  • For web, we will be able to easily create a QR code that goes immediately to our web app on the right page and reads the info hidden in the querystring. (So, technically, a custom QR scanner in our app is not necessary if the user already has a built-in QR scanner app. However, for a complete user flow, we will still add our own QR code scanner function on the receiver part of our app for the web version.)
  • In React Native, 'require(path)' for an image source was easily transferred via the query params. For Next.js, this did not transfer easily via the querystring, so instead I opted to use my public folder to store my assets, to have statically defined paths to my images, and just send the path string via the query to share the info between the pages.
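
A sketch of the waiting-for-the-router pattern I mean (page and param names are mine):

```jsx
import { useEffect, useState } from 'react';
import { useRouter } from 'next/router';

export default function Receiver() {
  const router = useRouter();
  const [message, setMessage] = useState(null);

  useEffect(() => {
    // On the first render(s) router.query is still empty; wait until the
    // router is hydrated on the client before reading the querystring.
    if (!router.isReady) return;
    setMessage(router.query.message ?? '');
  }, [router.isReady, router.query]);

  if (message === null) return null;
  return <p>{message}</p>;
}
```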

Day 10 - 18/1/23

Pdf generation/download in React

Camera permissions/QR code scanner

Trying to deploy the basic web demo

  • I will try with Vercel, since Next.js makes this very easy. (Vercel automatically uses the 'next build' process when it detects a Next.js project.)
  • Question:
    • I can link a github repo, but what should be deployed is in a subfolder of this repo… Can I do this?
    • https://vercel.com/blog/advanced-project-settings
    • It was actually very easy to do this, I just had to select a subfolder in the vercel set-up
    • Only thing I needed to look at was my npm install command:
      • I needed to override this with 'npm install --legacy-peer-deps' to not have any dependency tree issues with my packages
  • The result:
    • https://passion-project-dusky.vercel.app/

    • Chrome:

      • browser automatically asks for camera permission
      • If I refresh, it doesn’t need to ask again
    • Firefox

      • asks automatically:

        • gives the option to remember this choice or not
        • If you choose not to remember, it still remembers for a while, so on an instant refresh you don't need to give permission again

        image

      • testing in incognito mode:

      image

      • If you close your window and open in a new tab, it asks for permission again
      • In normal and incognito:
        • if you close window and open again, it asks again for permission
      • Maybe I need to add a message when permission is denied
      • If you open in 2 tabs at the same time, you need to give permission twice.
    • Safari

      • Asks for permission automatically

      image

      • Ask immediately again on instant refresh

Testing deployed site on mobile devices

Some native vs web findings:

  • Qr code generation/download seems to happen faster for native app
  • No download permissions/file access needed to download final pdf
  • On web: the camera permissions depend on your browser AND OS
    • chrome: will remember once you allow, even after refresh, but not when you close the window
    • Firefox: gives the option to remember the permission, otherwise it will remember on refresh, not on window close
    • Safari: asks again on every refresh
    • chrome on iOS: asks on every refresh
    • chrome on Android: same as on desktop
  • Chrome pdf download on iOS
    • since each pdf gets the same name right now, it overwrites the previously downloaded file. So maybe it's better to add some sort of timestamp, so that you could create more than one card and download them all.

Marker-based vs Marker-less AR

  • https://www.aircards.co/blog/markerless-vs-marker-based-ar-with-examples
    • Augmented Reality needs a trigger. There are several options for this trigger:
      • Marker-based
        • Uses designated marker to activate AR experience (e.g. QR code, logo, image)
        • Shapes need to be distinctive/recognisable for the camera to identify them in the environment.
        • AR experience is tied to the marker: displays on top of it and moves along with it.
      • Markerless
        • doesn’t use a marker
        • scans the real environment and places digital elements on recognisable feature
          • e.g. flat surface
        • not tied to a marker, but placement is based on geometry of objects.
        • e.g. pokémon go, product placement apps
      • Location-based
        • = GPS-based = Geo-based
        • depends on your physical location
        • used in travel/tourist industries
        • e.g. directional guidance, art installations in a city

Day 11 - 19/1/23

First AR experiments (webAR)

  1. MindAR

https://github.com/hiukim/mind-ar-js

MindAR:

  • Mentioned on the AR.js GitHub as a new open source AR library for the web, specifically for image tracking and face tracking.
| Features | MindAR |
| --- | --- |
| Open source | Yes |
| Price | Free |
| First release | 4/10/2021 |
| Last release | 16/12/2022 |
| GitHub stars | 1.4k |
| Documentation | https://hiukim.github.io/mind-ar-js-doc/ <br> The documentation doesn't seem all that big yet, but since this open source project is run by people who really seem to believe in making AR accessible for free on the web, the documentation that is there is very well structured and clear to read. <br> Not a lot of examples as of now. <br> Udemy course available: https://www.udemy.com/course/introduction-to-web-ar-development/?referralCode=D2565F4CA6D767F30D61 (€34.99) <br> No-code option for building face filters (MindAR Studio: https://studio.mindar.org/) <br> Platform for creating and publishing image tracking AR (Pictarize: https://pictarize.com/) <br> Info about picking good tracking images: https://www.mindar.org/how-to-choose-a-good-target-image-for-tracking-in-ar-part-1/ <br> Tool for compiling your image beforehand, to reduce loading time: https://hiukim.github.io/mind-ar-js-doc/tools/compile/ |
| Community | Fairly limited <br> Stackoverflow: https://stackoverflow.com/questions/tagged/mindar?tab=Newest <br> 6 questions, 2/6 answered <br> max votes: 1 <br> max views: 518 |
| Dependency on other frameworks | AFRAME |
| Integration in other software | three.js, AFRAME, plain HTML, React: https://github.com/hiukim/mind-ar-js-react |
| AR features | Image tracking <br> Face tracking |
| Package size | Image tracking and face tracking are independently built, to minimise package size. three.js and AFRAME support are also built independently. |
| Download options | HTML script tag, npm (depends on three.js or AFRAME choice) |
| Language | JavaScript |
| Underlying performance | WebGL (GPU) |
| Ease of use | No-code options <br> Choice between AFRAME or three.js <br> Pure HTML is possible <br> Based on AFRAME, but no knowledge of AFRAME needed to use it <br> React: does not work |
| Target images | Pre-compile your images to reduce loading time <br> Extract features <br> Possible to use multiple target images |
| Pre-required knowledge | Very limited; in some cases some basic HTML knowledge is enough. AFRAME knowledge is not necessary. |

NOTE: The Pictarize platform gives a very easy way to achieve a no code result of image tracking!

  • I tried this out, but it doesn’t seem to show the content on my chosen tracking image yet. Might just need to find a better way of placing the content?
  • You can use it for free, but with a watermark, and the link you get for your example is not permanent.

Simple code example in plain html:

```html
<html>
  <head>
    <meta name="viewport" content="width=device-width, initial-scale=1" />
    <script src="https://aframe.io/releases/1.3.0/aframe.min.js"></script>
    <script src="https://cdn.jsdelivr.net/npm/mind-ar@1.2.0/dist/mindar-image-aframe.prod.js"></script>
  </head>
  <body>
    <a-scene mindar-image="imageTargetSrc: https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/card-example/card.mind;" color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
      <a-assets>
        <img id="card" src="https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/card-example/card.png" />
        <a-asset-item id="avatarModel" src="https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/card-example/softmind/scene.gltf"></a-asset-item>
      </a-assets>
      <a-camera position="0 0 0" look-controls="enabled: false"></a-camera>
      <a-entity mindar-image-target="targetIndex: 0">
        <a-plane src="#card" position="0 0 0" height="0.552" width="1" rotation="0 0 0"></a-plane>
        <a-gltf-model rotation="0 0 0" position="0 0 0.1" scale="0.005 0.005 0.005" src="#avatarModel" animation="property: position; to: 0 0.1 0.1; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"></a-gltf-model>
      </a-entity>
    </a-scene>
  </body>
</html>
```

IDEA: create multiple target images, with bad to good features for image tracking. See how well the image tracking goes in the different frameworks.

Different print quality?

Idea: Clickable download/screenshot button on the AR image.

Creating/testing Tracking images

I wanted to see how the choice of tracking image will affect the quality of the image tracking. So, using the image compiler from mindAR ([compiler](https://hiukim.github.io/mind-ar-js-doc/tools/compile/)), I wanted to see how the features would be analysed from different versions of an image:

image image

Seems like the color doesn’t really matter all that much. But, the border around the image makes a big difference. Without it, it doesn’t have any bounding markers.

First experiment: MindAR with AFRAME

  • Download npm packages

    npm i mind-ar --save
    npm i aframe --save
  • Step 1: Static example for react

    • https://github.com/hiukim/mind-ar-js-react

      • Error: ‘self is not defined’

        image
        • Looking for solution:

        • New error:

          • document is not defined.
          • Does this mean it is a hydration issue again?
            • Let’s try the dynamic loading again
        • The dynamic loading seems to work, now there’s a new issue:

          image
        • The example does not seem up to date with the current version of the library

          • I can’t seem to solve the issue myself and the documentation/community is lacking as of now.
          • Maybe I will try a pure html example later on.
      • Final try: uninstall the npm mind-ar package and install the version that was used in the example (version 1.0.0)

        • It gives the same issue, so this is no help
      • I really tried to look into the structure of the elements that are given, but it seems that there are a lot of behind the scenes issues, that I have no control over. So, for now, I will not look into this library any further.

  2. AR.js

Since this is one of the free frameworks I wanted to try, I will start with this one, as I won't have to worry about any trial period running out of time.

https://github.com/AR-js-org/AR.js

AR.js:

  • lightweight library for AR on the web
  • Image tracking, location-based AR & marker tracking
| Features | AR.js |
| --- | --- |
| Open source | Yes |
| Price | Free |
| First release | |
| Last release | 29/12/2022 |
| GitHub stars | 4.3k on their new GitHub project (https://github.com/AR-js-org/AR.js), 15.7k on their old GitHub project (https://github.com/jeromeetienne/AR.js) |
| Documentation | Official documentation: https://ar-js-org.github.io/AR.js-Docs/ <br> GitHub: https://github.com/AR-js-org/AR.js |
| Community | Stackoverflow (https://stackoverflow.com/search?q=AR.js) <br> 500 questions <br> max 78 votes <br> max 39k views <br> Codesandbox examples (specifically react-three-arjs): https://codesandbox.io/examples/package/@artcom/react-three-arjs |
| Dependency on other frameworks | AFRAME, three.js |
| Integration in other software | Pure HTML, React, Vue, Next <br> React: wrapper for React, based on react-three-fiber: https://github.com/artcom/react-three-arjs |
| AR features | Image tracking <br> Face tracking <br> Location based <br> Marker tracking |
| Package size | Different build per option (three.js or AFRAME + type of AR tracking) |
| Download options | npm, CDN |
| Language | JavaScript |
| Underlying performance | WebGL, WebRTC |
| Ease of use | |
| Target images | https://github.com/Carnaux/NFT-Marker-Creator/wiki/Creating-good-markers <br> Visual complexity: more features to recognise gives a better result <br> Resolution <br> Physical marker: distance to camera, well-printed opaque colours on paper; on screens, consider the luminosity of the screen, the resolution of the camera, and the luminosity of the environment |
| Pre-required knowledge | AFRAME, three.js |

REQUIREMENTS/RESTRICTIONS OF AR.JS (https://ar-js-org.github.io/AR.js-Docs/)

Some requirements and known restrictions are listed below:

  • It works on every phone with WebGL and WebRTC.
  • Marker-based tracking is very lightweight, while image tracking is more CPU consuming.
  • Location-based AR will not work correctly on Firefox, due to the inability to obtain absolute device orientation (compass bearing).
  • On devices with multiple cameras, Chrome may have problems detecting the right one. Please use Firefox if you find that AR.js opens the wrong camera. There is an open issue for this.
  • To work with the location-based feature, your phone needs to have GPS sensors.
  • Read carefully any suggestions that AR.js pops up (as alerts) for location-based AR on iOS, as iOS requires user actions to activate geolocation.
  • Access to the phone camera or to the GPS sensors, due to major browser restrictions, can only be done on https websites.

Experiment with AR.js

  • npm install

    npm install @ar-js-org/ar.js

  • wrapper for react:
    • https://github.com/artcom/react-three-arjs
      • npm i @artcom/react-three-arjs (dependency warnings again)
      • problem: module not found
      • Again the same error as before: needs to access client-only properties → use dynamic wrapper
      • New error:

        image

        → again seems to be an error behind the scenes in the library

        • Maybe a camera_para.dat file is missing, as mentioned on the github
        • I downloaded the files from the sandbox example
          • No error anymore, but it doesn't show anything at the moment…
      • UPDATE: the codesandbox worked on my phone, scanning the screen, but not with my laptop using the webcam to scan the image on my phone.
        • So, maybe the problem was with using my phone to show the tracking image.
        • I will test, by deploying to vercel, whether it works with my phone.
      • Problem with z-index of video:
        • the video element with the camera feed doesn't show on my screen, since there is an automated z-index of -2, which puts the video behind my body element, which has a background color

FIRST BREAKTHROUGH!!

image

NEXT:

  • test with printed out version
  • test my own tracking image → how to create the pattern?

Making my own image markers for AR.js?:

image

  • Attempt 3:

image

None of these worked by just replacing the .patt file. So maybe something else is also needed? Or are the images just not good enough?

MARKER BASED vs NFT (Natural Feature tracking)

  • I noticed when trying to create my own markers that there is a difference between marker-based and NFT image tracking
    • Marker-based is much more restricted
      • you need a very thick border, very limited
    • NFT should allow any type of picture
      • but I haven’t found a way to include it via the react-three-arjs library

CONCLUSION OF TODAY:

This hasn’t been the most encouraging day to say the least. I have not really achieved anything I imagined.

  • The first library (mindAR) I tried did not have any working result in the end.
  • AR.js, via the react-three-arjs library, was finally working after a lot of trial and error, but only with the example given.
  • I tried making my own markers (.patt files) to replace in the react-three-arjs example, but none of them worked. I also realised here that there is a difference between marker-based and NFT image tracking. And so far I only had an example for marker-based, which is much more limiting in the images you can choose (for example, it needs a big border…).
  • There were a lot of errors behind the scenes in the libraries themselves.
  • Doesn’t work with my webcam + marker on phone.
  • Works on phone with marker on laptop.
  • React support seems very limited.

PLAN:

  • Try out the libraries directly in React, without the marker.
  • Try out plain html examples, without react.
  • Look into A-frame, as all of the examples so far rely on it.

Day 12 - 20/1/23

Let’s start fresh today.

What I want to do today:

  • Try an example with Aframe in React
  • Try the examples of yesterday in plain html
  • Try to use the nft instead of the marker images

Trying out the nft image tracking in React

Yesterday, I tried out the AR.js library in my React project. This didn't go as planned, and I only found an example with marker images, which is very limited in its design options (thick border, …).

So, I want to see if it is possible to go the NFT (Natural Feature Tracking) route. (https://ar-js-org.github.io/AR.js-Docs/image-tracking/) (https://github.com/Carnaux/NFT-Marker-Creator/wiki/Creating-good-markers)

To try this, I will see if I can use the AR.js with Aframe in React.

  1. Creating an nft image
  2. Test 1 of nft in react:
    • Packages
      • @ar-js-org/ar.js
      • aframe
  3. Error
  4. I have tried many ways to use this example https://github.com/FollowTheDarkside/arjs-image-tracking-sample, but nothing worked for me.

Going back to the hiro marker example with react-three-arjs

  • The built-in Hiro example works. However, the image tracking is very inconsistent. And it seems that if it loses the image, you need to 'retrigger' finding it again, for example by placing something in front of your camera and then removing it again to see the marker image. So, it feels like it needs an extra push to register the marker again.

  • The example with the given Hiro marker worked, but I still haven’t gotten my own custom marker to work with the example.

    image
  • Simply replacing the patternUrl with my own url is not enough

  • I found this example within an example of react wrapper for aframe

    https://codesandbox.io/s/react-ar-js-forked-q5xd3x?file=/src/App.js:336-487

    ```html
    <a-marker-camera
      preset="custom"
      type="pattern"
      url="patterns/mypattern.patt"
    ></a-marker-camera>
    ```

Note: I found it very annoying that I could only test with my phone if I deployed my code, since WebRTC needs https to run. So, I tried a way to run my localhost on https, so I can access it via my IP address on my phone over https.

I followed the steps from [this medium article](https://medium.com/@greg.farrow1/nextjs-https-for-a-local-dev-server-98bb441eabd7) and it worked!

Aframe basics

  • Since so many open source AR libraries seem to be based on Aframe, I feel like I need to get more familiar with some of the basics first before I can move further in this project.

Coach meeting

  • Maybe try basic vanilla js examples
  • Try AR.js without the wrapper
  • Maybe Aframe instead of AR.js
  • pure html
    • A-frame
  • mindar in html
  • AR js html
  • Aframe (custom markers)
  • Aframe vs AR js
  • look for basic ar.js tutorial on youtube

Starting over with MindAR in pure html

https://hiukim.github.io/mind-ar-js-doc/quick-start/overview

  1. The basic html example works!

  2. Using a different gltf model works as well!

  3. Trying to pick the model via the query string!

    • First problem: I want to select the model tag via 'getElementById', but it isn't loaded yet when 'getElementById' gets called, so we get null. (See the sketch at the end of this step.)

    • Final:

      • Preload all your model assets via the tag from the mindAR library
      • Pick the used model src (linked to one of the assets via id) based on the query string
      image
    • Deployment test:

      • I tried deploying the result via FileZilla, but when I went to the site, I got the weird error 'failed to launch', saying my device isn't compatible and to use Chrome for Android, but that is exactly what I was using to check it.
        • In the console I read: 'getGamepad will now require Secure Context'
      • Solution:
        • apparently it automatically went to http instead of https; over https it works fine!
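
A minimal sketch of the query-driven model selection, wrapped in a load handler so the assets exist before we query them (the element ids are my own):

```js
// Wait for the window load event so the <a-asset-item> models exist
// before getElementById runs.
window.addEventListener('load', () => {
  const params = new URLSearchParams(window.location.search);
  const model = params.get('model') ?? '0';

  // Assumes preloaded assets with ids "model-0", "model-1", ... and an
  // <a-gltf-model> entity with id "avatarModel".
  document
    .getElementById('avatarModel')
    .setAttribute('src', `#model-${model}`);
});
```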

    CONCLUSION: FIRST WORKING EXAMPLE!!!!

Day 13 - 21/1/23

More customisation for pure html MindAR example

  1. Using own tracking images

image

    The accuracy for both is quite good.

    • Color vs black/white doesn't matter
    • Shape doesn't matter
    • Easy change to a custom tracking image:

    ```html
    <a-scene mindar-image="imageTargetSrc: assets/custom/lego.mind" color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
    ```

    The only thing I needed to change was the imageTargetSrc, pointing it to the path of my own compiled images.
    
  2. Selecting the tracking images via the query

    • Works again! Very similar to how we did the 3D model selection via query.

  3. Tracking multiple images at once? (https://hiukim.github.io/mind-ar-js-doc/examples/multi-tracks) (https://hiukim.github.io/mind-ar-js-doc/examples/multi-targets)

    • I think the previous image selection can be done as well by using the targetIndex. If you compile multiple images at once, it should all be in the same file?
      • This works! If you compile multiple images at once via the image compiler of MindAR, you can switch between which target image is chosen by using the targetIndex property.

    ```html
    <a-entity id="target" mindar-image-target="targetIndex: 0">
      <a-plane src="#card" position="0 0 0" height="0.552" width="1" rotation="0 0 0"></a-plane>
      <a-gltf-model id="avatarModel" rotation="0 0 0" position="0 0 0.1" scale="0.005 0.005 0.005" src="#avatarModel" animation="property: position; to: 0 0.1 0.1; dur: 1000; easing: easeInOutQuad; loop: true; dir: alternate"></a-gltf-model>
    </a-entity>
    ```

    • Also possible to use multiple images at once and show all effects at the same time: via the maxTrack property

    ```html
    <a-scene mindar-image="imageTargetSrc: https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/band-example/band.mind; maxTrack: 2" color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
    ```
  4. Adding own text message overlay in 3D.

    • Since I allow for customisation by the user of which text message is shown on the mystery mail, I want to see if there is an easy way to add a textual element, based on your input.

    • I can’t find an example of some sort of text tag on their documentation site directly, but I know it should be possible, since they use it in their no-code pictarize example.

    • Found an example:

      ```html
      <a-text value="Portfolio" color="black" align="center" width="2" position="0 0.4 0"></a-text>
      ```
      • value is just the text you want to display?
      • I think this is an A-frame element, so let’s look for some more info there.
  5. Adding some UI

  • Tracking image preview to guide the user
    • You can customise the UI of the scanning phase
    • How?
      • You basically define an element with an id and reference that id in your uiScanning property.

        ```html
        <img id="example-image" class="hidden" src="https://cdn.jsdelivr.net/gh/hiukim/mind-ar-js@1.2.0/examples/image-tracking/assets/card-example/card.png" />

        <a-scene id="tracking-image" mindar-image="imageTargetSrc: assets/custom/multi.mind; uiScanning: #example-image" color-space="sRGB" renderer="colorManagement: true, physicallyCorrectLights" vr-mode-ui="enabled: false" device-orientation-permission-ui="enabled: false">
        ```
      • NOTE: You do need to add a class 'hidden' with display set to none; mindAR then knows what to do with the scanning UI (it hides it when it recognises the tracking image and puts your 3D assets on top of it)

        ```html
        <style>
          #example-image {
            width: 70vw;
            height: auto;
            opacity: .5;
            position: absolute;
            left: 50vw;
            top: 50vh;
            transform: translate(-50%, -50%);
          }
          #example-image.hidden {
            display: none;
          }
        </style>
        ```
  6. Adjusting the overlay images
    • I made sure the image that gets overlayed in AR is the image you had chosen as tracking image and keeps its own ratio.

      ```js
      // using multitracking to select the target through the index
      const $target = document.getElementById('target');
      $target.setAttribute('mindar-image-target', `targetIndex: ${trackingImage}`);

      const $exampleImage = document.getElementById('example-image');
      $exampleImage.setAttribute('src', `assets/img/${trackingImages[trackingImage]}.png`);

      const $image = document.getElementById(`image-${trackingImage}`);
      const ratio = $image.height / $image.width;
      const $overlayImage = document.getElementById('overlay-image');
      $overlayImage.setAttribute('src', `#image-${trackingImage}`);
      $overlayImage.setAttribute('height', `${ratio}`);
      ```
    • (Note: I needed to fix some image loading issues, by adding an extra window load event)

  7. Result

https://ava-mc.be/mindar-example/?model=1&image=0&message=this+is+my+message

mindar-example.mp4

Things to keep an eye out for

  • Which type of images work best?
  • How much battery is used.

Trying mindAR in react again

Since I have a working html example again, it should be possible to put this in react. So, let’s try this again.

  1. create-react-app
    • To try this step-by-step again, I will start by putting my working example in a new create-react-app project, to not have any issues with SSR yet.
    • https://www.npmjs.com/package/mind-ar
      • npm install mind-ar
      • trying the first example again
        • I get a lot of warnings, but no errors
        • The camera is strangely cut-off
        • no AR effect happens when scanning the image, but also no errors
      • Test: installing the same version of a-frame as my working example (1.3.0)
        • Result: no difference
      • I really can’t seem to figure it out.
      • There is an example of mindAR in Vue; maybe I can find some help there.
  2. Starting over:
    • Found a working sandbox example!
      • https://codesandbox.io/s/mind-ar-react-tt9wgq?file=/src/App.js
      • Download folder and just try to make it work on my computer.
        • It does, so why? Why didn’t my example work…
        • copying the files to my own project breaks it again?
        • Is it because of the package versions?
          • I broke my project trying to go back to older packages
          • Just changing version of aframe and mind-ar was not enough, so it must be something else. Maybe react version or node version?

SOME FINDINGS SO FAR:

  • mindAR is a great library for simple plain html apps
  • including it in React is really not easy. There is one example of it on the documentation site of mindAR, but this is outdated and does not work with the current versions of react/aframe/mindAR.

Day 15 - 23/1/23

Trying mindAR in old version of React

The only working example I found on codesandbox (https://codesandbox.io/s/mind-ar-react-tt9wgq) was made with older versions of the packages. And before I just start working from the existing example in the set up project, I want to try a last time to set up my own react project, with an old react version, and then incorporate the mindAR example I found.

https://stackoverflow.com/questions/46566830/how-to-use-create-react-app-with-an-older-react-version

I found what might have been the issue with the earlier project I tried to set up myself with an older version of React and the other packages: react-dom instead of react-dom/client.

  • No errors so far for the normal react project with older version of react
  • Now I want to use the older versions of aframe and mindAR to test the example from before.
    • aframe@1.2.0
    • mindar@1.0.0
      • No errors, but doesn’t show the AR effect yet. Some issue with THREE js:
        • THREE.WebGLRenderer: EXT_texture_filter_anisotropic extension not supported.
        • But this error only shows on mobile.
      • Try again with exactly the same dependencies as in the package.json file of the example
        • No difference
      • Final try: copying package.json and package-lock.json file directly from the codesandbox example
        • Still no AR
      • Adding the yarn.lock file as well (even though I use npm?) and reinstalling node_modules
        • Still nothing!
  • Trying to customise the exact code-sandbox example
    • It works, but I do not know why
    • What is the difference?
    • Time to test out the working html example I had before!
    • IT WORKS!
      • I added the querystring logic to pick your message, model and target image, via react-router-dom (see the sketch after this list)
      • I added my assets in the public folder
      • I made a component where I could customise the model, text and target image of my AR message, by just passing it through the properties of my AR component.
      • Made sure the ratio of the overlay image is adjusted, based on the image you choose.
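
Roughly how that wiring looks with react-router-dom v6 (ARMessage and its prop names are my own):

```jsx
import { useSearchParams } from 'react-router-dom';
import ARMessage from './ARMessage'; // hypothetical component from this demo

export default function ReceiverPage() {
  const [searchParams] = useSearchParams();
  return (
    <ARMessage
      message={searchParams.get('message') ?? ''}
      model={Number(searchParams.get('model') ?? 0)}
      trackingImage={Number(searchParams.get('image') ?? 0)}
    />
  );
}
```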

Starting over with AR.js

Just like I did with mindAR, I would like to start over with a simple AR.js example in plain html. So, let’s go back to the documentation of AR.js.

https://ar-js-org.github.io/AR.js-Docs/image-tracking/

  • NFT = natural feature tracking; gives the option to use full images instead of (Hiro) markers/barcodes.
  • https://carnaux.github.io/NFT-Marker-Creator/#/
    • link to create NFT markers from your own images
    • restrictions: image needs to be square
    • generates .fset, .fset3, .iset files
  • Basic html example from documentation does not work
    • device:error WebXR session support error: Cannot read properties of null (reading 'hasLoaded')
  • Codesandbox examples also do not work for me.

Trying marker based with AR.js

https://ar-js-org.github.io/AR.js-Docs/marker-based/

The nft image tracking does not seem to work for me, but since I had 1 working example with the hiro marker, I will see if this is still an option in plain html.

image

These ones worked:

image

It seems the thickness of the border plays the biggest role in whether the marker is recognised or not. It needs a border thickness ratio of about 0.5.

image

But, the contrast also seems to matter. These markers did not work, despite the .5 border ratio:

image

NOTE: I did notice a small syntax difference between AR.js and mindAR: the default rotation settings differ by 90 degrees. I had to rotate my model and text 90 degrees to get the parallel front view that was the default in mindAR.

Day 16 - 24/1/23

Trying the marker tracking with AR.js in react

As I got the plain html version to work with the marker tracking, I want to try again to get it to work in react (and Next.js).

  1. Separate create-react-app test

    First I will try to get the AR.js example I made in html into a new, separate create-react-app project, and then I will try to implement it in my existing Next.js app.

  2. Going back to the react wrapper I found previously

Since working out my html AR.js example, I noticed that the custom markers need to be constructed in the right way in order for them to work. So, I thought it would be a good idea to go back to one of the first examples I tried with the React wrapper package for AR.js (https://github.com/artcom/react-three-arjs), with a custom marker that I knew worked in my plain html example.

And… It worked!

But now, I need to see if I can still easily add the custom text and models, like from my example.
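If I read the wrapper's README correctly, the setup looks roughly like this (the marker path and the mesh content are placeholders):

```jsx
// Sketch based on the @artcom/react-three-arjs README; paths are placeholders.
import React from "react";
import { ARCanvas, ARMarker } from "@artcom/react-three-arjs";

const App = () => (
  <ARCanvas camera={{ position: [0, 0, 0] }}>
    <ambientLight />
    {/* any react-three-fiber content can go inside the marker */}
    <ARMarker type="pattern" patternUrl="/markers/my-marker.patt">
      <mesh>
        <boxGeometry args={[1, 1, 1]} />
        <meshStandardMaterial color="hotpink" />
      </mesh>
    </ARMarker>
  </ARCanvas>
);

export default App;
```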

COACH MEETING

  • Don’t loose too much time with AR.js anymore.
  • Blog is good. In the end, make a summary blogpost with all your conclusions, but you can reference to your more detailed blogposts, demos, more background.
  • Don’t loose sight of your research question.
    • You can mention the change in marker vs nft
    • Try to do native option as soon as possible
  • Make a planning for the coming days: you need to have enough time for your conclusion and presentation.
    • For presentation: go through your process step by step, mention most important findings, conclusions in short. (15min + 5min Q&A)
    • showcase video: Apparently there is also a showcase video. This is not a walkthrough, but more of a promo video.
  • Most important thing in the end: a nice blog with a summary of my findings, maybe post on medium
  • Focus on native vs web.
  • Next meeting: Thursday
  • Last meeting: very quick on Wednesday to go over presentation.

Day 17 - 25/1/23

Experimenting with native AR in React Native

There are 2 important SDKs for AR in a native context:

  • ARKit for iOS
image
    • Consent & privacy:
      • ARKit automatically asks the user for permission for camera usage the first time your app runs an AR session.
      • You need to display a message explaining why you are using the camera (the NSCameraUsageDescription entry in Info.plist).
      • For face tracking specifically: you need to specify what their face data will be used for.
  • ARCore for Android
    • https://developers.google.com/ar
      • Devices:
        • https://developers.google.com/ar/devices
          • the device must be running Android 7.0 or newer
          • They provide a list of all the supported devices. My own Android is not on the list. They do mention that emulators can run it, but only with the rear camera.
          • Support even depends on where you are: for example, Android devices in China are handled differently.
image
          • Some ARCore services are also available for iOS devices (only Cloud Anchors and Augmented Faces)
            • The devices need to be ARKit-compatible with iOS 11.0 or later

Viro for React Native

I found this library to work with React Native. However, I am using Expo to build my React Native project, and apparently there are some limitations.

Something weird: when looking at articles that reference Viro React, the links no longer work. The main Viro Media website is not in use anymore…

START FROM SCRATCH

First I will make sure I can at least test it:

Setting up a Virtual Android Device

https://reactnative.dev/docs/0.68/environment-setup

  1. Install Android Studio

    Already a problem: The Android Virtual Device is unavailable

image
Trying this solution: https://github.com/flutter/flutter/issues/118502#issuecomment-1383215722

  • Still unavailable

https://www.reddit.com/r/AndroidStudio/comments/108axki/in_android_studio_setup_wizard_it_says_android/

  • Trying to just continue and see if I can install it later

image

  • Seems like the Android Emulator is installed, so let's see if I can continue
  2. Install the right SDK Tools and Platforms
  3. Open the android folder in your React Native project
  4. Create a new Android Virtual Device
    • Pick a phone that should support ARCore
  5. Try to run the Viro app I created on the Android Virtual Device
    • Immediately get an error:
image
  • Trying again: first start the app from my terminal and then run it again in Android Studio

Starting from scratch with ViroReact

Something weird I already noticed: the documentation page that articles often link to is no longer in use.

Trying the Quick Start Guide

I finally found the quickstart guide I was looking for. Let’s try it out!

https://viro-community.readme.io/docs/quick-start-maclinux

  1. Homebrew, Node, Watchman
  2. React native cli, react viro cli
  3. The same step as before, asking to use the Viro Media App, but I don't have that type of device. So, I want to use my Android Virtual Device.

UPDATE: https://viro-community.readme.io/docs/set-up-android-studio-with-viroreact

According to the documentation, it is not currently possible to use an Android emulator with Viro React.

Testing the Android Virtual Device with my previous Expo React Native example

Let’s see if there is still an issue here.

  • It seems to be working. I needed to open Android Studio manually for it to work. It downloaded the Expo Go app on the virtual device. But it is running very slowly. My computer can’t really handle it…

Trying again with a supported Android Device

The Android Virtual Device route seemed to be too heavy for my computer. Luckily, I was able to borrow a phone from a friend that is on the list of ARCore-supported devices (OnePlus 6T).

But the device has not been used in a while, so I will need to wait until the battery has charged enough.

Viro React with a supported device

Let’s try to continue with the Viro React Example now that I have an Android Device that should support ARCore.

  • Download the Viro Media App
  • Now I do get an ngrok link, but I get an error on my Android device when trying to open it in the Viro Media App
    • I am guessing it has to do with the restrictions of eduroam
      • I tried with my hotspot, still the same issue
    • The error in the log references this issue:

Customizing the Viro React Starter kit project

Now that we finally have a working example with Viro React, let's try to create our own AR functionalities with it!

https://viro-community.readme.io/docs/tutorial-ar

https://viro-community.readme.io/docs/image-recognition

  • According to them, you can use any png or jpg for your image recognition
  • https://viro-community.readme.io/docs/viroartrackingtargets
  • What I noticed when looking at the App.js file: they are using the class component structure, which is not what I am used to. Will I be able to modify it easily? Maybe it is a better idea to start with a project of my own and add Viro to it afterwards, so I can use the function component structure. (A sketch of the image target setup follows below.)
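Based on the ViroARTrackingTargets docs, registering a plain png/jpg target and hanging content on it should look roughly like this (target name, paths and sizes are my own placeholders):

```jsx
// Sketch based on the Viro docs; target name, image and sizes are placeholders.
import React from "react";
import {
  ViroARScene,
  ViroARImageMarker,
  ViroARTrackingTargets,
  ViroText,
} from "@viro-community/react-viro";

// Register the target once: a plain png/jpg, no pre-compiling needed.
ViroARTrackingTargets.createTargets({
  myTarget: {
    source: require("./assets/my-target.png"),
    orientation: "Up",
    physicalWidth: 0.1, // real-world width in meters, helps the tracking
  },
});

// The scene reacts to the target being found and hangs content on it.
const MyARScene = () => (
  <ViroARScene>
    <ViroARImageMarker target="myTarget">
      <ViroText text="Hello!" scale={[0.1, 0.1, 0.1]} position={[0, 0.1, 0]} />
    </ViroARImageMarker>
  </ViroARScene>
);

export default MyARScene;
```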

Trying to incorporate Viro in an existing React Native (Expo) project

I found an interesting link: maybe expo is still an option:

https://viro-community.readme.io/docs/integrating-with-expo

  1. Copy my react native demo app

  2. Install viro package

    npm install --save @viro-community/react-viro

    → needed --legacy-peer-deps again

  3. Add a plugins section to app.json:

    "plugins": ["@viro-community/react-viro"]

    Specifically for AR on Android:

    "plugins": [
      [
        "@viro-community/react-viro",
        {
          "androidXrMode": "AR"
        }
      ]
    ]
  4. Configure the Android AndroidManifest.xml

    • Pre-build the android folder
    • Change the manifest file
  5. To test: I need Android Studio to connect with a hardware device to automatically run my Expo app on Android

  6. Try to run android build → gradle error

Trying again

  • I see now that I apparently skipped over an important step in the Expo docs:
    • https://docs.expo.dev/workflow/customizing/
    • We need to use development builds
      • npm install -g eas-cli
      • npx expo install expo-dev-client
      • Create and install EAS builds
        • eas build
          • Need Expo account
          • Select a platform: I will choose Android
          • I’m waiting in the ‘Free Tier Queue’, which takes longer than the premium one…
          • build failed…
          • It happened at the install dependencies point
image
            • My guess: the dependencies of the packages → can I do the build with --legacy-peer-deps?
              • https://github.com/expo/eas-cli/issues/1545
              • Other people have this issue. I'm trying this suggestion:
                • You can add a `.npmrc` with `legacy-peer-deps=true`
                • It already brings the build process further than it was before.
                • It is taking a really long time
                • A really long time….
                • Is that a good sign?
            • Need to build again in the .apk format!!!!
              • So, let's wait for this again…
              • It finally built, and I could finally install it on Android. But there is an error:
                • https://stackoverflow.com/questions/50530889/gradle-sync-failed-cause-compilesdkversion-is-not-specified

Revisiting the ViroSample project

I got stuck trying to get ViroReact working in my existing expo project. So, I decided to go back to the ViroSample project and see where I would end up.

The project is written in a class component structure. However, to kind of circumvent this, I made my own AR component, and rendered that one inside their main component.

The image tracking is starting to work. I even noticed that the target images can be plain PNGs or JPGs, which gives the added option of having the user upload their own target image for tracking purposes. No pre-compiling is needed.

GLTF is not possible for 3D models, but converting to GLB works.

A positive note: when the image is partly out of frame, the things you put on the image are still shown; the tracking continues.

Configuring the text was more difficult than it should have been.

I added some customisation for my message in the Viro project. It took a while, but this was partly because of the class component structure I was not used to, the ViroSceneNavigator object which was new to me, and some React Native specifics that differed a bit from Expo.
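A rough sketch of that wrapper idea: a function component that renders the ViroARSceneNavigator and passes the customisation down via viroAppProps (names are illustrative, not my exact code):

```jsx
// Sketch only: a function component wrapping the class-based sample.
import React from "react";
import { ViroARSceneNavigator } from "@viro-community/react-viro";
import MyARScene from "./MyARScene"; // hypothetical scene component

const ARMessageApp = ({ message, modelSrc }) => (
  <ViroARSceneNavigator
    autofocus={true}
    initialScene={{ scene: MyARScene }}
    // viroAppProps is how the customisation reaches the scene
    viroAppProps={{ message, modelSrc }}
  />
);

export default ARMessageApp;
```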

The result of today:

Viro-React-Experiment.mp4

When I compare it to my previous web AR results, I find that the tracking goes more smoothly, and the AR effects 'stay in place' even when the tracking image is only partly in view, or not in view of the camera at all. If the image is not in view yet, but the AR effects would in reality still be visible due to their size, you still see them. This provides a more immersive feeling of the AR.

comparison-arjs-markers.mp4

If you look at the tracking here, it is rather instant as well, but the AR effects disappear once the markers are half out of view.

comparison-mindAR.mp4

Looking back at mindAR, the tracking still seems to happen when the image is partly in view, but due to the lagging it does not look smooth. All effects are also lost when the image is completely out of view.

Experience of today: native AR

Today was a frustrating day, to say the least. I want to say the following about my first experience with native AR:

  • Developer experience:

    The overall experience as a developer working on a native AR application is a lot more restrictive. One of the biggest things I noticed was the limitation on testing. Native AR is very dependent on your type of device, and sadly the Android device I own, for example, is not fit for ARCore. I tried setting up an Android emulator, which was in itself a rather complicated task. Furthermore, the Viro React project I wanted to run cannot even work with an emulator, as mentioned in their documentation.

    The iOS simulator does not allow camera usage, so iOS testing without a device is also out of the question for me.

    Eventually, I managed to get my hands on an Android Device that does support ARCore.

  • Viro React specifics

    • Once I finally had a supported Android device, I was able to run the starter Viro React project via their Viro Media App. This has a built-in VR and AR project. However, when I wanted to start customising it, the project structure was very overwhelming and it was written in the class component structure of React, which I am not experienced with.

    • I found in the documentation that it should be possible to add Viro React to an existing Expo project. I am still in the process of figuring it out, but in general, it required a lot of extra steps. It also requires you to make development builds, outside of the expo set-up. This takes a long time to just be able to test it out on your device.

    • Finally got some results starting from the initial Viro React project

      What I noticed is that for the first time, no pre-processing of the image targets is needed. You can just use a png or jpg to use as image tracker.

    • Tracking continues out of frame!

      • This is nice when you are still in close proximity to your image, but can behave weirdly, continuing to show the assets, even though the target image is nowhere to be seen anymore.

Category: Development process

  • Testability: very difficult for native
  • For web: very easy

Day 18 - 26/1/23

Trying to get Viro React working in a new expo project

There is immediately a dependency tree conflict when installing Viro React in a new Expo project.

  • I will use an older version of react-native, to match the version in react-viro.
  • I also needed to use a lower version of react
  • Now the react-viro package installs without dependency issues
  • But now I get a warning from Expo that it might not work anymore, and indeed, the app is no longer viewable in my Expo Go app.
  • So, let's go back
  • Install react-viro with --legacy-peer-deps
  • Now let's try to go to a development build of Expo again, as mentioned in the Viro docs

Overview of Viro React

Wrote this out on the Project Overview page.

  • Fewer options for 3D model types (only .obj & .glb)

Some native AR research

I just wanted to see if I could find some explanations of why AR performance might be better on native:

https://www.agora.io/en/blog/comparing-web-ar-vs-native-ar/

→ why it is probably smoother: native has access to an 'AR camera' → it handles the augmentation at the operating system level, while the web renders it on top of the OS, so you get some computational lag

→ for apps you can limit beforehand who can install it → if a device isn't compatible, you can make sure they can't download it, while with web you need to 'disappoint' them

→ native is optimised with the OS

basic testing app was +- 200MB to download…

https://www.framepush.com/2021/09/native-ar-versus-web-ar-which-is-for-me/

  • native runs directly on the CPU
  • has access to the full GPU
  • can use specific hardware functionalities from specific platforms

https://www.softwaretestinghelp.com/webar-vs-native-ar/

  • Game engines (like Unity and Unreal) play a big part in native AR
  • Unity: C#

Trying Unity tutorial?

Since I have some time and space left, I want to try out an example in Unity. Most articles about AR development mention that it started with the 3D game engines, like Unity. There seem to be 2 big players: Vuforia and AR Foundation.

Tutorial on youtube by Playful Technology

https://www.youtube.com/watch?v=gpaq5bAjya8

  • Tutorial for AR Foundation
  • Self-contained
  • Not dependent on Vuforia (3rd party) anymore
  • Uses the native AR functionality provided by the manufacturer
  • The tutorial works with the 2022 version of Unity, so I will install the 2022 version of Unity
    • For Android: include Android build support
    • Downloading takes a while and a lot of storage
  • Create a new AR core project
    • Can’t open it
    • Maybe it’s because of the Unity Hub version, there’s a new version available.
    • Let’s try again and hope it works…
    • Nope…
image
  • People with the same issue:
    • https://forum.unity.com/threads/fail-to-open-project-from-unity-hub.812067/
  • I will just try again. I signed into Unity now, maybe that was the problem…

    • It’s doing something more already
    • It opens!
  • Check and modify settings

    • Edit > Project Settings
      • XR-Plugin management
        • In our case: targeting an Android build
  • Apparently I made the new project with the 2020 version, where I could not select Android

    • Try to convert to 2022 version… Maybe that was the issue before… Hopefully not…
  • It doesn’t work with converting, stays in 2020 version. So let’s try… again…

  • Let’s try with the 2020 version… And add the Android Build options…

    • Let’s see where we get with this version
    • Project Settings:
      • Android: ARCore
      • Initialise on startup
    image
    • ARCore specific:
      • Require → when an app relies on AR, set it to 'required'; if AR is just an optional/additional part → 'optional'

        image
  • Player settings:

    • company name
    • app name
    • icons
    • Graphics:
      • remove OpenGLES2 → not supported in the future
    • Minimum API Level 24
      • Needed for ARCore
    • Scripting backend: IL2CPP
    • 64 bit build:
      • ARMv7 and ARM64 (Required for Google Play)
  • Check packages

    • Window > Package manager
  • Unity crashed….

  • Trying again

    • Update packages to most current version

It seems like it will take longer than I thought, so I will leave it for tomorrow.

Small note

Every time I search something about AR, even in a native context, the first thing I see is an advertised link for 8th Wall, so I do still want to try it out and see what all the fuss is about…

Taking a quick look at 8th wall

  • https://www.8thwall.com/

  • Starting a free trial

    • I do need to add my card info…
  • https://www.8thwall.com/docs/

    “8th Wall enables developers to create, collaborate and publish WebAR experiences that run directly in a web browser.”

    • Javascript + WebGL
    • Simultaneous Localization and Mapping (SLAM) engine, hyper-optimized for real-time Web AR on browsers
    • World Tracking, Image Targets, and Face Effects
    • 8th Wall Cloud Editor
    • Built in hosting
    • Can be integrated in
      • Three js
      • Aframe
      • PlayCanvas
      • Babylon js
    • Requirements:
      • WebGL (canvas.getContext('webgl') || canvas.getContext('webgl2'))
      • getUserMedia (navigator.mediaDevices.getUserMedia)
      • deviceorientation (window.DeviceOrientationEvent - only needed if SLAM is enabled)
      • Web-Assembly/WASM (window.WebAssembly)
      • https → for camera access
image

https://www.8thwall.com/docs/web/#quick-start-guide

  • Creating a work space
  • Activate public profile
image
  • start a new project
    • Unlimited Demo projects possible
    • commercial: need commercial license
    • I will try the image target museum 8th wall template
image
    • Clone the project
    • Runs in a built-in editor on the 8th Wall site (Cloud Editor)
    • I have the feeling there is not enough storage on my laptop to run the example…

Idea → add a 'device range' category to my current tables

Day 19 - 27/1/23

Unity Tutorial

  • Already in the setup:

    • Light
    • AR origin
      • To map real world objects and virtual objects in the scene together
      • other scripts:
        • Plane manager, anchor manager, Raycast manager, anchor creator
        • plane manager: detects horizontal or vertical surfaces → for markerless AR
        • Raycast: determine intersection of those planes at certain distance
        • Anchor: physical point in space tracked by app
    • AR session
      • script: manages overall lifecycle of AR application
        • Needs to be attached to every object in your AR app
  • For our app, we don't need the plane, raycast and anchor managers, so we remove them. Only the AR Session script stays.

  • Add new component:

    • AR Tracked image manager:
      • Needs library of images to track

      • Create in assets

        image
      • Add images to it (just jpg, png…)

      • You can specify the size to look for in the real world, which will make detection a bit better

      • Drag the image library to the AR Tracked Image Manager

      • Can add multiple images

  • Add script to add something when image is tracked

    • Add component (in AR Session Origin)
      • New script: PlaceTrackedImages
        • Open in Visual Code
        • Use the ARFoundation and ARSubsystems namespaces
        • Global variables
          • reference to AR tracked image manager
          • List of Gameobjects
            • Game object: bundled assets: model + maybe scripts that affect its behaviour
            • can be 2D texture to a quad, can be animation, 3D model, animated model
            • each element in array corresponds to one of the images in the reference library that is being tracked
            • give them the same name as the image being tracked
          • Dictionary:
            • keyed array
            • of all of the prefabs created
        • Functions
          • Awake:
          • OnEnable
            • event listener for tracked image change event (on tracked image manager)
              • when new image from references is detected in the scene
              • or old one has left the scene
              • or moved
            • when event is enabled, we will attach our function that handles the change event (OnTrackedImageChanged)
          • OnDisable
            • remove the event handler again
          • OnTrackedImageChanged
            • the event handler itself
            • When new image is detected:
              • loop through ‘added’ array from event arguments
                • look for corresponding Game object and if it has not been created (instantiated) yet, attach it to the tracked image and add it to our array of instantiated objects
            • When image is updated
              • set the prefab to the right tracking state
            • When item is removed, not able to be tracked anymore → left the scene completely
              • destroy the prefab and remove it from our array
  • Adding prefabs

    • 1 per tracked image
    • Assets > create > Prefab
      • Double click to edit
      • Add all kinds of content you want
        • 3D model:
          • 3D object
            • set the size it should have in the real world
        • drag to array of prefabs
  • Deploy to phone and test it

    • File > Build settings
    • Android
    • Add our scene
    • Switch platform
    • If connected with USB: build and run
  • Error while building…

    • A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade See the Console for details.
    image

https://stackoverflow.com/questions/69776130/how-to-fix-gradle-build-failed-on-unity

Could be that I do not have ‘gradle’ installed on my computer, whatever it is…

I’m wondering if that could have been the problem before with Viro React as well?

References from Playful Technology:

Trying again with another version of unity

In the meantime: 8th wall

While I am having some troubles with building for Unity and waiting for another version to download, I will look back at 8th wall.

I noticed that Chrome wouldn't load the template in 8th Wall's Cloud Editor, so I tried it with Firefox and this works. I've had this issue before with in-browser web-editor elements not loading correctly in Chrome (for example with Expo). I'm not sure why.

Looking into some of the templates of 8th wall, you can choose A-frame or three.js, and some others as well.

Since I have made some experiments with the A-frame syntax and like the clarity of it, I will stick to this version. But it is nice to know I have some options.

There is a template specifically for React, but I want to have a bit more control. I would like to work on my own project, and have it on my github, instead of working inside their editor.

On their github, they try to explain how:

https://github.com/8thwall/web

Another article (about babylon.js, but I’m guessing the principles will be the same)

https://medium.com/8th-wall/babylon-js-8th-wall-integration-the-full-tutorial-7ed6a56fa168

It might be possible to add your own packages in the Cloud Editor:

https://www.8thwall.com/blog/post/89540744369/introducing-8th-wall-modules

Trying to solve Unity issues

https://forum.unity.com/threads/could-not-find-upm-executable-at-path.974862/

  • Trying a last time
  • At least the new version opens again
  • Now let’s see if it will finally build…
    • Still same error…
    • A failure occurred while executing com.android.build.gradle.internal.tasks.Workers$ActionFacade

https://stackoverflow.com/questions/70917420/unity-error-on-m1-mac-failed-to-read-key-from-keystore-invalid-keystore-for

Might be a problem with the debug keystore

It finally worked when creating my own keystore…

FINALLY

Conclusions:

  • Tracking is rather good, no lagging. But when the image stops being tracked, the 3D elements disappear as well.
  • Let's see if this is because of my script?
    • I tried 3 variants that I thought could influence this
      • Inside the 'removed' tracked images event:
        1. Destroy the prefab
image
        2. Do nothing
        3. Set the prefab to inactive
image
    • All options had the same result: the 3D elements disappear completely when the image is not being tracked anymore.
  • Option to specify real-world dimensions

Viro also mentions real-world dimensions → this might have made for better tracking

Video of Unity experiment

unity-ar-experiment.mp4

Day 20 - 28/1/23

8th wall

Today, I want to try the following things with 8th wall:

  • Basic image tracking set-up
  • Use the React template in their Cloud Editor
  • Use 8th wall in my own local example
  1. Basic image tracking with 8th wall
  • There are a lot of templates ready to use
  • The Cloud Editor makes the project a self-contained whole.
  • Updating a template:
    • just change body.html for the main content
  • Starting tutorial from the documentation:
    • https://www.youtube.com/watch?time_continue=3&v=-iAhNh_qD9I&embeds_euri=https%3A%2F%2Fwww.8thwall.com%2F&feature=emb_logo
    • Logs from testing on mobile device are visible on console of Editor
    • Can preview 3D models in the editor and see scale
    • head.html, app.js, body.html → loaded in this order
    • hot reload connected devices
    • built-in 'git'-like system → land changes, see what has changed and add a comment
    • When published: QR code → points to short link that stays the same, but the URL they redirect to can be modified at any point, so you can already share the QR code and make changes later
    • still access to landed code after free trial ended
    • slack community

Let’s try out a simple image tracking example

  • Started from the ‘Endless image Targets template’ in A-frame

  • Basic structure

    image

    • head.html → add the necessary packages

      <meta name="8thwall:renderer" content="aframe:1.1.0">
      <meta name="8thwall:package" content="@8thwall.xrextras">
      <meta name="8thwall:package" content="@8thwall.landing-page">

    • app.js → register A-frame components

    • body.html → actual content

  • there’s an image target section, where you can add your own targets

    • Adding
    • Easy viewing for testing
    • Options for adding cans or cones as objects to track
    • Gives tips about which image targets work best
    image
image
  • Can’t really choose the dimension ratio
  • You can test them on the spot to make changes if needed
image
8thwall-target-testing.mp4
image

There's an option to 'auto' use targets: then you don't need to declare anything in your code, you can just use the target's name. If you need more than 5 targets at once, you will need to add them explicitly in your code, as sketched below. (https://www.8thwall.com/docs/web/#changing-active-image-targets):

image
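If I read the docs correctly, explicitly activating more targets is done along these lines (the target names are placeholders for names defined in the project console):

```js
// Sketch based on the 8th Wall docs on changing active image targets;
// the names are placeholders for targets defined in the project console.
XR8.XrController.configure({
  imageTargets: ['target-one', 'target-two', 'target-three'],
})
```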
  • Let’s try a more basic example of image-tracking

    • Adding gltf models:

      • Easy preview!
      image
    • creates a bundle out of the model's assets that belong together

  • Test: when trying out the same tracking images I used with mindAR, I notice a difference between different Android phones. Surprisingly, the issues were with the newer phone.

    • The mindAR example is also more difficult on the newer Android, however it still tracks in some instances, while the 8th Wall example doesn't for some reason. It seems to me like a bit of an autofocus problem.

Comparison videos:

Result of 8th wall image tracking demo with customisation via querystring:

https://avamc.8thwall.app/image-tracking-basics/?message=this+will+be+your+message&model=3&image=0
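The customisation itself is plain querystring parsing; in app.js it could look roughly like this (the parameter names match the demo link above):

```js
// Sketch: reading the customisation from the querystring, as in the demo URL.
const params = new URLSearchParams(window.location.search)
const message = params.get('message') || 'this will be your message'
const model = params.get('model') || '0'
const image = params.get('image') || '0'
// ...use these values when setting up the A-frame entities and target.
```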

Tracking image to test:

image
8thwall-full-demo.mp4

Some conclusions about 8th wall so far:

  • Image tracking
    • Image tracking itself is rather accurate
    • There is a very easy UI for adding image targets in their Cloud Editor
    • jpg and png uploads
    • 8th wall processes them for you
    • Some limits to amount of tracking images
    • The ratio is fixed, so it will take part of an image; you can only choose portrait or landscape mode.
    • The images themselves need to be of sufficient quality. For example, compared to mindAR, the question mark image worked less well. On another phone, it did not track at all.
    • Tracking stops when the image is out of frame and the objects disappear. No immersive effect, like with ViroReact.
  • Prerequisite knowledge:
    • Very simple templates can be used without much prior knowledge
    • You need some basic a-frame and/or three.js or babylon.js knowledge to really start creating your own thing

devices: does not seem to work on desktop

8th Wall is a very self-contained framework

→ this can be nice, but it sometimes took a while if you wanted more custom functionalities

  • If you only need the very basics, very limited A-frame knowledge is required; for more custom features, you need to rely on more knowledge.
  • Very extensive documentation; however, if you are stuck with a specific problem, you won't find much on the known public platforms, such as Stack Overflow…
  • hosting via their own platform

Documentation

Ease of use

  • tutorials
  • examples
  • cloud editor
  • Easy interface and structure, very customer minded

Price:

  • Starts at $12 a month

8th wall pros:

  • Very extensive documentation and very customer focused framework
  • pro/con: self-contained platform
  • The Cloud Editor has a lot of simplifying features, especially for adding image targets or previewing 3D models with a scale reference!

8th wall cons:

  • Very dependent on their system
  • Price

General thoughts/summary of ALL the frameworks I tested

ViroReact

PROS:

  • Best image tracking, no lagging
  • Immersive feeling: digital elements stay in environment, even when tracking image out of view.
  • image targets can be regular jpg/pngs
  • Free preview/testing app
  • Easy starting project

CONS

  • Very limited devices
  • Hard to integrate in your own project
    • Build problems
    • No auto-linking for React Native
  • Difficult testing process as developer
  • Some outdated documentation links, contradictory info in referencing articles

mindAR

PROS:

  • simplest structure in plain html
  • easy and clear assisting scanning UI (preview image to track,…)
  • free
  • Almost no prior knowledge needed

CONS:

  • precompiling of image targets
  • most lagging, least performant image tracking
  • Hard to integrate in other frameworks, such as react
  • Limited documentation, very few examples
  • Small community
    • Very bad google-ability of problems

AR.js

PROS

  • free
  • the only one I successfully integrated in a React project
  • large open-source community

CONS:

  • most limited image tracking
    • tracking already stops when the image is half out of view
    • only markers worked for me → very strict image format
    • preprocessing of images needed
  • Bad documentation, non-working example demos
    • Confusing
    • very limited

Unity AR Foundation

PROS

  • Easy tutorials
  • Large community
  • Owned by Unity itself
  • Google-ability of problems

CONS:

  • limited devices
  • C#
  • Some build problems with Unity itself

8th wall

PROS

  • Self-contained customer-aimed platform
  • Very extensive documentation
    • Lots of tutorials
    • templates
  • Very limited lagging
  • Easy to use editor
    • Image target processor
    • 3D model previewer

CONS

  • Self-contained platform:
    • more steps required to add to own set-up
    • For more customisation, you need to understand more about their underlying structure and A-frame
  • Price
  • Target images more restricted than mindAR

Trying to incorporate into Next.js

I believe the easiest way will be to use an iframe to include another 8th Wall-based page in my existing project.

https://www.8thwall.com/8thwall/inline-ar
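A minimal sketch of such an embed, assuming the published demo URL from above; the allow attribute is needed so the embedded page can access the camera and motion sensors:

```html
<!-- Sketch: embedding an 8th Wall page via an iframe; the allow attribute
     grants the embedded page camera and motion sensor access. -->
<iframe
  src="https://avamc.8thwall.app/image-tracking-basics/"
  allow="camera; gyroscope; accelerometer; magnetometer; xr-spatial-tracking"
  style="width: 100%; height: 100vh; border: 0;"
></iframe>
```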

Local projects:

https://github.com/8thwall/web/tree/master/serve

Disappointment: https://www.8thwall.com/docs/web/#start-a-new-project

  1. Select Hosting Type (Pro/Enterprise plans only): Decide up front if the project will be hosted by 8th Wall and developed using the 8th Wall Cloud Editor, or if you'll be self-hosting. This setting cannot be changed later. Self-hosting is only available to paid Pro/Enterprise workspaces. Self-hosting is not available to workspaces on Starter or Plus plans, or workspaces on the Pro plan during the free trial period.

I can’t use self-hosting to try out my project locally.

Day 22 - 30/1/23

Today I want to close off the research part and write the final article that I could possibly put on Medium.

Things I will try to do today:

  • Write out the article
  • Add links
  • Write out some explanation on my github README to make the examples clear
  • Add demo links with target image
  • Add videos in structured way to blog

Day 23 - 31/1/23

Planning for today:

  • Finish Medium Article
  • Prepare presentation
  • Finish up deliverables
  • Print out example cards
  • Coach meeting
  • Creating showcase video

Coach meeting

  • Don’t undersell yourself in your Medium article.
  • Good structure of the article, good to add some humour sometimes.
  • Maybe add link to Devine site or your own portfolio.
  • Presentation is well-structured and clear. Maybe leave out live demos. Add them as a link in the end during Q&A.
  • The project has come together as a clear whole.

Video is finished:

avaMirzaeeCheshmeh_personalpassionproject_showcasevideo.mp4

Day 24 - 1/2/23

The final day. Today I will just finalise my files to hand in and prepare my presentation for tomorrow.
