And the Linux / Unix-specific ecosystem & technology arguments therein.
Appreciate the additional context! Thankfully, I haven’t needed to use the SafetyNet module with microG either.
I appreciate that you’re trying to inform me but if you make such a claim, you should be able to prove it.
A friend was able to provide some context, regardless:
The one binary I’m aware of microG downloading (assuming it still does) is the SafetyNet “DroidGuard” thing, which it only does if you explicitly enable SafetyNet, which is not on by default. There is no other way to provide it.
microG only has privileged access if you install it as a privileged app, which is up to you / your distribution, as microG works fine as a user app (provided signature spoofing is available to it). Also, being privileged itself really doesn’t mean giving privileges to “Google”.
Apps needing Google services may indeed contain all sorts of binaries, generally including Google ones, which doesn’t mean they contain Google services themselves. Anyway, they are proprietary apps and as such will certainly contain proprietary things, and it’s all up to you whether to install them or not. It’s not like microG includes them.
It’s also just a reimplementation of a small handful of useful Google services, such as push notifications or Maps (not the spyware stuff like advertising), and each can be toggled on/off.
Also, all apps on Android are sandboxed.
I appreciate the info. For my own learning, could you provide a link to some context around the types of official binaries leveraged by microG? The only firm info I have on its behaviour is that it will pseudonymise as much user information as possible.
I’m familiar with sandboxed Google Play on GrapheneOS and have used it in the past.
Can you elaborate on being misled there?
As for Google devices - yes, there’s irony in the notion that the most de-googleable phones are theirs. They’re often sold at a loss around the holiday season, though.
I also use Calyx, but I’ll agree that Graphene is the technologically superior of the two. I’m more comfortable with the idea of using microG as opposed to sandboxed Google Play, but that’s not to slight the implementation in any way.
Good to know, though the same could be said for ROCm + HIP on AMD. It gets a bit weird, as you generally want that for OpenCL support too.
Best of luck with this, let us know how it goes
This may take time, but Intel have extremely deep pockets and understand the value of a presence in this market; I’m sure they can and will stick with it.
It’s kind of crazy to me how well it works! It’s hard for me to wrap my head around it sometimes.
My end goal is to eventually not need to use Windows at all, but I’m still very impressed with how this behaves.
Very welcome! Yes, exactly as you described. The nice thing is that you have greater control over Windows in this virtualized environment, particularly with regards to limiting device and network access.
I gather that display dummy plugs are pretty common in the Looking Glass community.
There’s no stupid questions here - there’s absolutely nothing intuitive about computer ecosystems 😅
Like AMD, they use a kernel module and their user space drivers are in Mesa. If anything, you may have a better OOTB experience with Intel graphics on distros that have more recent packages, like Fedora.
A third player is absolutely welcome in the game, but for now their share on Windows is still small.
The Arc Alchemist dGPU bringup has shown the world just how difficult graphics driver software is. They’ve made excellent progress lately in key areas (on both Windows and Linux), but there are still many odd gaps to fill.
Battlemage mobile looks pretty exciting, mind you.
KVM/QEMU via virt-manager. I would imagine that your use case would work if you pass the USB device or the entire USB host controller through to the VM, but I’m not sure. Please check the video linked in my other comment for more information on the single GPU setup.
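For reference, in libvirt a USB device passthrough is just a `<hostdev>` entry in the domain XML. A minimal sketch, with placeholder vendor/product IDs (substitute whatever `lsusb` reports for your device):

```xml
<!-- Add inside <devices> of the VM's XML (virsh edit <vm-name>).
     0x1234 / 0x5678 are placeholders for your device's IDs from lsusb. -->
<hostdev mode='subsystem' type='usb' managed='yes'>
  <source>
    <vendor id='0x1234'/>
    <product id='0x5678'/>
  </source>
</hostdev>
```

virt-manager can generate the same entry for you via “Add Hardware → USB Host Device”.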
Hey there, just using a single GPU in this system. If you have multiple adapters, you can try something like Looking Glass instead. In my case, I would need a single GPU that supports SR-IOV, which is typically relegated to data centre products (I believe someone actually managed this with an Intel iGPU + an experimental SR-IOV driver!).
I’m just passing my GPU through to a virtual machine; it takes precedence over the graphical session, leverages all connected displays and relevant peripherals, and gracefully resumes back into GDM / GNOME once the VM is powered off (you can do this conventionally from within W10).
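For anyone curious how that hand-off works: with single GPU passthrough it’s typically done via libvirt hook scripts that stop the display manager and detach / reattach the GPU around the VM’s lifecycle. A heavily simplified sketch (the VM name and PCI addresses are placeholders, and real setups usually also unload/reload the GPU kernel modules):

```shell
#!/bin/bash
# /etc/libvirt/hooks/qemu -- simplified sketch; "win10" and the PCI
# addresses below are placeholders for your own VM name and GPU.
VM="win10"
if [[ "$1" == "$VM" ]]; then
  case "$2" in
    prepare)  # VM is starting: free the GPU from the host
      systemctl stop gdm
      virsh nodedev-detach pci_0000_0a_00_0   # GPU (check lspci)
      virsh nodedev-detach pci_0000_0a_00_1   # GPU's HDMI audio function
      ;;
    release)  # VM has stopped: give the GPU back and restore the session
      virsh nodedev-reattach pci_0000_0a_00_1
      virsh nodedev-reattach pci_0000_0a_00_0
      systemctl start gdm
      ;;
  esac
fi
```

libvirt calls this hook with the guest name as `$1` and the operation (`prepare`, `release`, etc.) as `$2`.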
I mostly followed this video:
https://www.youtube.com/watch?v=eTWf5D092VY
The key thing for AMD graphics is to set ROMBAR = 0 in the virt config; this will allow you to actually get functioning display output once the VM is started up.
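In libvirt domain XML terms, that ROMBAR = 0 setting corresponds to a `<rom bar='off'/>` element on the GPU’s `<hostdev>` entry. A sketch, with a placeholder PCI address (use your card’s address from `lspci`):

```xml
<!-- GPU hostdev entry in the domain XML; 0a:00.0 is a placeholder. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
  </source>
  <rom bar='off'/>
</hostdev>
```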
As for your buying choices, consumer AMD GPUs have issues with GPU reset (unlike Intel or Nvidia). I think your experience with Nvidia graphics will be better than mine with AMD.
But yeah, since you have multiple graphics adapters at your disposal, it should be possible to get started with Looking Glass (a VM in a movable, resizable window that is fully hw-accelerated with shared memory). The Level1Techs forum for LG is very helpful, though I believe the creator of the video above also has a relevant guide for this.
I got VFIO/IOMMU + single GPU passthrough working on Fedora 40 with my RX 6800 XT into a Win10 VM.
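If anyone wants to try the same, the host-side prerequisite is enabling the IOMMU on the kernel command line. A sketch for an AMD platform on Fedora (assumptions: GRUB with default paths; Intel boards would use `intel_iommu=on` instead):

```
# /etc/default/grub -- append to the existing options:
GRUB_CMDLINE_LINUX="... amd_iommu=on iommu=pt"

# then regenerate the config and reboot:
#   sudo grub2-mkconfig -o /boot/grub2/grub.cfg
```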
More of a “see if I could” sort of thing; I don’t imagine I’ll actually need it much, but it may help if any of my friends are curious about switching over.
Oh I see, appreciate the background.
Yeah, it was very sad to see the Bryan situation unfold. I was also a fan of that series.
I gather they are or were associates / friends with Bryan Lunduke, who is an extremely controversial character in the Linux space. That might explain the “bit crazy” remark, but I really don’t know much about the nature of their relationship.
From the FAQ: