May 8 00:50:34.742988 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 8 00:50:34.743007 kernel: Linux version 5.15.180-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Wed May 7 23:24:31 -00 2025 May 8 00:50:34.743023 kernel: efi: EFI v2.70 by EDK II May 8 00:50:34.743028 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 8 00:50:34.743033 kernel: random: crng init done May 8 00:50:34.743039 kernel: ACPI: Early table checksum verification disabled May 8 00:50:34.743045 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 8 00:50:34.743052 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 8 00:50:34.743058 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:50:34.743063 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:50:34.743068 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:50:34.743074 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:50:34.743079 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:50:34.743084 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:50:34.743092 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:50:34.743098 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:50:34.743115 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 8 00:50:34.743121 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 8 00:50:34.743126 kernel: NUMA: Failed to initialise from firmware May 8 00:50:34.743132 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 8 00:50:34.743138 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff] May 8 00:50:34.743143 kernel: Zone ranges: May 8 00:50:34.743149 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 8 00:50:34.743157 kernel: DMA32 empty May 8 00:50:34.743162 kernel: Normal empty May 8 00:50:34.743168 kernel: Movable zone start for each node May 8 00:50:34.743173 kernel: Early memory node ranges May 8 00:50:34.743179 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 8 00:50:34.743184 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 8 00:50:34.743190 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 8 00:50:34.743195 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 8 00:50:34.743201 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 8 00:50:34.743206 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 8 00:50:34.743212 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 8 00:50:34.743218 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 8 00:50:34.743227 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 8 00:50:34.743233 kernel: psci: probing for conduit method from ACPI. May 8 00:50:34.743239 kernel: psci: PSCIv1.1 detected in firmware. 
May 8 00:50:34.743244 kernel: psci: Using standard PSCI v0.2 function IDs May 8 00:50:34.743250 kernel: psci: Trusted OS migration not required May 8 00:50:34.743258 kernel: psci: SMC Calling Convention v1.1 May 8 00:50:34.743264 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 8 00:50:34.743271 kernel: ACPI: SRAT not present May 8 00:50:34.743277 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 8 00:50:34.743283 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 8 00:50:34.743290 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 8 00:50:34.743295 kernel: Detected PIPT I-cache on CPU0 May 8 00:50:34.743301 kernel: CPU features: detected: GIC system register CPU interface May 8 00:50:34.743307 kernel: CPU features: detected: Hardware dirty bit management May 8 00:50:34.743313 kernel: CPU features: detected: Spectre-v4 May 8 00:50:34.743319 kernel: CPU features: detected: Spectre-BHB May 8 00:50:34.743326 kernel: CPU features: kernel page table isolation forced ON by KASLR May 8 00:50:34.743332 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 8 00:50:34.743338 kernel: CPU features: detected: ARM erratum 1418040 May 8 00:50:34.743344 kernel: CPU features: detected: SSBS not fully self-synchronizing May 8 00:50:34.743350 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 8 00:50:34.743356 kernel: Policy zone: DMA May 8 00:50:34.743363 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3816e7a7ab4f80032c381006006d7d5ba477c6a86a1527e782723d869b29d497 May 8 00:50:34.743369 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 8 00:50:34.743375 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 8 00:50:34.743381 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 8 00:50:34.743387 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 8 00:50:34.743395 kernel: Memory: 2457408K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114880K reserved, 0K cma-reserved) May 8 00:50:34.743401 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 8 00:50:34.743407 kernel: trace event string verifier disabled May 8 00:50:34.743413 kernel: rcu: Preemptible hierarchical RCU implementation. May 8 00:50:34.743419 kernel: rcu: RCU event tracing is enabled. May 8 00:50:34.743425 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 8 00:50:34.743431 kernel: Trampoline variant of Tasks RCU enabled. May 8 00:50:34.743437 kernel: Tracing variant of Tasks RCU enabled. May 8 00:50:34.743444 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 8 00:50:34.743450 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 8 00:50:34.743456 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 8 00:50:34.743463 kernel: GICv3: 256 SPIs implemented May 8 00:50:34.743469 kernel: GICv3: 0 Extended SPIs implemented May 8 00:50:34.743475 kernel: GICv3: Distributor has no Range Selector support May 8 00:50:34.743481 kernel: Root IRQ handler: gic_handle_irq May 8 00:50:34.743487 kernel: GICv3: 16 PPIs implemented May 8 00:50:34.743492 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 8 00:50:34.743498 kernel: ACPI: SRAT not present May 8 00:50:34.743504 kernel: ITS [mem 0x08080000-0x0809ffff] May 8 00:50:34.743510 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 8 00:50:34.743516 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 8 00:50:34.743522 kernel: GICv3: using LPI property table @0x00000000400d0000 May 8 00:50:34.743528 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 8 00:50:34.743535 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:50:34.743541 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 8 00:50:34.743548 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 8 00:50:34.743554 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 8 00:50:34.743560 kernel: arm-pv: using stolen time PV May 8 00:50:34.743566 kernel: Console: colour dummy device 80x25 May 8 00:50:34.743572 kernel: ACPI: Core revision 20210730 May 8 00:50:34.743578 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 8 00:50:34.743585 kernel: pid_max: default: 32768 minimum: 301 May 8 00:50:34.743590 kernel: LSM: Security Framework initializing May 8 00:50:34.743597 kernel: SELinux: Initializing. May 8 00:50:34.743604 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:50:34.743610 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 8 00:50:34.743616 kernel: rcu: Hierarchical SRCU implementation. May 8 00:50:34.743622 kernel: Platform MSI: ITS@0x8080000 domain created May 8 00:50:34.743628 kernel: PCI/MSI: ITS@0x8080000 domain created May 8 00:50:34.743634 kernel: Remapping and enabling EFI services. May 8 00:50:34.743640 kernel: smp: Bringing up secondary CPUs ... 
May 8 00:50:34.743646 kernel: Detected PIPT I-cache on CPU1 May 8 00:50:34.743654 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 8 00:50:34.743660 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 8 00:50:34.743666 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:50:34.743672 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 8 00:50:34.743679 kernel: Detected PIPT I-cache on CPU2 May 8 00:50:34.743685 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 8 00:50:34.743691 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 8 00:50:34.743697 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:50:34.743703 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 8 00:50:34.743709 kernel: Detected PIPT I-cache on CPU3 May 8 00:50:34.743717 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 8 00:50:34.743723 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 8 00:50:34.743729 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 8 00:50:34.743735 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 8 00:50:34.743745 kernel: smp: Brought up 1 node, 4 CPUs May 8 00:50:34.743752 kernel: SMP: Total of 4 processors activated. May 8 00:50:34.743759 kernel: CPU features: detected: 32-bit EL0 Support May 8 00:50:34.743765 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 8 00:50:34.743772 kernel: CPU features: detected: Common not Private translations May 8 00:50:34.743778 kernel: CPU features: detected: CRC32 instructions May 8 00:50:34.743785 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 8 00:50:34.743791 kernel: CPU features: detected: LSE atomic instructions May 8 00:50:34.743799 kernel: CPU features: detected: Privileged Access Never May 8 00:50:34.743805 kernel: CPU features: detected: RAS Extension Support May 8 00:50:34.743812 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 8 00:50:34.743818 kernel: CPU: All CPU(s) started at EL1 May 8 00:50:34.743825 kernel: alternatives: patching kernel code May 8 00:50:34.743832 kernel: devtmpfs: initialized May 8 00:50:34.743839 kernel: KASLR enabled May 8 00:50:34.743846 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 8 00:50:34.743852 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 8 00:50:34.743859 kernel: pinctrl core: initialized pinctrl subsystem May 8 00:50:34.743865 kernel: SMBIOS 3.0.0 present. 
May 8 00:50:34.743872 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 8 00:50:34.743878 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 8 00:50:34.743885 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 8 00:50:34.743892 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 8 00:50:34.743899 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 8 00:50:34.743906 kernel: audit: initializing netlink subsys (disabled) May 8 00:50:34.743912 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 May 8 00:50:34.743919 kernel: thermal_sys: Registered thermal governor 'step_wise' May 8 00:50:34.743925 kernel: cpuidle: using governor menu May 8 00:50:34.743932 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 8 00:50:34.743938 kernel: ASID allocator initialised with 32768 entries May 8 00:50:34.743945 kernel: ACPI: bus type PCI registered May 8 00:50:34.743953 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 8 00:50:34.743959 kernel: Serial: AMBA PL011 UART driver May 8 00:50:34.743966 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 8 00:50:34.743972 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 8 00:50:34.743979 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 8 00:50:34.743985 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 8 00:50:34.743992 kernel: cryptd: max_cpu_qlen set to 1000 May 8 00:50:34.743999 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 8 00:50:34.744005 kernel: ACPI: Added _OSI(Module Device) May 8 00:50:34.744017 kernel: ACPI: Added _OSI(Processor Device) May 8 00:50:34.744024 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 8 00:50:34.744031 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 8 00:50:34.744037 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 8 00:50:34.744044 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 8 00:50:34.744050 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 8 00:50:34.744057 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 8 00:50:34.744063 kernel: ACPI: Interpreter enabled May 8 00:50:34.744070 kernel: ACPI: Using GIC for interrupt routing May 8 00:50:34.744078 kernel: ACPI: MCFG table detected, 1 entries May 8 00:50:34.744085 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 8 00:50:34.744092 kernel: printk: console [ttyAMA0] enabled May 8 00:50:34.744099 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 8 00:50:34.744236 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 8 00:50:34.744301 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 8 00:50:34.744360 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 8 00:50:34.744420 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 8 00:50:34.744477 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 8 00:50:34.744486 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 8 00:50:34.744493 kernel: PCI host bridge to bus 0000:00 May 8 00:50:34.744558 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 8 00:50:34.744612 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0xffff window] May 8 00:50:34.744665 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 8 00:50:34.744718 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 8 00:50:34.744791 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 8 00:50:34.744860 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 8 00:50:34.744922 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 8 00:50:34.744983 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 8 00:50:34.745057 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 8 00:50:34.745135 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 8 00:50:34.745200 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 8 00:50:34.745260 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 8 00:50:34.745315 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 8 00:50:34.745367 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 8 00:50:34.745420 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 8 00:50:34.745429 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 8 00:50:34.745436 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 8 00:50:34.745443 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 8 00:50:34.745451 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 8 00:50:34.745458 kernel: iommu: Default domain type: Translated May 8 00:50:34.745465 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 8 00:50:34.745471 kernel: vgaarb: loaded May 8 00:50:34.745478 kernel: pps_core: LinuxPPS API ver. 1 registered May 8 00:50:34.745485 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 8 00:50:34.745492 kernel: PTP clock support registered May 8 00:50:34.745498 kernel: Registered efivars operations May 8 00:50:34.745505 kernel: clocksource: Switched to clocksource arch_sys_counter May 8 00:50:34.745512 kernel: VFS: Disk quotas dquot_6.6.0 May 8 00:50:34.745519 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 8 00:50:34.745526 kernel: pnp: PnP ACPI init May 8 00:50:34.745590 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 8 00:50:34.745600 kernel: pnp: PnP ACPI: found 1 devices May 8 00:50:34.745607 kernel: NET: Registered PF_INET protocol family May 8 00:50:34.745614 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 8 00:50:34.745621 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 8 00:50:34.745629 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 8 00:50:34.745636 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 8 00:50:34.745643 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 8 00:50:34.745649 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 8 00:50:34.745656 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:50:34.745663 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 8 00:50:34.745669 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 8 00:50:34.745676 kernel: PCI: CLS 0 bytes, default 64 May 8 00:50:34.745683 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 8 00:50:34.745690 kernel: kvm [1]: HYP mode not available May 8 00:50:34.745697 kernel: Initialise system trusted keyrings May 8 00:50:34.745704 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 8 00:50:34.745710 kernel: Key type asymmetric registered May 8 00:50:34.745717 kernel: Asymmetric key parser 'x509' registered May 8 00:50:34.745724 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 8 00:50:34.745730 kernel: io scheduler mq-deadline registered May 8 00:50:34.745737 kernel: io scheduler kyber registered May 8 00:50:34.745743 kernel: io scheduler bfq registered May 8 00:50:34.745751 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 8 00:50:34.745758 kernel: ACPI: button: Power Button [PWRB] May 8 00:50:34.745765 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 8 00:50:34.745825 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 8 00:50:34.745834 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 8 00:50:34.745841 kernel: thunder_xcv, ver 1.0 May 8 00:50:34.745848 kernel: thunder_bgx, ver 1.0 May 8 00:50:34.745854 kernel: nicpf, ver 1.0 May 8 00:50:34.745861 kernel: nicvf, ver 1.0 May 8 00:50:34.745933 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 8 00:50:34.745990 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:50:34 UTC (1746665434) May 8 00:50:34.745999 kernel: hid: raw HID events driver (C) Jiri Kosina May 8 00:50:34.746006 kernel: NET: Registered PF_INET6 protocol family May 8 00:50:34.746019 kernel: Segment Routing with IPv6 May 8 00:50:34.746026 kernel: In-situ OAM (IOAM) with IPv6 May 8 00:50:34.746033 kernel: NET: Registered PF_PACKET protocol family May 8 00:50:34.746039 kernel: Key type dns_resolver registered May 8 00:50:34.746047 
kernel: registered taskstats version 1 May 8 00:50:34.746054 kernel: Loading compiled-in X.509 certificates May 8 00:50:34.746061 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.180-flatcar: 47302b466ab2df930dd804d2ee9c8ab44de4e2dc' May 8 00:50:34.746067 kernel: Key type .fscrypt registered May 8 00:50:34.746074 kernel: Key type fscrypt-provisioning registered May 8 00:50:34.746081 kernel: ima: No TPM chip found, activating TPM-bypass! May 8 00:50:34.746087 kernel: ima: Allocated hash algorithm: sha1 May 8 00:50:34.746094 kernel: ima: No architecture policies found May 8 00:50:34.746108 kernel: clk: Disabling unused clocks May 8 00:50:34.746116 kernel: Freeing unused kernel memory: 36416K May 8 00:50:34.746123 kernel: Run /init as init process May 8 00:50:34.746129 kernel: with arguments: May 8 00:50:34.746136 kernel: /init May 8 00:50:34.746142 kernel: with environment: May 8 00:50:34.746149 kernel: HOME=/ May 8 00:50:34.746155 kernel: TERM=linux May 8 00:50:34.746162 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 8 00:50:34.746171 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 00:50:34.746181 systemd[1]: Detected virtualization kvm. May 8 00:50:34.746188 systemd[1]: Detected architecture arm64. May 8 00:50:34.746195 systemd[1]: Running in initrd. May 8 00:50:34.746202 systemd[1]: No hostname configured, using default hostname. May 8 00:50:34.746209 systemd[1]: Hostname set to . May 8 00:50:34.746217 systemd[1]: Initializing machine ID from VM UUID. May 8 00:50:34.746224 systemd[1]: Queued start job for default target initrd.target. May 8 00:50:34.746232 systemd[1]: Started systemd-ask-password-console.path. May 8 00:50:34.746239 systemd[1]: Reached target cryptsetup.target. May 8 00:50:34.746246 systemd[1]: Reached target paths.target. May 8 00:50:34.746252 systemd[1]: Reached target slices.target. May 8 00:50:34.746259 systemd[1]: Reached target swap.target. May 8 00:50:34.746266 systemd[1]: Reached target timers.target. May 8 00:50:34.746273 systemd[1]: Listening on iscsid.socket. May 8 00:50:34.746282 systemd[1]: Listening on iscsiuio.socket. May 8 00:50:34.746289 systemd[1]: Listening on systemd-journald-audit.socket. May 8 00:50:34.746296 systemd[1]: Listening on systemd-journald-dev-log.socket. May 8 00:50:34.746303 systemd[1]: Listening on systemd-journald.socket. May 8 00:50:34.746310 systemd[1]: Listening on systemd-networkd.socket. May 8 00:50:34.746317 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:50:34.746324 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:50:34.746331 systemd[1]: Reached target sockets.target. May 8 00:50:34.746338 systemd[1]: Starting kmod-static-nodes.service... May 8 00:50:34.746346 systemd[1]: Finished network-cleanup.service. May 8 00:50:34.746353 systemd[1]: Starting systemd-fsck-usr.service... May 8 00:50:34.746360 systemd[1]: Starting systemd-journald.service... May 8 00:50:34.746367 systemd[1]: Starting systemd-modules-load.service... May 8 00:50:34.746374 systemd[1]: Starting systemd-resolved.service... May 8 00:50:34.746381 systemd[1]: Starting systemd-vconsole-setup.service... May 8 00:50:34.746388 systemd[1]: Finished kmod-static-nodes.service. 
May 8 00:50:34.746396 systemd[1]: Finished systemd-fsck-usr.service. May 8 00:50:34.746403 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 00:50:34.746411 systemd[1]: Finished systemd-vconsole-setup.service. May 8 00:50:34.746418 systemd[1]: Starting dracut-cmdline-ask.service... May 8 00:50:34.746425 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 8 00:50:34.746435 systemd-journald[290]: Journal started May 8 00:50:34.746474 systemd-journald[290]: Runtime Journal (/run/log/journal/bf7bfda477694e0c89771bc9e392b4c2) is 6.0M, max 48.7M, 42.6M free. May 8 00:50:34.730495 systemd-modules-load[291]: Inserted module 'overlay' May 8 00:50:34.750280 systemd[1]: Started systemd-journald.service. May 8 00:50:34.750298 kernel: audit: type=1130 audit(1746665434.747:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.747000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.748054 systemd[1]: Finished dracut-cmdline-ask.service. May 8 00:50:34.755223 kernel: audit: type=1130 audit(1746665434.750:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.750827 systemd-resolved[292]: Positive Trust Anchors: May 8 00:50:34.750834 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:50:34.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.750862 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:50:34.766713 kernel: audit: type=1130 audit(1746665434.756:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.766730 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 8 00:50:34.751514 systemd[1]: Starting dracut-cmdline.service... May 8 00:50:34.754925 systemd-resolved[292]: Defaulting to hostname 'linux'. May 8 00:50:34.755753 systemd[1]: Started systemd-resolved.service. May 8 00:50:34.769343 kernel: Bridge firewalling registered May 8 00:50:34.769435 dracut-cmdline[309]: dracut-dracut-053 May 8 00:50:34.756790 systemd[1]: Reached target nss-lookup.target. 
May 8 00:50:34.769155 systemd-modules-load[291]: Inserted module 'br_netfilter' May 8 00:50:34.772181 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3816e7a7ab4f80032c381006006d7d5ba477c6a86a1527e782723d869b29d497 May 8 00:50:34.780541 kernel: SCSI subsystem initialized May 8 00:50:34.788325 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 8 00:50:34.788355 kernel: device-mapper: uevent: version 1.0.3 May 8 00:50:34.789673 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 8 00:50:34.791724 systemd-modules-load[291]: Inserted module 'dm_multipath' May 8 00:50:34.792484 systemd[1]: Finished systemd-modules-load.service. May 8 00:50:34.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.793929 systemd[1]: Starting systemd-sysctl.service... May 8 00:50:34.796575 kernel: audit: type=1130 audit(1746665434.793:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.801817 systemd[1]: Finished systemd-sysctl.service. May 8 00:50:34.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.805127 kernel: audit: type=1130 audit(1746665434.802:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.835128 kernel: Loading iSCSI transport class v2.0-870. May 8 00:50:34.849127 kernel: iscsi: registered transport (tcp) May 8 00:50:34.863114 kernel: iscsi: registered transport (qla4xxx) May 8 00:50:34.863139 kernel: QLogic iSCSI HBA Driver May 8 00:50:34.896161 systemd[1]: Finished dracut-cmdline.service. May 8 00:50:34.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:34.897593 systemd[1]: Starting dracut-pre-udev.service... May 8 00:50:34.899876 kernel: audit: type=1130 audit(1746665434.896:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:50:34.941125 kernel: raid6: neonx8 gen() 13819 MB/s May 8 00:50:34.958112 kernel: raid6: neonx8 xor() 10839 MB/s May 8 00:50:34.975112 kernel: raid6: neonx4 gen() 13574 MB/s May 8 00:50:34.992116 kernel: raid6: neonx4 xor() 11263 MB/s May 8 00:50:35.009113 kernel: raid6: neonx2 gen() 12963 MB/s May 8 00:50:35.026121 kernel: raid6: neonx2 xor() 10378 MB/s May 8 00:50:35.043120 kernel: raid6: neonx1 gen() 10555 MB/s May 8 00:50:35.060113 kernel: raid6: neonx1 xor() 8789 MB/s May 8 00:50:35.077116 kernel: raid6: int64x8 gen() 6269 MB/s May 8 00:50:35.094121 kernel: raid6: int64x8 xor() 3544 MB/s May 8 00:50:35.111113 kernel: raid6: int64x4 gen() 7212 MB/s May 8 00:50:35.128117 kernel: raid6: int64x4 xor() 3859 MB/s May 8 00:50:35.145112 kernel: raid6: int64x2 gen() 6155 MB/s May 8 00:50:35.162112 kernel: raid6: int64x2 xor() 3323 MB/s May 8 00:50:35.179122 kernel: raid6: int64x1 gen() 5044 MB/s May 8 00:50:35.196304 kernel: raid6: int64x1 xor() 2646 MB/s May 8 00:50:35.196319 kernel: raid6: using algorithm neonx8 gen() 13819 MB/s May 8 00:50:35.196329 kernel: raid6: .... xor() 10839 MB/s, rmw enabled May 8 00:50:35.196337 kernel: raid6: using neon recovery algorithm May 8 00:50:35.207129 kernel: xor: measuring software checksum speed May 8 00:50:35.207156 kernel: 8regs : 17213 MB/sec May 8 00:50:35.207172 kernel: 32regs : 17734 MB/sec May 8 00:50:35.208434 kernel: arm64_neon : 27589 MB/sec May 8 00:50:35.208451 kernel: xor: using function: arm64_neon (27589 MB/sec) May 8 00:50:35.263126 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 8 00:50:35.272625 systemd[1]: Finished dracut-pre-udev.service. May 8 00:50:35.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:35.274312 systemd[1]: Starting systemd-udevd.service... May 8 00:50:35.277740 kernel: audit: type=1130 audit(1746665435.273:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:35.277762 kernel: audit: type=1334 audit(1746665435.273:9): prog-id=7 op=LOAD May 8 00:50:35.277772 kernel: audit: type=1334 audit(1746665435.273:10): prog-id=8 op=LOAD May 8 00:50:35.273000 audit: BPF prog-id=7 op=LOAD May 8 00:50:35.273000 audit: BPF prog-id=8 op=LOAD May 8 00:50:35.289174 systemd-udevd[492]: Using default interface naming scheme 'v252'. May 8 00:50:35.292566 systemd[1]: Started systemd-udevd.service. May 8 00:50:35.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:35.294302 systemd[1]: Starting dracut-pre-trigger.service... May 8 00:50:35.304596 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation May 8 00:50:35.330655 systemd[1]: Finished dracut-pre-trigger.service. May 8 00:50:35.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:35.331977 systemd[1]: Starting systemd-udev-trigger.service... May 8 00:50:35.367359 systemd[1]: Finished systemd-udev-trigger.service. 
May 8 00:50:35.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:35.395089 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 8 00:50:35.402676 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 8 00:50:35.402692 kernel: GPT:9289727 != 19775487 May 8 00:50:35.402700 kernel: GPT:Alternate GPT header not at the end of the disk. May 8 00:50:35.402709 kernel: GPT:9289727 != 19775487 May 8 00:50:35.402717 kernel: GPT: Use GNU Parted to correct GPT errors. May 8 00:50:35.402725 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:50:35.416839 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 8 00:50:35.417661 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 8 00:50:35.423128 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (551) May 8 00:50:35.425417 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 8 00:50:35.428564 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 8 00:50:35.431587 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:50:35.433062 systemd[1]: Starting disk-uuid.service... May 8 00:50:35.470422 disk-uuid[565]: Primary Header is updated. May 8 00:50:35.470422 disk-uuid[565]: Secondary Entries is updated. May 8 00:50:35.470422 disk-uuid[565]: Secondary Header is updated. May 8 00:50:35.474115 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:50:36.486083 disk-uuid[566]: The operation has completed successfully. May 8 00:50:36.488282 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 8 00:50:36.510745 systemd[1]: disk-uuid.service: Deactivated successfully. May 8 00:50:36.510848 systemd[1]: Finished disk-uuid.service. May 8 00:50:36.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.512218 systemd[1]: Starting verity-setup.service... May 8 00:50:36.529125 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 8 00:50:36.550416 systemd[1]: Found device dev-mapper-usr.device. May 8 00:50:36.552417 systemd[1]: Mounting sysusr-usr.mount... May 8 00:50:36.554364 systemd[1]: Finished verity-setup.service. May 8 00:50:36.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.605983 systemd[1]: Mounted sysusr-usr.mount. May 8 00:50:36.607138 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 8 00:50:36.606728 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 8 00:50:36.607522 systemd[1]: Starting ignition-setup.service... May 8 00:50:36.609587 systemd[1]: Starting parse-ip-for-networkd.service... 
May 8 00:50:36.618210 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 00:50:36.618247 kernel: BTRFS info (device vda6): using free space tree May 8 00:50:36.618257 kernel: BTRFS info (device vda6): has skinny extents May 8 00:50:36.628150 systemd[1]: mnt-oem.mount: Deactivated successfully. May 8 00:50:36.634405 systemd[1]: Finished ignition-setup.service. May 8 00:50:36.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.635947 systemd[1]: Starting ignition-fetch-offline.service... May 8 00:50:36.697701 systemd[1]: Finished parse-ip-for-networkd.service. May 8 00:50:36.698000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.700000 audit: BPF prog-id=9 op=LOAD May 8 00:50:36.701161 systemd[1]: Starting systemd-networkd.service... May 8 00:50:36.725174 systemd-networkd[740]: lo: Link UP May 8 00:50:36.725951 systemd-networkd[740]: lo: Gained carrier May 8 00:50:36.726982 systemd-networkd[740]: Enumeration completed May 8 00:50:36.727725 systemd[1]: Started systemd-networkd.service. May 8 00:50:36.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.728717 systemd[1]: Reached target network.target. May 8 00:50:36.730605 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:50:36.730682 systemd[1]: Starting iscsiuio.service... May 8 00:50:36.736307 systemd-networkd[740]: eth0: Link UP May 8 00:50:36.736967 systemd-networkd[740]: eth0: Gained carrier May 8 00:50:36.744384 systemd[1]: Started iscsiuio.service. May 8 00:50:36.744000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.745762 systemd[1]: Starting iscsid.service... May 8 00:50:36.749469 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 8 00:50:36.749469 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 8 00:50:36.749469 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 8 00:50:36.749469 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. May 8 00:50:36.749469 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 8 00:50:36.749469 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 8 00:50:36.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:50:36.750641 ignition[651]: Ignition 2.14.0 May 8 00:50:36.754225 systemd[1]: Started iscsid.service. May 8 00:50:36.750648 ignition[651]: Stage: fetch-offline May 8 00:50:36.756086 systemd[1]: Starting dracut-initqueue.service... May 8 00:50:36.750689 ignition[651]: no configs at "/usr/lib/ignition/base.d" May 8 00:50:36.760573 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:50:36.750698 ignition[651]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:50:36.750833 ignition[651]: parsed url from cmdline: "" May 8 00:50:36.750837 ignition[651]: no config URL provided May 8 00:50:36.750842 ignition[651]: reading system config file "/usr/lib/ignition/user.ign" May 8 00:50:36.750848 ignition[651]: no config at "/usr/lib/ignition/user.ign" May 8 00:50:36.750866 ignition[651]: op(1): [started] loading QEMU firmware config module May 8 00:50:36.750871 ignition[651]: op(1): executing: "modprobe" "qemu_fw_cfg" May 8 00:50:36.767195 systemd[1]: Finished dracut-initqueue.service. May 8 00:50:36.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.768547 systemd[1]: Reached target remote-fs-pre.target. May 8 00:50:36.769741 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:50:36.771133 systemd[1]: Reached target remote-fs.target. May 8 00:50:36.772997 systemd[1]: Starting dracut-pre-mount.service... May 8 00:50:36.773700 ignition[651]: op(1): [finished] loading QEMU firmware config module May 8 00:50:36.780153 systemd[1]: Finished dracut-pre-mount.service. May 8 00:50:36.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.781554 ignition[651]: parsing config with SHA512: 948258bffdfd1590e10c11df26ad29cf717d156ebefd327179c685275ccbf80d2f891d8406949b0501bdd9c0a4ea709418f13fb374722572e2b780e12761105e May 8 00:50:36.787024 unknown[651]: fetched base config from "system" May 8 00:50:36.787039 unknown[651]: fetched user config from "qemu" May 8 00:50:36.787328 ignition[651]: fetch-offline: fetch-offline passed May 8 00:50:36.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.788531 systemd[1]: Finished ignition-fetch-offline.service. May 8 00:50:36.787387 ignition[651]: Ignition finished successfully May 8 00:50:36.789775 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 8 00:50:36.790433 systemd[1]: Starting ignition-kargs.service... May 8 00:50:36.798579 ignition[761]: Ignition 2.14.0 May 8 00:50:36.798595 ignition[761]: Stage: kargs May 8 00:50:36.798684 ignition[761]: no configs at "/usr/lib/ignition/base.d" May 8 00:50:36.800782 systemd[1]: Finished ignition-kargs.service. May 8 00:50:36.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:50:36.798693 ignition[761]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:50:36.799339 ignition[761]: kargs: kargs passed May 8 00:50:36.802720 systemd[1]: Starting ignition-disks.service... May 8 00:50:36.799380 ignition[761]: Ignition finished successfully May 8 00:50:36.808978 ignition[767]: Ignition 2.14.0 May 8 00:50:36.808988 ignition[767]: Stage: disks May 8 00:50:36.809085 ignition[767]: no configs at "/usr/lib/ignition/base.d" May 8 00:50:36.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.810876 systemd[1]: Finished ignition-disks.service. May 8 00:50:36.809094 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:50:36.811692 systemd[1]: Reached target initrd-root-device.target. May 8 00:50:36.809901 ignition[767]: disks: disks passed May 8 00:50:36.812405 systemd[1]: Reached target local-fs-pre.target. May 8 00:50:36.809942 ignition[767]: Ignition finished successfully May 8 00:50:36.813023 systemd[1]: Reached target local-fs.target. May 8 00:50:36.813641 systemd[1]: Reached target sysinit.target. May 8 00:50:36.814698 systemd[1]: Reached target basic.target. May 8 00:50:36.816736 systemd[1]: Starting systemd-fsck-root.service... May 8 00:50:36.827794 systemd-fsck[775]: ROOT: clean, 623/553520 files, 56022/553472 blocks May 8 00:50:36.832011 systemd[1]: Finished systemd-fsck-root.service. May 8 00:50:36.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.833866 systemd[1]: Mounting sysroot.mount... May 8 00:50:36.842114 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 8 00:50:36.842733 systemd[1]: Mounted sysroot.mount. May 8 00:50:36.843552 systemd[1]: Reached target initrd-root-fs.target. May 8 00:50:36.845569 systemd[1]: Mounting sysroot-usr.mount... May 8 00:50:36.846306 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 8 00:50:36.846346 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 8 00:50:36.846368 systemd[1]: Reached target ignition-diskful.target. May 8 00:50:36.848155 systemd[1]: Mounted sysroot-usr.mount. May 8 00:50:36.849498 systemd[1]: Starting initrd-setup-root.service... May 8 00:50:36.853533 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory May 8 00:50:36.858044 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory May 8 00:50:36.862002 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory May 8 00:50:36.865681 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory May 8 00:50:36.890732 systemd[1]: Finished initrd-setup-root.service. May 8 00:50:36.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.892345 systemd[1]: Starting ignition-mount.service... May 8 00:50:36.893459 systemd[1]: Starting sysroot-boot.service... May 8 00:50:36.897258 bash[826]: umount: /sysroot/usr/share/oem: not mounted. 
May 8 00:50:36.905596 ignition[828]: INFO : Ignition 2.14.0 May 8 00:50:36.905596 ignition[828]: INFO : Stage: mount May 8 00:50:36.906761 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:50:36.906761 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:50:36.906761 ignition[828]: INFO : mount: mount passed May 8 00:50:36.906761 ignition[828]: INFO : Ignition finished successfully May 8 00:50:36.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:36.907176 systemd[1]: Finished ignition-mount.service. May 8 00:50:36.917166 systemd[1]: Finished sysroot-boot.service. May 8 00:50:36.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:37.560820 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 8 00:50:37.567126 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) May 8 00:50:37.568383 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 8 00:50:37.568398 kernel: BTRFS info (device vda6): using free space tree May 8 00:50:37.568407 kernel: BTRFS info (device vda6): has skinny extents May 8 00:50:37.571577 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 8 00:50:37.573152 systemd[1]: Starting ignition-files.service... May 8 00:50:37.586526 ignition[856]: INFO : Ignition 2.14.0 May 8 00:50:37.586526 ignition[856]: INFO : Stage: files May 8 00:50:37.587788 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:50:37.587788 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:50:37.587788 ignition[856]: DEBUG : files: compiled without relabeling support, skipping May 8 00:50:37.590664 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 8 00:50:37.590664 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 8 00:50:37.590664 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 8 00:50:37.590664 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 8 00:50:37.590664 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 8 00:50:37.590599 unknown[856]: wrote ssh authorized keys file for user: core May 8 00:50:37.596467 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 8 00:50:37.596467 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 8 00:50:37.596467 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:50:37.596467 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 8 00:50:37.596467 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 8 00:50:37.596467 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: 
op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 8 00:50:37.596467 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 8 00:50:37.596467 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1 May 8 00:50:37.921611 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 8 00:50:38.264752 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw" May 8 00:50:38.264752 ignition[856]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 8 00:50:38.267951 ignition[856]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:50:38.267951 ignition[856]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 8 00:50:38.267951 ignition[856]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 8 00:50:38.267951 ignition[856]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 8 00:50:38.267951 ignition[856]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:50:38.295281 ignition[856]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 8 00:50:38.296472 ignition[856]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 8 00:50:38.296472 ignition[856]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 8 00:50:38.296472 ignition[856]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 8 00:50:38.296472 ignition[856]: INFO : files: files passed May 8 00:50:38.296472 ignition[856]: INFO : Ignition finished successfully May 8 00:50:38.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.297404 systemd[1]: Finished ignition-files.service. May 8 00:50:38.299122 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 8 00:50:38.299938 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 8 00:50:38.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.306213 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 8 00:50:38.300591 systemd[1]: Starting ignition-quench.service... 
May 8 00:50:38.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.308579 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 8 00:50:38.303504 systemd[1]: ignition-quench.service: Deactivated successfully. May 8 00:50:38.303589 systemd[1]: Finished ignition-quench.service. May 8 00:50:38.306819 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 8 00:50:38.307947 systemd[1]: Reached target ignition-complete.target. May 8 00:50:38.309699 systemd[1]: Starting initrd-parse-etc.service... May 8 00:50:38.321564 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 8 00:50:38.321650 systemd[1]: Finished initrd-parse-etc.service. May 8 00:50:38.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.322892 systemd[1]: Reached target initrd-fs.target. May 8 00:50:38.323769 systemd[1]: Reached target initrd.target. May 8 00:50:38.324742 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 8 00:50:38.325428 systemd[1]: Starting dracut-pre-pivot.service... May 8 00:50:38.335545 systemd[1]: Finished dracut-pre-pivot.service. May 8 00:50:38.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.336804 systemd[1]: Starting initrd-cleanup.service... May 8 00:50:38.344211 systemd[1]: Stopped target nss-lookup.target. May 8 00:50:38.344854 systemd[1]: Stopped target remote-cryptsetup.target. May 8 00:50:38.345948 systemd[1]: Stopped target timers.target. May 8 00:50:38.346929 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 8 00:50:38.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.347028 systemd[1]: Stopped dracut-pre-pivot.service. May 8 00:50:38.347969 systemd[1]: Stopped target initrd.target. May 8 00:50:38.348951 systemd[1]: Stopped target basic.target. May 8 00:50:38.349948 systemd[1]: Stopped target ignition-complete.target. May 8 00:50:38.350941 systemd[1]: Stopped target ignition-diskful.target. May 8 00:50:38.351916 systemd[1]: Stopped target initrd-root-device.target. May 8 00:50:38.353174 systemd[1]: Stopped target remote-fs.target. May 8 00:50:38.354210 systemd[1]: Stopped target remote-fs-pre.target. May 8 00:50:38.355365 systemd[1]: Stopped target sysinit.target. May 8 00:50:38.356413 systemd[1]: Stopped target local-fs.target. May 8 00:50:38.357428 systemd[1]: Stopped target local-fs-pre.target. May 8 00:50:38.358401 systemd[1]: Stopped target swap.target. May 8 00:50:38.360000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:50:38.359448 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 8 00:50:38.359550 systemd[1]: Stopped dracut-pre-mount.service. May 8 00:50:38.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.360558 systemd[1]: Stopped target cryptsetup.target. May 8 00:50:38.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.361406 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:50:38.361499 systemd[1]: Stopped dracut-initqueue.service. May 8 00:50:38.362771 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:50:38.362862 systemd[1]: Stopped ignition-fetch-offline.service. May 8 00:50:38.363803 systemd[1]: Stopped target paths.target. May 8 00:50:38.364654 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:50:38.368134 systemd[1]: Stopped systemd-ask-password-console.path. May 8 00:50:38.369323 systemd[1]: Stopped target slices.target. May 8 00:50:38.370308 systemd[1]: Stopped target sockets.target. May 8 00:50:38.371241 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:50:38.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.371343 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 8 00:50:38.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.372386 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:50:38.372472 systemd[1]: Stopped ignition-files.service. May 8 00:50:38.376664 iscsid[746]: iscsid shutting down. May 8 00:50:38.374473 systemd[1]: Stopping ignition-mount.service... May 8 00:50:38.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.375169 systemd[1]: Stopping iscsid.service... May 8 00:50:38.376650 systemd[1]: Stopping sysroot-boot.service... May 8 00:50:38.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.377190 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:50:38.377332 systemd[1]: Stopped systemd-udev-trigger.service. May 8 00:50:38.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.378364 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:50:38.378451 systemd[1]: Stopped dracut-pre-trigger.service. 
May 8 00:50:38.385883 ignition[896]: INFO : Ignition 2.14.0 May 8 00:50:38.385883 ignition[896]: INFO : Stage: umount May 8 00:50:38.385883 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:50:38.385883 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:50:38.385883 ignition[896]: INFO : umount: umount passed May 8 00:50:38.385883 ignition[896]: INFO : Ignition finished successfully May 8 00:50:38.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.380993 systemd[1]: iscsid.service: Deactivated successfully. May 8 00:50:38.381097 systemd[1]: Stopped iscsid.service. May 8 00:50:38.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.382615 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:50:38.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.382677 systemd[1]: Closed iscsid.socket. May 8 00:50:38.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.383439 systemd[1]: Stopping iscsiuio.service... May 8 00:50:38.386472 systemd[1]: iscsiuio.service: Deactivated successfully. May 8 00:50:38.386558 systemd[1]: Stopped iscsiuio.service. May 8 00:50:38.388991 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:50:38.389393 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:50:38.389529 systemd[1]: Finished initrd-cleanup.service. May 8 00:50:38.390319 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:50:38.390388 systemd[1]: Stopped ignition-mount.service. May 8 00:50:38.392709 systemd[1]: Stopped target network.target. May 8 00:50:38.397964 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:50:38.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.397999 systemd[1]: Closed iscsiuio.socket. May 8 00:50:38.398937 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:50:38.398974 systemd[1]: Stopped ignition-disks.service. May 8 00:50:38.401248 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:50:38.401288 systemd[1]: Stopped ignition-kargs.service. 
May 8 00:50:38.402202 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:50:38.402236 systemd[1]: Stopped ignition-setup.service. May 8 00:50:38.403495 systemd[1]: Stopping systemd-networkd.service... May 8 00:50:38.404476 systemd[1]: Stopping systemd-resolved.service... May 8 00:50:38.410557 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:50:38.410651 systemd[1]: Stopped systemd-resolved.service. May 8 00:50:38.416150 systemd-networkd[740]: eth0: DHCPv6 lease lost May 8 00:50:38.417852 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:50:38.418000 audit: BPF prog-id=6 op=UNLOAD May 8 00:50:38.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.417953 systemd[1]: Stopped systemd-networkd.service. May 8 00:50:38.423000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.419239 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:50:38.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.419268 systemd[1]: Closed systemd-networkd.socket. May 8 00:50:38.425000 audit: BPF prog-id=9 op=UNLOAD May 8 00:50:38.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.420990 systemd[1]: Stopping network-cleanup.service... May 8 00:50:38.421846 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:50:38.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.421897 systemd[1]: Stopped parse-ip-for-networkd.service. May 8 00:50:38.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.423422 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:50:38.423463 systemd[1]: Stopped systemd-sysctl.service. May 8 00:50:38.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.425065 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:50:38.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.425118 systemd[1]: Stopped systemd-modules-load.service. May 8 00:50:38.426240 systemd[1]: Stopping systemd-udevd.service... May 8 00:50:38.430360 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:50:38.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' May 8 00:50:38.430776 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:50:38.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.430856 systemd[1]: Stopped sysroot-boot.service. May 8 00:50:38.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.432179 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:50:38.432227 systemd[1]: Stopped initrd-setup-root.service. May 8 00:50:38.433811 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:50:38.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.433896 systemd[1]: Stopped network-cleanup.service. May 8 00:50:38.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.434868 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:50:38.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.434982 systemd[1]: Stopped systemd-udevd.service. May 8 00:50:38.435787 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:50:38.435816 systemd[1]: Closed systemd-udevd-control.socket. May 8 00:50:38.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.436829 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:50:38.436855 systemd[1]: Closed systemd-udevd-kernel.socket. May 8 00:50:38.437823 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:50:38.437856 systemd[1]: Stopped dracut-pre-udev.service. May 8 00:50:38.438816 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:50:38.438848 systemd[1]: Stopped dracut-cmdline.service. May 8 00:50:38.440070 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:50:38.440114 systemd[1]: Stopped dracut-cmdline-ask.service. May 8 00:50:38.441749 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 8 00:50:38.442983 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 8 00:50:38.443031 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 8 00:50:38.444749 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:50:38.444786 systemd[1]: Stopped kmod-static-nodes.service. May 8 00:50:38.445430 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:50:38.445463 systemd[1]: Stopped systemd-vconsole-setup.service. 
May 8 00:50:38.447522 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 8 00:50:38.447893 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:50:38.447968 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 8 00:50:38.449332 systemd[1]: Reached target initrd-switch-root.target. May 8 00:50:38.450854 systemd[1]: Starting initrd-switch-root.service... May 8 00:50:38.456636 systemd[1]: Switching root. May 8 00:50:38.473160 systemd-journald[290]: Journal stopped May 8 00:50:40.471177 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 8 00:50:40.471226 kernel: SELinux: Class mctp_socket not defined in policy. May 8 00:50:40.471242 kernel: SELinux: Class anon_inode not defined in policy. May 8 00:50:40.471258 kernel: SELinux: the above unknown classes and permissions will be allowed May 8 00:50:40.471269 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:50:40.471279 kernel: SELinux: policy capability open_perms=1 May 8 00:50:40.471289 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:50:40.471301 kernel: SELinux: policy capability always_check_network=0 May 8 00:50:40.471315 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:50:40.471325 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:50:40.471336 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:50:40.471346 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:50:40.471357 systemd[1]: Successfully loaded SELinux policy in 32.696ms. May 8 00:50:40.471372 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.090ms. May 8 00:50:40.471384 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 00:50:40.471395 systemd[1]: Detected virtualization kvm. May 8 00:50:40.471405 systemd[1]: Detected architecture arm64. May 8 00:50:40.471415 systemd[1]: Detected first boot. May 8 00:50:40.471427 systemd[1]: Initializing machine ID from VM UUID. May 8 00:50:40.471437 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 8 00:50:40.471447 systemd[1]: Populated /etc with preset unit settings. May 8 00:50:40.471458 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:50:40.471469 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:50:40.471481 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 8 00:50:40.471492 kernel: kauditd_printk_skb: 79 callbacks suppressed May 8 00:50:40.471502 kernel: audit: type=1334 audit(1746665440.341:83): prog-id=12 op=LOAD May 8 00:50:40.471514 kernel: audit: type=1334 audit(1746665440.341:84): prog-id=3 op=UNLOAD May 8 00:50:40.471523 kernel: audit: type=1334 audit(1746665440.343:85): prog-id=13 op=LOAD May 8 00:50:40.471533 kernel: audit: type=1334 audit(1746665440.344:86): prog-id=14 op=LOAD May 8 00:50:40.471543 kernel: audit: type=1334 audit(1746665440.344:87): prog-id=4 op=UNLOAD May 8 00:50:40.471553 kernel: audit: type=1334 audit(1746665440.344:88): prog-id=5 op=UNLOAD May 8 00:50:40.471563 kernel: audit: type=1334 audit(1746665440.346:89): prog-id=15 op=LOAD May 8 00:50:40.471572 kernel: audit: type=1334 audit(1746665440.346:90): prog-id=12 op=UNLOAD May 8 00:50:40.471582 kernel: audit: type=1334 audit(1746665440.347:91): prog-id=16 op=LOAD May 8 00:50:40.471591 kernel: audit: type=1334 audit(1746665440.347:92): prog-id=17 op=LOAD May 8 00:50:40.471604 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:50:40.471617 systemd[1]: Stopped initrd-switch-root.service. May 8 00:50:40.471628 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:50:40.471638 systemd[1]: Created slice system-addon\x2dconfig.slice. May 8 00:50:40.471649 systemd[1]: Created slice system-addon\x2drun.slice. May 8 00:50:40.471659 systemd[1]: Created slice system-getty.slice. May 8 00:50:40.471669 systemd[1]: Created slice system-modprobe.slice. May 8 00:50:40.471680 systemd[1]: Created slice system-serial\x2dgetty.slice. May 8 00:50:40.471692 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 8 00:50:40.471703 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 8 00:50:40.471713 systemd[1]: Created slice user.slice. May 8 00:50:40.471723 systemd[1]: Started systemd-ask-password-console.path. May 8 00:50:40.471734 systemd[1]: Started systemd-ask-password-wall.path. May 8 00:50:40.471745 systemd[1]: Set up automount boot.automount. May 8 00:50:40.471756 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 8 00:50:40.471768 systemd[1]: Stopped target initrd-switch-root.target. May 8 00:50:40.471779 systemd[1]: Stopped target initrd-fs.target. May 8 00:50:40.471789 systemd[1]: Stopped target initrd-root-fs.target. May 8 00:50:40.471800 systemd[1]: Reached target integritysetup.target. May 8 00:50:40.471810 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:50:40.471821 systemd[1]: Reached target remote-fs.target. May 8 00:50:40.471832 systemd[1]: Reached target slices.target. May 8 00:50:40.471842 systemd[1]: Reached target swap.target. May 8 00:50:40.471853 systemd[1]: Reached target torcx.target. May 8 00:50:40.471864 systemd[1]: Reached target veritysetup.target. May 8 00:50:40.471875 systemd[1]: Listening on systemd-coredump.socket. May 8 00:50:40.471885 systemd[1]: Listening on systemd-initctl.socket. May 8 00:50:40.471896 systemd[1]: Listening on systemd-networkd.socket. May 8 00:50:40.471907 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:50:40.471917 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:50:40.471928 systemd[1]: Listening on systemd-userdbd.socket. May 8 00:50:40.471940 systemd[1]: Mounting dev-hugepages.mount... May 8 00:50:40.471951 systemd[1]: Mounting dev-mqueue.mount... May 8 00:50:40.471962 systemd[1]: Mounting media.mount... May 8 00:50:40.471973 systemd[1]: Mounting sys-kernel-debug.mount... 
May 8 00:50:40.471984 systemd[1]: Mounting sys-kernel-tracing.mount... May 8 00:50:40.471996 systemd[1]: Mounting tmp.mount... May 8 00:50:40.472006 systemd[1]: Starting flatcar-tmpfiles.service... May 8 00:50:40.472017 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:50:40.472027 systemd[1]: Starting kmod-static-nodes.service... May 8 00:50:40.472038 systemd[1]: Starting modprobe@configfs.service... May 8 00:50:40.472048 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:50:40.472066 systemd[1]: Starting modprobe@drm.service... May 8 00:50:40.472078 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:50:40.472089 systemd[1]: Starting modprobe@fuse.service... May 8 00:50:40.472113 systemd[1]: Starting modprobe@loop.service... May 8 00:50:40.472126 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:50:40.472137 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:50:40.472147 systemd[1]: Stopped systemd-fsck-root.service. May 8 00:50:40.472162 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:50:40.472172 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:50:40.472186 systemd[1]: Stopped systemd-journald.service. May 8 00:50:40.472196 kernel: fuse: init (API version 7.34) May 8 00:50:40.472206 systemd[1]: Starting systemd-journald.service... May 8 00:50:40.472217 systemd[1]: Starting systemd-modules-load.service... May 8 00:50:40.472228 systemd[1]: Starting systemd-network-generator.service... May 8 00:50:40.472238 systemd[1]: Starting systemd-remount-fs.service... May 8 00:50:40.472249 systemd[1]: Starting systemd-udev-trigger.service... May 8 00:50:40.472259 kernel: loop: module loaded May 8 00:50:40.472269 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:50:40.472282 systemd[1]: Stopped verity-setup.service. May 8 00:50:40.472292 systemd[1]: Mounted dev-hugepages.mount. May 8 00:50:40.472303 systemd[1]: Mounted dev-mqueue.mount. May 8 00:50:40.472313 systemd[1]: Mounted media.mount. May 8 00:50:40.472323 systemd[1]: Mounted sys-kernel-debug.mount. May 8 00:50:40.472334 systemd[1]: Mounted sys-kernel-tracing.mount. May 8 00:50:40.472344 systemd[1]: Mounted tmp.mount. May 8 00:50:40.472354 systemd[1]: Finished kmod-static-nodes.service. May 8 00:50:40.472364 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:50:40.472376 systemd[1]: Finished modprobe@configfs.service. May 8 00:50:40.472387 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:50:40.472397 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:50:40.472407 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:50:40.472418 systemd[1]: Finished modprobe@drm.service. May 8 00:50:40.472450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:50:40.472462 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:50:40.472475 systemd-journald[992]: Journal started May 8 00:50:40.472516 systemd-journald[992]: Runtime Journal (/run/log/journal/bf7bfda477694e0c89771bc9e392b4c2) is 6.0M, max 48.7M, 42.6M free. 
May 8 00:50:38.530000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:50:38.595000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:50:38.595000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:50:38.595000 audit: BPF prog-id=10 op=LOAD May 8 00:50:38.595000 audit: BPF prog-id=10 op=UNLOAD May 8 00:50:38.595000 audit: BPF prog-id=11 op=LOAD May 8 00:50:38.595000 audit: BPF prog-id=11 op=UNLOAD May 8 00:50:38.645000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 8 00:50:38.645000 audit[929]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b4 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:50:38.645000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 8 00:50:38.646000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 8 00:50:38.646000 audit[929]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:50:38.646000 audit: CWD cwd="/" May 8 00:50:38.646000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:50:38.646000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:50:38.646000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 8 00:50:40.341000 audit: BPF prog-id=12 op=LOAD May 8 00:50:40.341000 audit: BPF prog-id=3 op=UNLOAD May 8 00:50:40.343000 audit: BPF prog-id=13 op=LOAD May 8 00:50:40.344000 audit: BPF prog-id=14 op=LOAD May 8 00:50:40.344000 audit: BPF prog-id=4 op=UNLOAD May 8 00:50:40.344000 audit: BPF prog-id=5 op=UNLOAD May 8 00:50:40.346000 audit: BPF prog-id=15 op=LOAD May 8 00:50:40.346000 audit: BPF prog-id=12 op=UNLOAD May 8 00:50:40.347000 audit: BPF prog-id=16 
op=LOAD May 8 00:50:40.347000 audit: BPF prog-id=17 op=LOAD May 8 00:50:40.347000 audit: BPF prog-id=13 op=UNLOAD May 8 00:50:40.347000 audit: BPF prog-id=14 op=UNLOAD May 8 00:50:40.348000 audit: BPF prog-id=18 op=LOAD May 8 00:50:40.348000 audit: BPF prog-id=15 op=UNLOAD May 8 00:50:40.348000 audit: BPF prog-id=19 op=LOAD May 8 00:50:40.348000 audit: BPF prog-id=20 op=LOAD May 8 00:50:40.348000 audit: BPF prog-id=16 op=UNLOAD May 8 00:50:40.348000 audit: BPF prog-id=17 op=UNLOAD May 8 00:50:40.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.361000 audit: BPF prog-id=18 op=UNLOAD May 8 00:50:40.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.436000 audit: BPF prog-id=21 op=LOAD May 8 00:50:40.437000 audit: BPF prog-id=22 op=LOAD May 8 00:50:40.437000 audit: BPF prog-id=23 op=LOAD May 8 00:50:40.437000 audit: BPF prog-id=19 op=UNLOAD May 8 00:50:40.437000 audit: BPF prog-id=20 op=UNLOAD May 8 00:50:40.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:50:40.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.469000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 8 00:50:40.469000 audit[992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=5 a1=fffffa2e9c40 a2=4000 a3=1 items=0 ppid=1 pid=992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:50:40.469000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 8 00:50:40.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.643779 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:50:40.340584 systemd[1]: Queued start job for default target multi-user.target. May 8 00:50:38.644054 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 8 00:50:40.340597 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 8 00:50:38.644075 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 8 00:50:40.349281 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:50:38.644120 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 8 00:50:38.644129 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="skipped missing lower profile" missing profile=oem May 8 00:50:40.474347 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 8 00:50:40.474365 systemd[1]: Finished modprobe@fuse.service. May 8 00:50:38.644157 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 8 00:50:40.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:38.644169 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 8 00:50:38.644362 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 8 00:50:38.644396 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 8 00:50:38.644408 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 8 00:50:38.644815 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 8 00:50:38.644848 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 8 00:50:38.644865 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 8 00:50:38.644879 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 8 00:50:38.644895 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 8 00:50:40.475573 systemd[1]: Started systemd-journald.service. May 8 00:50:38.644907 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:38Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 8 00:50:40.094389 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:40Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:50:40.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:50:40.094653 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:40Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:50:40.094748 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:40Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:50:40.094907 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:40Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:50:40.094954 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:40Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 8 00:50:40.095008 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-08T00:50:40Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 8 00:50:40.476407 systemd[1]: Finished flatcar-tmpfiles.service. May 8 00:50:40.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.477231 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:50:40.477370 systemd[1]: Finished modprobe@loop.service. May 8 00:50:40.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.478357 systemd[1]: Finished systemd-modules-load.service. May 8 00:50:40.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.479281 systemd[1]: Finished systemd-network-generator.service. May 8 00:50:40.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.480219 systemd[1]: Finished systemd-remount-fs.service. 
May 8 00:50:40.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.481447 systemd[1]: Reached target network-pre.target. May 8 00:50:40.483054 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 8 00:50:40.484678 systemd[1]: Mounting sys-kernel-config.mount... May 8 00:50:40.485410 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:50:40.486685 systemd[1]: Starting systemd-hwdb-update.service... May 8 00:50:40.488309 systemd[1]: Starting systemd-journal-flush.service... May 8 00:50:40.488970 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:50:40.489982 systemd[1]: Starting systemd-random-seed.service... May 8 00:50:40.490708 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:50:40.491761 systemd[1]: Starting systemd-sysctl.service... May 8 00:50:40.493530 systemd[1]: Starting systemd-sysusers.service... May 8 00:50:40.499185 systemd-journald[992]: Time spent on flushing to /var/log/journal/bf7bfda477694e0c89771bc9e392b4c2 is 22.878ms for 984 entries. May 8 00:50:40.499185 systemd-journald[992]: System Journal (/var/log/journal/bf7bfda477694e0c89771bc9e392b4c2) is 8.0M, max 195.6M, 187.6M free. May 8 00:50:40.529357 systemd-journald[992]: Received client request to flush runtime journal. May 8 00:50:40.506000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.496542 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 8 00:50:40.497292 systemd[1]: Mounted sys-kernel-config.mount. May 8 00:50:40.530553 udevadm[1031]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 8 00:50:40.505853 systemd[1]: Finished systemd-sysctl.service. May 8 00:50:40.508237 systemd[1]: Finished systemd-random-seed.service. May 8 00:50:40.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.509404 systemd[1]: Reached target first-boot-complete.target. May 8 00:50:40.510571 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:50:40.512399 systemd[1]: Starting systemd-udev-settle.service... May 8 00:50:40.518621 systemd[1]: Finished systemd-sysusers.service. 
May 8 00:50:40.520503 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 8 00:50:40.530230 systemd[1]: Finished systemd-journal-flush.service. May 8 00:50:40.537843 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 8 00:50:40.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.855387 systemd[1]: Finished systemd-hwdb-update.service. May 8 00:50:40.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.856000 audit: BPF prog-id=24 op=LOAD May 8 00:50:40.856000 audit: BPF prog-id=25 op=LOAD May 8 00:50:40.856000 audit: BPF prog-id=7 op=UNLOAD May 8 00:50:40.856000 audit: BPF prog-id=8 op=UNLOAD May 8 00:50:40.857331 systemd[1]: Starting systemd-udevd.service... May 8 00:50:40.879264 systemd-udevd[1035]: Using default interface naming scheme 'v252'. May 8 00:50:40.891405 systemd[1]: Started systemd-udevd.service. May 8 00:50:40.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.895000 audit: BPF prog-id=26 op=LOAD May 8 00:50:40.895853 systemd[1]: Starting systemd-networkd.service... May 8 00:50:40.910000 audit: BPF prog-id=27 op=LOAD May 8 00:50:40.910000 audit: BPF prog-id=28 op=LOAD May 8 00:50:40.910000 audit: BPF prog-id=29 op=LOAD May 8 00:50:40.911126 systemd[1]: Starting systemd-userdbd.service... May 8 00:50:40.913484 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 8 00:50:40.939081 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:50:40.954235 systemd[1]: Started systemd-userdbd.service. May 8 00:50:40.954000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.994843 systemd[1]: Finished systemd-udev-settle.service. May 8 00:50:40.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:40.996612 systemd[1]: Starting lvm2-activation-early.service... May 8 00:50:41.008686 systemd-networkd[1038]: lo: Link UP May 8 00:50:41.008694 systemd-networkd[1038]: lo: Gained carrier May 8 00:50:41.009018 systemd-networkd[1038]: Enumeration completed May 8 00:50:41.009121 systemd[1]: Started systemd-networkd.service. May 8 00:50:41.009141 systemd-networkd[1038]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:50:41.010398 systemd-networkd[1038]: eth0: Link UP May 8 00:50:41.010405 systemd-networkd[1038]: eth0: Gained carrier May 8 00:50:41.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.012847 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
May 8 00:50:41.036215 systemd-networkd[1038]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:50:41.041884 systemd[1]: Finished lvm2-activation-early.service. May 8 00:50:41.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.042694 systemd[1]: Reached target cryptsetup.target. May 8 00:50:41.044331 systemd[1]: Starting lvm2-activation.service... May 8 00:50:41.047838 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:50:41.072897 systemd[1]: Finished lvm2-activation.service. May 8 00:50:41.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.073695 systemd[1]: Reached target local-fs-pre.target. May 8 00:50:41.074397 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:50:41.074433 systemd[1]: Reached target local-fs.target. May 8 00:50:41.074992 systemd[1]: Reached target machines.target. May 8 00:50:41.076721 systemd[1]: Starting ldconfig.service... May 8 00:50:41.077627 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:50:41.077685 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:50:41.078810 systemd[1]: Starting systemd-boot-update.service... May 8 00:50:41.080653 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 8 00:50:41.082688 systemd[1]: Starting systemd-machine-id-commit.service... May 8 00:50:41.085618 systemd[1]: Starting systemd-sysext.service... May 8 00:50:41.087168 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) May 8 00:50:41.088234 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 8 00:50:41.099409 systemd[1]: Unmounting usr-share-oem.mount... May 8 00:50:41.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.105527 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 8 00:50:41.109214 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 8 00:50:41.109426 systemd[1]: Unmounted usr-share-oem.mount. May 8 00:50:41.152121 kernel: loop0: detected capacity change from 0 to 189592 May 8 00:50:41.156468 systemd[1]: Finished systemd-machine-id-commit.service. May 8 00:50:41.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:50:41.165124 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:50:41.169406 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) May 8 00:50:41.169406 systemd-fsck[1080]: /dev/vda1: 236 files, 117182/258078 clusters May 8 00:50:41.171554 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 8 00:50:41.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.186162 kernel: loop1: detected capacity change from 0 to 189592 May 8 00:50:41.194687 (sd-sysext)[1083]: Using extensions 'kubernetes'. May 8 00:50:41.195040 (sd-sysext)[1083]: Merged extensions into '/usr'. May 8 00:50:41.213994 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:50:41.215416 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:50:41.217294 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:50:41.219865 systemd[1]: Starting modprobe@loop.service... May 8 00:50:41.220789 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:50:41.220925 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:50:41.221790 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:50:41.221924 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:50:41.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.222000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.222970 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:50:41.223099 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:50:41.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.224365 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:50:41.224465 systemd[1]: Finished modprobe@loop.service. May 8 00:50:41.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.225000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.225684 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 8 00:50:41.225788 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:50:41.274876 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:50:41.279096 systemd[1]: Finished ldconfig.service. May 8 00:50:41.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.454582 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:50:41.456412 systemd[1]: Mounting boot.mount... May 8 00:50:41.458098 systemd[1]: Mounting usr-share-oem.mount... May 8 00:50:41.464995 systemd[1]: Mounted boot.mount. May 8 00:50:41.465953 systemd[1]: Mounted usr-share-oem.mount. May 8 00:50:41.467695 systemd[1]: Finished systemd-sysext.service. May 8 00:50:41.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.470433 systemd[1]: Starting ensure-sysext.service... May 8 00:50:41.471956 systemd[1]: Starting systemd-tmpfiles-setup.service... May 8 00:50:41.473088 systemd[1]: Finished systemd-boot-update.service. May 8 00:50:41.473000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.476965 systemd[1]: Reloading. May 8 00:50:41.481095 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 8 00:50:41.481777 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:50:41.483044 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:50:41.503911 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-05-08T00:50:41Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:50:41.504255 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-05-08T00:50:41Z" level=info msg="torcx already run" May 8 00:50:41.563462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:50:41.563482 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:50:41.578432 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 8 00:50:41.618000 audit: BPF prog-id=30 op=LOAD May 8 00:50:41.618000 audit: BPF prog-id=21 op=UNLOAD May 8 00:50:41.618000 audit: BPF prog-id=31 op=LOAD May 8 00:50:41.618000 audit: BPF prog-id=32 op=LOAD May 8 00:50:41.618000 audit: BPF prog-id=22 op=UNLOAD May 8 00:50:41.618000 audit: BPF prog-id=23 op=UNLOAD May 8 00:50:41.619000 audit: BPF prog-id=33 op=LOAD May 8 00:50:41.619000 audit: BPF prog-id=34 op=LOAD May 8 00:50:41.619000 audit: BPF prog-id=24 op=UNLOAD May 8 00:50:41.619000 audit: BPF prog-id=25 op=UNLOAD May 8 00:50:41.621000 audit: BPF prog-id=35 op=LOAD May 8 00:50:41.621000 audit: BPF prog-id=27 op=UNLOAD May 8 00:50:41.621000 audit: BPF prog-id=36 op=LOAD May 8 00:50:41.621000 audit: BPF prog-id=37 op=LOAD May 8 00:50:41.621000 audit: BPF prog-id=28 op=UNLOAD May 8 00:50:41.621000 audit: BPF prog-id=29 op=UNLOAD May 8 00:50:41.621000 audit: BPF prog-id=38 op=LOAD May 8 00:50:41.621000 audit: BPF prog-id=26 op=UNLOAD May 8 00:50:41.624180 systemd[1]: Finished systemd-tmpfiles-setup.service. May 8 00:50:41.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.628230 systemd[1]: Starting audit-rules.service... May 8 00:50:41.629825 systemd[1]: Starting clean-ca-certificates.service... May 8 00:50:41.633000 audit: BPF prog-id=39 op=LOAD May 8 00:50:41.631763 systemd[1]: Starting systemd-journal-catalog-update.service... May 8 00:50:41.635000 audit: BPF prog-id=40 op=LOAD May 8 00:50:41.634370 systemd[1]: Starting systemd-resolved.service... May 8 00:50:41.636582 systemd[1]: Starting systemd-timesyncd.service... May 8 00:50:41.638721 systemd[1]: Starting systemd-update-utmp.service... May 8 00:50:41.643000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 8 00:50:41.643184 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:50:41.644299 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:50:41.646116 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:50:41.647766 systemd[1]: Starting modprobe@loop.service... May 8 00:50:41.648454 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:50:41.648579 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:50:41.649443 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:50:41.649570 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:50:41.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.650000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.650696 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 8 00:50:41.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.650811 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:50:41.651808 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:50:41.651936 systemd[1]: Finished modprobe@loop.service. May 8 00:50:41.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.653000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.653170 systemd[1]: Finished clean-ca-certificates.service. May 8 00:50:41.655929 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:50:41.656076 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:50:41.656171 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:50:41.657707 systemd[1]: Finished systemd-update-utmp.service. May 8 00:50:41.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.659384 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:50:41.660535 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:50:41.662285 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:50:41.663886 systemd[1]: Starting modprobe@loop.service... May 8 00:50:41.664642 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:50:41.664768 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:50:41.664862 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:50:41.665688 systemd[1]: Finished systemd-journal-catalog-update.service. May 8 00:50:41.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.666766 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 8 00:50:41.666874 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:50:41.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.667907 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:50:41.668015 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:50:41.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.669081 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:50:41.669213 systemd[1]: Finished modprobe@loop.service. May 8 00:50:41.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.672360 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:50:41.673706 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:50:41.675513 systemd[1]: Starting modprobe@drm.service... May 8 00:50:41.677050 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:50:41.678935 systemd[1]: Starting modprobe@loop.service... May 8 00:50:41.679670 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:50:41.679812 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:50:41.680992 systemd[1]: Starting systemd-networkd-wait-online.service... May 8 00:50:41.683499 systemd[1]: Starting systemd-update-done.service... May 8 00:50:41.684284 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:50:41.685532 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:50:41.685690 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:50:41.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:50:41.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.686754 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:50:41.686872 systemd[1]: Finished modprobe@drm.service. May 8 00:50:41.687893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:50:41.688004 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:50:41.689261 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:50:41.689363 systemd[1]: Finished modprobe@loop.service. May 8 00:50:41.690404 systemd[1]: Finished systemd-update-done.service. May 8 00:50:41.691578 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:50:41.691692 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:50:41.691000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.695853 systemd[1]: Finished ensure-sysext.service. May 8 00:50:41.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:50:41.708573 systemd-resolved[1155]: Positive Trust Anchors: May 8 00:50:41.708834 systemd-resolved[1155]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:50:41.708919 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:50:41.714000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 8 00:50:41.714000 audit[1184]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcef11e50 a2=420 a3=0 items=0 ppid=1151 pid=1184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:50:41.714000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 8 00:50:41.715283 augenrules[1184]: No rules May 8 00:50:41.716035 systemd[1]: Finished audit-rules.service. May 8 00:50:41.725573 systemd-resolved[1155]: Defaulting to hostname 'linux'. May 8 00:50:41.726837 systemd[1]: Started systemd-timesyncd.service. May 8 00:50:41.727621 systemd[1]: Reached target time-set.target. May 8 00:50:41.733211 systemd[1]: Started systemd-resolved.service. May 8 00:50:41.733706 systemd-timesyncd[1156]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:50:41.733770 systemd-timesyncd[1156]: Initial clock synchronization to Thu 2025-05-08 00:50:41.461072 UTC. May 8 00:50:41.734059 systemd[1]: Reached target network.target. May 8 00:50:41.734709 systemd[1]: Reached target nss-lookup.target. May 8 00:50:41.735316 systemd[1]: Reached target sysinit.target. May 8 00:50:41.735945 systemd[1]: Started motdgen.path. May 8 00:50:41.736509 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 8 00:50:41.737515 systemd[1]: Started logrotate.timer. May 8 00:50:41.738225 systemd[1]: Started mdadm.timer. May 8 00:50:41.738751 systemd[1]: Started systemd-tmpfiles-clean.timer. May 8 00:50:41.739389 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:50:41.739418 systemd[1]: Reached target paths.target. May 8 00:50:41.739950 systemd[1]: Reached target timers.target. May 8 00:50:41.740815 systemd[1]: Listening on dbus.socket. May 8 00:50:41.742381 systemd[1]: Starting docker.socket... May 8 00:50:41.745378 systemd[1]: Listening on sshd.socket. May 8 00:50:41.746143 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:50:41.746619 systemd[1]: Listening on docker.socket. May 8 00:50:41.747312 systemd[1]: Reached target sockets.target. May 8 00:50:41.747893 systemd[1]: Reached target basic.target. May 8 00:50:41.748523 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:50:41.748560 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:50:41.749474 systemd[1]: Starting containerd.service... 
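The audit record above carries the invoking command only as a hex-encoded, NUL-separated PROCTITLE field. A minimal Python sketch, using the hex string copied from that record, shows how it decodes back into the auditctl invocation that loaded the (empty) rule set reported by augenrules:

```python
# Decode the NUL-separated PROCTITLE hex from the audit SYSCALL record above.
proctitle_hex = (
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
)
argv = bytes.fromhex(proctitle_hex).split(b"\x00")
print([part.decode() for part in argv])
# -> ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']
```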
May 8 00:50:41.750963 systemd[1]: Starting dbus.service... May 8 00:50:41.752728 systemd[1]: Starting enable-oem-cloudinit.service... May 8 00:50:41.754707 systemd[1]: Starting extend-filesystems.service... May 8 00:50:41.755481 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 8 00:50:41.756797 systemd[1]: Starting motdgen.service... May 8 00:50:41.760358 systemd[1]: Starting ssh-key-proc-cmdline.service... May 8 00:50:41.762340 systemd[1]: Starting sshd-keygen.service... May 8 00:50:41.769191 systemd[1]: Starting systemd-logind.service... May 8 00:50:41.769969 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:50:41.772647 jq[1194]: false May 8 00:50:41.773033 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:50:41.773518 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:50:41.774200 systemd[1]: Starting update-engine.service... May 8 00:50:41.776593 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 8 00:50:41.782446 jq[1209]: true May 8 00:50:41.784458 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:50:41.784697 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 8 00:50:41.785000 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:50:41.785178 systemd[1]: Finished ssh-key-proc-cmdline.service. May 8 00:50:41.788245 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:50:41.788468 systemd[1]: Finished motdgen.service. May 8 00:50:41.789717 extend-filesystems[1195]: Found loop1 May 8 00:50:41.791516 jq[1214]: true May 8 00:50:41.792369 extend-filesystems[1195]: Found vda May 8 00:50:41.793325 extend-filesystems[1195]: Found vda1 May 8 00:50:41.795664 extend-filesystems[1195]: Found vda2 May 8 00:50:41.797456 extend-filesystems[1195]: Found vda3 May 8 00:50:41.798440 extend-filesystems[1195]: Found usr May 8 00:50:41.799295 extend-filesystems[1195]: Found vda4 May 8 00:50:41.800144 extend-filesystems[1195]: Found vda6 May 8 00:50:41.800710 extend-filesystems[1195]: Found vda7 May 8 00:50:41.801610 extend-filesystems[1195]: Found vda9 May 8 00:50:41.801610 extend-filesystems[1195]: Checking size of /dev/vda9 May 8 00:50:41.814183 dbus-daemon[1193]: [system] SELinux support is enabled May 8 00:50:41.814361 systemd[1]: Started dbus.service. May 8 00:50:41.816638 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:50:41.816663 systemd[1]: Reached target system-config.target. May 8 00:50:41.817446 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:50:41.817472 systemd[1]: Reached target user-config.target. 
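extend-filesystems is checking the size of /dev/vda9 here, and the entries that follow report resize2fs growing it online from 553472 to 1864699 blocks with a 4 KiB block size. A quick arithmetic sketch of what those block counts mean:

```python
# /dev/vda9 is resized online from 553472 to 1864699 blocks; the kernel reports
# the filesystem as "(4k) blocks", i.e. a 4096-byte block size.
BLOCK = 4096
before, after = 553472, 1864699
to_gib = lambda blocks: blocks * BLOCK / 2**30
print(f"before: {to_gib(before):.2f} GiB, after: {to_gib(after):.2f} GiB")
# roughly 2.11 GiB -> 7.11 GiB: the root filesystem is grown to fill the partition.
```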
May 8 00:50:41.854383 extend-filesystems[1195]: Resized partition /dev/vda9 May 8 00:50:41.857522 extend-filesystems[1241]: resize2fs 1.46.5 (30-Dec-2021) May 8 00:50:41.869890 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:50:41.858212 systemd-logind[1202]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:50:41.858371 systemd-logind[1202]: New seat seat0. May 8 00:50:41.874483 systemd[1]: Started systemd-logind.service. May 8 00:50:41.882001 update_engine[1207]: I0508 00:50:41.879297 1207 main.cc:92] Flatcar Update Engine starting May 8 00:50:41.890791 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:50:41.889248 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 8 00:50:41.901003 bash[1238]: Updated "/home/core/.ssh/authorized_keys" May 8 00:50:41.901168 extend-filesystems[1241]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:50:41.901168 extend-filesystems[1241]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:50:41.901168 extend-filesystems[1241]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:50:41.892805 systemd[1]: Started update-engine.service. May 8 00:50:41.904382 update_engine[1207]: I0508 00:50:41.892831 1207 update_check_scheduler.cc:74] Next update check in 11m3s May 8 00:50:41.904453 extend-filesystems[1195]: Resized filesystem in /dev/vda9 May 8 00:50:41.895732 systemd[1]: Started locksmithd.service. May 8 00:50:41.901833 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:50:41.902004 systemd[1]: Finished extend-filesystems.service. May 8 00:50:41.911748 env[1215]: time="2025-05-08T00:50:41.911693360Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.929207600Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.929372400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.930731480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.180-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.930758160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.930971400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.930989720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.931002960Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.931013000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.931093000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:50:41.932009 env[1215]: time="2025-05-08T00:50:41.931470000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:50:41.932382 env[1215]: time="2025-05-08T00:50:41.931591720Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:50:41.932382 env[1215]: time="2025-05-08T00:50:41.931607840Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:50:41.932382 env[1215]: time="2025-05-08T00:50:41.931659160Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 8 00:50:41.932382 env[1215]: time="2025-05-08T00:50:41.931672360Z" level=info msg="metadata content store policy set" policy=shared May 8 00:50:41.934860 env[1215]: time="2025-05-08T00:50:41.934828000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:50:41.934860 env[1215]: time="2025-05-08T00:50:41.934858400Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:50:41.934946 env[1215]: time="2025-05-08T00:50:41.934874360Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:50:41.934946 env[1215]: time="2025-05-08T00:50:41.934927960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:50:41.934989 env[1215]: time="2025-05-08T00:50:41.934944680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:50:41.934989 env[1215]: time="2025-05-08T00:50:41.934958880Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:50:41.934989 env[1215]: time="2025-05-08T00:50:41.934971080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:50:41.935386 env[1215]: time="2025-05-08T00:50:41.935360640Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:50:41.935427 env[1215]: time="2025-05-08T00:50:41.935391920Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 8 00:50:41.935427 env[1215]: time="2025-05-08T00:50:41.935406600Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:50:41.935427 env[1215]: time="2025-05-08T00:50:41.935419120Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:50:41.935427 env[1215]: time="2025-05-08T00:50:41.935431680Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 8 00:50:41.935613 env[1215]: time="2025-05-08T00:50:41.935534240Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 May 8 00:50:41.935613 env[1215]: time="2025-05-08T00:50:41.935608080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:50:41.935905 env[1215]: time="2025-05-08T00:50:41.935882920Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:50:41.935946 env[1215]: time="2025-05-08T00:50:41.935919600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:50:41.935946 env[1215]: time="2025-05-08T00:50:41.935938760Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:50:41.936278 env[1215]: time="2025-05-08T00:50:41.936153480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936278 env[1215]: time="2025-05-08T00:50:41.936173160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936278 env[1215]: time="2025-05-08T00:50:41.936186760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936278 env[1215]: time="2025-05-08T00:50:41.936199360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936278 env[1215]: time="2025-05-08T00:50:41.936211720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936278 env[1215]: time="2025-05-08T00:50:41.936224040Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936278 env[1215]: time="2025-05-08T00:50:41.936235680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936278 env[1215]: time="2025-05-08T00:50:41.936247960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936278 env[1215]: time="2025-05-08T00:50:41.936261160Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:50:41.936460 env[1215]: time="2025-05-08T00:50:41.936377080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936460 env[1215]: time="2025-05-08T00:50:41.936393160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936460 env[1215]: time="2025-05-08T00:50:41.936405680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936460 env[1215]: time="2025-05-08T00:50:41.936417120Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:50:41.936460 env[1215]: time="2025-05-08T00:50:41.936431680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 8 00:50:41.936460 env[1215]: time="2025-05-08T00:50:41.936443680Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 8 00:50:41.936578 env[1215]: time="2025-05-08T00:50:41.936461560Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 8 00:50:41.936578 env[1215]: time="2025-05-08T00:50:41.936499960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:50:41.936788 env[1215]: time="2025-05-08T00:50:41.936698080Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:50:41.936788 env[1215]: time="2025-05-08T00:50:41.936756160Z" level=info msg="Connect containerd service" May 8 00:50:41.937652 env[1215]: time="2025-05-08T00:50:41.936790120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:50:41.937652 env[1215]: time="2025-05-08T00:50:41.937599880Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:50:41.938041 env[1215]: time="2025-05-08T00:50:41.937945080Z" level=info msg="Start subscribing containerd event" May 8 00:50:41.938041 env[1215]: time="2025-05-08T00:50:41.938008720Z" level=info msg="Start recovering state" May 8 00:50:41.938277 env[1215]: time="2025-05-08T00:50:41.938245080Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 8 00:50:41.938486 env[1215]: time="2025-05-08T00:50:41.938434080Z" level=info msg="Start event monitor" May 8 00:50:41.938486 env[1215]: time="2025-05-08T00:50:41.938466600Z" level=info msg="Start snapshots syncer" May 8 00:50:41.938486 env[1215]: time="2025-05-08T00:50:41.938479880Z" level=info msg="Start cni network conf syncer for default" May 8 00:50:41.938575 env[1215]: time="2025-05-08T00:50:41.938494000Z" level=info msg="Start streaming server" May 8 00:50:41.942694 env[1215]: time="2025-05-08T00:50:41.939274000Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:50:41.942694 env[1215]: time="2025-05-08T00:50:41.939374040Z" level=info msg="containerd successfully booted in 0.028883s" May 8 00:50:41.939473 systemd[1]: Started containerd.service. May 8 00:50:41.950490 locksmithd[1243]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:50:42.288308 systemd-networkd[1038]: eth0: Gained IPv6LL May 8 00:50:42.289996 systemd[1]: Finished systemd-networkd-wait-online.service. May 8 00:50:42.291012 systemd[1]: Reached target network-online.target. May 8 00:50:42.293139 systemd[1]: Starting kubelet.service... May 8 00:50:42.777844 systemd[1]: Started kubelet.service. May 8 00:50:42.783370 sshd_keygen[1208]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:50:42.799738 systemd[1]: Finished sshd-keygen.service. May 8 00:50:42.801759 systemd[1]: Starting issuegen.service... May 8 00:50:42.805968 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:50:42.806145 systemd[1]: Finished issuegen.service. May 8 00:50:42.807952 systemd[1]: Starting systemd-user-sessions.service... May 8 00:50:42.815551 systemd[1]: Finished systemd-user-sessions.service. May 8 00:50:42.817470 systemd[1]: Started getty@tty1.service. May 8 00:50:42.819405 systemd[1]: Started serial-getty@ttyAMA0.service. May 8 00:50:42.820433 systemd[1]: Reached target getty.target. May 8 00:50:42.821127 systemd[1]: Reached target multi-user.target. May 8 00:50:42.822787 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 8 00:50:42.828470 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 8 00:50:42.828603 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 8 00:50:42.829563 systemd[1]: Startup finished in 567ms (kernel) + 3.925s (initrd) + 4.336s (userspace) = 8.828s. May 8 00:50:43.213397 kubelet[1258]: E0508 00:50:43.213294 1258 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:50:43.215174 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:50:43.215303 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:50:46.816648 systemd[1]: Created slice system-sshd.slice. May 8 00:50:46.817708 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:33522.service. May 8 00:50:46.862005 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 33522 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:50:46.864179 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:50:46.872132 systemd[1]: Created slice user-500.slice. 
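The first kubelet start above exits because /var/lib/kubelet/config.yaml does not exist yet. A minimal sketch of what such a KubeletConfiguration file typically contains; apart from cgroupDriver and staticPodPath, which later entries in this log show the kubelet actually using, the content is an illustrative assumption rather than this host's real configuration:

```python
# Sketch of the file the failing kubelet start is looking for
# (/var/lib/kubelet/config.yaml). cgroupDriver=systemd and the static pod path
# match values reported further down in this log; everything else is assumed.
KUBELET_CONFIG = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
"""
print(KUBELET_CONFIG)  # in practice this text would be written to /var/lib/kubelet/config.yaml
```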
May 8 00:50:46.873243 systemd[1]: Starting user-runtime-dir@500.service... May 8 00:50:46.875179 systemd-logind[1202]: New session 1 of user core. May 8 00:50:46.881475 systemd[1]: Finished user-runtime-dir@500.service. May 8 00:50:46.882754 systemd[1]: Starting user@500.service... May 8 00:50:46.886036 (systemd)[1284]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:50:46.958384 systemd[1284]: Queued start job for default target default.target. May 8 00:50:46.958875 systemd[1284]: Reached target paths.target. May 8 00:50:46.962388 systemd[1284]: Reached target sockets.target. May 8 00:50:46.962433 systemd[1284]: Reached target timers.target. May 8 00:50:46.962445 systemd[1284]: Reached target basic.target. May 8 00:50:46.962509 systemd[1284]: Reached target default.target. May 8 00:50:46.962536 systemd[1284]: Startup finished in 70ms. May 8 00:50:46.962581 systemd[1]: Started user@500.service. May 8 00:50:46.963523 systemd[1]: Started session-1.scope. May 8 00:50:47.013455 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:33536.service. May 8 00:50:47.054410 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 33536 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:50:47.055738 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:50:47.059225 systemd-logind[1202]: New session 2 of user core. May 8 00:50:47.060658 systemd[1]: Started session-2.scope. May 8 00:50:47.113139 sshd[1293]: pam_unix(sshd:session): session closed for user core May 8 00:50:47.115664 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:33536.service: Deactivated successfully. May 8 00:50:47.116308 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:50:47.116774 systemd-logind[1202]: Session 2 logged out. Waiting for processes to exit. May 8 00:50:47.117695 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:33542.service. May 8 00:50:47.118289 systemd-logind[1202]: Removed session 2. May 8 00:50:47.151812 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 33542 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:50:47.152957 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:50:47.156292 systemd-logind[1202]: New session 3 of user core. May 8 00:50:47.157027 systemd[1]: Started session-3.scope. May 8 00:50:47.204405 sshd[1299]: pam_unix(sshd:session): session closed for user core May 8 00:50:47.207175 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:33542.service: Deactivated successfully. May 8 00:50:47.207774 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:50:47.208278 systemd-logind[1202]: Session 3 logged out. Waiting for processes to exit. May 8 00:50:47.209216 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:33544.service. May 8 00:50:47.209869 systemd-logind[1202]: Removed session 3. May 8 00:50:47.243922 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 33544 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:50:47.245145 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:50:47.248703 systemd-logind[1202]: New session 4 of user core. May 8 00:50:47.249067 systemd[1]: Started session-4.scope. May 8 00:50:47.301817 sshd[1305]: pam_unix(sshd:session): session closed for user core May 8 00:50:47.305714 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:33544.service: Deactivated successfully. May 8 00:50:47.306344 systemd[1]: session-4.scope: Deactivated successfully. 
May 8 00:50:47.306864 systemd-logind[1202]: Session 4 logged out. Waiting for processes to exit. May 8 00:50:47.307856 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:33560.service. May 8 00:50:47.308721 systemd-logind[1202]: Removed session 4. May 8 00:50:47.342185 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 33560 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:50:47.343390 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:50:47.346540 systemd-logind[1202]: New session 5 of user core. May 8 00:50:47.347330 systemd[1]: Started session-5.scope. May 8 00:50:47.406473 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:50:47.406690 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 00:50:47.417934 systemd[1]: Starting coreos-metadata.service... May 8 00:50:47.424200 systemd[1]: coreos-metadata.service: Deactivated successfully. May 8 00:50:47.424441 systemd[1]: Finished coreos-metadata.service. May 8 00:50:47.884144 systemd[1]: Stopped kubelet.service. May 8 00:50:47.886569 systemd[1]: Starting kubelet.service... May 8 00:50:47.906928 systemd[1]: Reloading. May 8 00:50:47.968250 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2025-05-08T00:50:47Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:50:47.968283 /usr/lib/systemd/system-generators/torcx-generator[1375]: time="2025-05-08T00:50:47Z" level=info msg="torcx already run" May 8 00:50:48.124562 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:50:48.124708 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:50:48.140318 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:50:48.213480 systemd[1]: Started kubelet.service. May 8 00:50:48.214876 systemd[1]: Stopping kubelet.service... May 8 00:50:48.215115 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:50:48.215289 systemd[1]: Stopped kubelet.service. May 8 00:50:48.216741 systemd[1]: Starting kubelet.service... May 8 00:50:48.299339 systemd[1]: Started kubelet.service. May 8 00:50:48.339457 kubelet[1419]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:50:48.339457 kubelet[1419]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:50:48.339457 kubelet[1419]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
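The deprecation warnings above say those flags should move into the kubelet's config file. A rough sketch of the flag-to-field mapping, assuming the v1beta1 KubeletConfiguration field names; the values themselves are not shown because they are not this node's settings:

```python
# Mapping of the deprecated flags warned about above to KubeletConfiguration fields
# (field names assumed from the v1beta1 KubeletConfiguration schema).
FLAG_TO_CONFIG_FIELD = {
    "--container-runtime-endpoint": "containerRuntimeEndpoint",
    "--volume-plugin-dir": "volumePluginDir",
    # --pod-infra-container-image has no config-file equivalent; per the log it is
    # slated for removal once the sandbox image is obtained from the CRI runtime.
}
for flag, field in FLAG_TO_CONFIG_FIELD.items():
    print(f"{flag:35s} -> {field}")
```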
May 8 00:50:48.339811 kubelet[1419]: I0508 00:50:48.339660 1419 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:50:49.124696 kubelet[1419]: I0508 00:50:49.124656 1419 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:50:49.124856 kubelet[1419]: I0508 00:50:49.124844 1419 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:50:49.125204 kubelet[1419]: I0508 00:50:49.125187 1419 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:50:49.192179 kubelet[1419]: I0508 00:50:49.192141 1419 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:50:49.201434 kubelet[1419]: E0508 00:50:49.201394 1419 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:50:49.201434 kubelet[1419]: I0508 00:50:49.201427 1419 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:50:49.205251 kubelet[1419]: I0508 00:50:49.205227 1419 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 8 00:50:49.205537 kubelet[1419]: I0508 00:50:49.205523 1419 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:50:49.205659 kubelet[1419]: I0508 00:50:49.205637 1419 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:50:49.205807 kubelet[1419]: I0508 00:50:49.205661 1419 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.122","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:50:49.205940 kubelet[1419]: I0508 00:50:49.205930 1419 topology_manager.go:138] "Creating topology 
manager with none policy" May 8 00:50:49.205969 kubelet[1419]: I0508 00:50:49.205942 1419 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:50:49.206135 kubelet[1419]: I0508 00:50:49.206123 1419 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:49.207944 kubelet[1419]: I0508 00:50:49.207905 1419 kubelet.go:408] "Attempting to sync node with API server" May 8 00:50:49.207944 kubelet[1419]: I0508 00:50:49.207937 1419 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:50:49.208062 kubelet[1419]: I0508 00:50:49.208040 1419 kubelet.go:314] "Adding apiserver pod source" May 8 00:50:49.208062 kubelet[1419]: I0508 00:50:49.208055 1419 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:50:49.208147 kubelet[1419]: E0508 00:50:49.208127 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:49.208228 kubelet[1419]: E0508 00:50:49.208186 1419 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:49.209780 kubelet[1419]: I0508 00:50:49.209763 1419 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:50:49.211479 kubelet[1419]: I0508 00:50:49.211455 1419 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:50:49.212189 kubelet[1419]: W0508 00:50:49.212162 1419 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 8 00:50:49.212951 kubelet[1419]: I0508 00:50:49.212783 1419 server.go:1269] "Started kubelet" May 8 00:50:49.213328 kubelet[1419]: I0508 00:50:49.213303 1419 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:50:49.214830 kubelet[1419]: I0508 00:50:49.214808 1419 server.go:460] "Adding debug handlers to kubelet server" May 8 00:50:49.216902 kubelet[1419]: I0508 00:50:49.216846 1419 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:50:49.217168 kubelet[1419]: I0508 00:50:49.217145 1419 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:50:49.218889 kubelet[1419]: W0508 00:50:49.218611 1419 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.122" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 8 00:50:49.218889 kubelet[1419]: E0508 00:50:49.218731 1419 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.122\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 8 00:50:49.218889 kubelet[1419]: W0508 00:50:49.218700 1419 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 8 00:50:49.218889 kubelet[1419]: E0508 00:50:49.218840 1419 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in 
API group \"\" at the cluster scope" logger="UnhandledError" May 8 00:50:49.219718 kubelet[1419]: E0508 00:50:49.219696 1419 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:50:49.220678 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 8 00:50:49.220861 kubelet[1419]: I0508 00:50:49.220840 1419 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:50:49.221514 kubelet[1419]: I0508 00:50:49.221495 1419 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:50:49.221587 kubelet[1419]: I0508 00:50:49.220863 1419 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:50:49.221656 kubelet[1419]: I0508 00:50:49.221641 1419 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:50:49.221699 kubelet[1419]: I0508 00:50:49.221689 1419 reconciler.go:26] "Reconciler: start to sync state" May 8 00:50:49.222316 kubelet[1419]: E0508 00:50:49.222290 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:49.226187 kubelet[1419]: E0508 00:50:49.226161 1419 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.122\" not found" node="10.0.0.122" May 8 00:50:49.226254 kubelet[1419]: I0508 00:50:49.226202 1419 factory.go:221] Registration of the containerd container factory successfully May 8 00:50:49.226254 kubelet[1419]: I0508 00:50:49.226218 1419 factory.go:221] Registration of the systemd container factory successfully May 8 00:50:49.226361 kubelet[1419]: I0508 00:50:49.226311 1419 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:50:49.239416 kubelet[1419]: I0508 00:50:49.239393 1419 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:50:49.239416 kubelet[1419]: I0508 00:50:49.239409 1419 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:50:49.239527 kubelet[1419]: I0508 00:50:49.239427 1419 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:49.313335 kubelet[1419]: I0508 00:50:49.313135 1419 policy_none.go:49] "None policy: Start" May 8 00:50:49.314310 kubelet[1419]: I0508 00:50:49.314290 1419 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:50:49.314310 kubelet[1419]: I0508 00:50:49.314314 1419 state_mem.go:35] "Initializing new in-memory state store" May 8 00:50:49.320373 systemd[1]: Created slice kubepods.slice. May 8 00:50:49.322946 kubelet[1419]: E0508 00:50:49.322907 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:49.325620 systemd[1]: Created slice kubepods-burstable.slice. May 8 00:50:49.328669 systemd[1]: Created slice kubepods-besteffort.slice. 
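The container-manager dump a few entries back lists the kubelet's hard-eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). A small sketch that turns those thresholds into absolute numbers; the 8 GiB / 500k-inode capacities are made-up inputs, not this VM's:

```python
# Hard-eviction thresholds as listed in the NodeConfig dump above.
thresholds = {
    "memory.available":  ("quantity", 100 * 2**20),  # 100Mi
    "nodefs.available":  ("percent", 0.10),
    "nodefs.inodesFree": ("percent", 0.05),
    "imagefs.available": ("percent", 0.15),
    "imagefs.inodesFree": ("percent", 0.05),
}
# Hypothetical capacities, only to make the percentages concrete.
capacity = {
    "memory.available": 8 * 2**30, "nodefs.available": 8 * 2**30,
    "nodefs.inodesFree": 500_000, "imagefs.available": 8 * 2**30,
    "imagefs.inodesFree": 500_000,
}
for signal, (kind, value) in thresholds.items():
    limit = value if kind == "quantity" else value * capacity[signal]
    print(f"evict when {signal} < {limit:,.0f}")
```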
May 8 00:50:49.337979 kubelet[1419]: I0508 00:50:49.337952 1419 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:50:49.338249 kubelet[1419]: I0508 00:50:49.338224 1419 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:50:49.338354 kubelet[1419]: I0508 00:50:49.338317 1419 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:50:49.338775 kubelet[1419]: I0508 00:50:49.338752 1419 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:50:49.340283 kubelet[1419]: E0508 00:50:49.340263 1419 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.122\" not found" May 8 00:50:49.355306 kubelet[1419]: I0508 00:50:49.355260 1419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:50:49.356519 kubelet[1419]: I0508 00:50:49.356492 1419 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 8 00:50:49.356519 kubelet[1419]: I0508 00:50:49.356517 1419 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:50:49.356612 kubelet[1419]: I0508 00:50:49.356538 1419 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:50:49.356612 kubelet[1419]: E0508 00:50:49.356577 1419 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 8 00:50:49.439560 kubelet[1419]: I0508 00:50:49.439453 1419 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.122" May 8 00:50:49.447334 kubelet[1419]: I0508 00:50:49.447294 1419 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.122" May 8 00:50:49.447334 kubelet[1419]: E0508 00:50:49.447331 1419 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.122\": node \"10.0.0.122\" not found" May 8 00:50:49.456701 kubelet[1419]: E0508 00:50:49.456665 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:49.557517 kubelet[1419]: E0508 00:50:49.557463 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:49.657931 kubelet[1419]: E0508 00:50:49.657899 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:49.758871 kubelet[1419]: E0508 00:50:49.758773 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:49.824839 sudo[1314]: pam_unix(sudo:session): session closed for user root May 8 00:50:49.828503 sshd[1311]: pam_unix(sshd:session): session closed for user core May 8 00:50:49.830852 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:33560.service: Deactivated successfully. May 8 00:50:49.831522 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:50:49.832027 systemd-logind[1202]: Session 5 logged out. Waiting for processes to exit. May 8 00:50:49.832672 systemd-logind[1202]: Removed session 5. 
May 8 00:50:49.859241 kubelet[1419]: E0508 00:50:49.859204 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:49.959590 kubelet[1419]: E0508 00:50:49.959536 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:50.060716 kubelet[1419]: E0508 00:50:50.060602 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:50.129011 kubelet[1419]: I0508 00:50:50.128972 1419 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 8 00:50:50.129170 kubelet[1419]: W0508 00:50:50.129139 1419 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 00:50:50.129170 kubelet[1419]: W0508 00:50:50.129152 1419 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 00:50:50.129251 kubelet[1419]: W0508 00:50:50.129169 1419 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 8 00:50:50.161026 kubelet[1419]: E0508 00:50:50.161001 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:50.208390 kubelet[1419]: E0508 00:50:50.208359 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:50.261207 kubelet[1419]: E0508 00:50:50.261170 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:50.361835 kubelet[1419]: E0508 00:50:50.361754 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:50.462167 kubelet[1419]: E0508 00:50:50.462120 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:50.562560 kubelet[1419]: E0508 00:50:50.562525 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:50.663010 kubelet[1419]: E0508 00:50:50.662925 1419 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.122\" not found" May 8 00:50:50.763903 kubelet[1419]: I0508 00:50:50.763873 1419 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 8 00:50:50.764197 env[1215]: time="2025-05-08T00:50:50.764157773Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
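The last two entries are the kubelet and containerd agreeing on networking: the pod CIDR 192.168.1.0/24 is pushed to the runtime over CRI, and containerd then waits for a network plugin to drop a CNI config file before sandboxes get networking. An illustrative sketch of the shape of such a conflist, written to /tmp rather than /etc/cni/net.d because on this node Cilium generates its own file; the file name and plugin list here are assumptions, not what Cilium writes:

# Illustrative only: the shape of a CNI conflist a network plugin would drop
# into /etc/cni/net.d to end the "wait for other system components" state.
import json
from pathlib import Path

conflist = {
    "cniVersion": "0.3.1",
    "name": "example-net",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "192.168.1.0/24",  # the pod CIDR pushed via CRI above
            },
        }
    ],
}
Path("/tmp/10-example.conflist").write_text(json.dumps(conflist, indent=2))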
May 8 00:50:50.764438 kubelet[1419]: I0508 00:50:50.764326 1419 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 8 00:50:51.209155 kubelet[1419]: I0508 00:50:51.209121 1419 apiserver.go:52] "Watching apiserver" May 8 00:50:51.209267 kubelet[1419]: E0508 00:50:51.209178 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:51.218360 systemd[1]: Created slice kubepods-burstable-podcdf85a09_3abb_488b_b462_0b1a3dc25c83.slice. May 8 00:50:51.222227 kubelet[1419]: I0508 00:50:51.222206 1419 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:50:51.231139 kubelet[1419]: I0508 00:50:51.231082 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-run\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231139 kubelet[1419]: I0508 00:50:51.231134 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-cgroup\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231240 kubelet[1419]: I0508 00:50:51.231154 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-lib-modules\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231240 kubelet[1419]: I0508 00:50:51.231177 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-xtables-lock\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231240 kubelet[1419]: I0508 00:50:51.231192 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf85a09-3abb-488b-b462-0b1a3dc25c83-clustermesh-secrets\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231240 kubelet[1419]: I0508 00:50:51.231206 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-config-path\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231240 kubelet[1419]: I0508 00:50:51.231220 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/00a618a5-ee21-4770-8c09-7d9b8a2c1cb4-kube-proxy\") pod \"kube-proxy-wg7cs\" (UID: \"00a618a5-ee21-4770-8c09-7d9b8a2c1cb4\") " pod="kube-system/kube-proxy-wg7cs" May 8 00:50:51.231396 kubelet[1419]: I0508 00:50:51.231236 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/00a618a5-ee21-4770-8c09-7d9b8a2c1cb4-xtables-lock\") pod \"kube-proxy-wg7cs\" (UID: \"00a618a5-ee21-4770-8c09-7d9b8a2c1cb4\") " pod="kube-system/kube-proxy-wg7cs" May 8 00:50:51.231396 kubelet[1419]: I0508 00:50:51.231261 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-bpf-maps\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231396 kubelet[1419]: I0508 00:50:51.231274 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-hostproc\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231396 kubelet[1419]: I0508 00:50:51.231288 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00a618a5-ee21-4770-8c09-7d9b8a2c1cb4-lib-modules\") pod \"kube-proxy-wg7cs\" (UID: \"00a618a5-ee21-4770-8c09-7d9b8a2c1cb4\") " pod="kube-system/kube-proxy-wg7cs" May 8 00:50:51.231396 kubelet[1419]: I0508 00:50:51.231302 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbmz4\" (UniqueName: \"kubernetes.io/projected/00a618a5-ee21-4770-8c09-7d9b8a2c1cb4-kube-api-access-zbmz4\") pod \"kube-proxy-wg7cs\" (UID: \"00a618a5-ee21-4770-8c09-7d9b8a2c1cb4\") " pod="kube-system/kube-proxy-wg7cs" May 8 00:50:51.231396 kubelet[1419]: I0508 00:50:51.231327 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf85a09-3abb-488b-b462-0b1a3dc25c83-hubble-tls\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231523 kubelet[1419]: I0508 00:50:51.231348 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-host-proc-sys-net\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231523 kubelet[1419]: I0508 00:50:51.231363 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-host-proc-sys-kernel\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231523 kubelet[1419]: I0508 00:50:51.231388 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxn96\" (UniqueName: \"kubernetes.io/projected/cdf85a09-3abb-488b-b462-0b1a3dc25c83-kube-api-access-qxn96\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.231523 kubelet[1419]: I0508 00:50:51.231413 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cni-path\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " 
pod="kube-system/cilium-82vjw" May 8 00:50:51.231523 kubelet[1419]: I0508 00:50:51.231428 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-etc-cni-netd\") pod \"cilium-82vjw\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " pod="kube-system/cilium-82vjw" May 8 00:50:51.236953 systemd[1]: Created slice kubepods-besteffort-pod00a618a5_ee21_4770_8c09_7d9b8a2c1cb4.slice. May 8 00:50:51.333283 kubelet[1419]: I0508 00:50:51.333248 1419 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 8 00:50:51.535795 kubelet[1419]: E0508 00:50:51.535691 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:51.536792 env[1215]: time="2025-05-08T00:50:51.536717952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82vjw,Uid:cdf85a09-3abb-488b-b462-0b1a3dc25c83,Namespace:kube-system,Attempt:0,}" May 8 00:50:51.546707 kubelet[1419]: E0508 00:50:51.546680 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:51.547123 env[1215]: time="2025-05-08T00:50:51.547066455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wg7cs,Uid:00a618a5-ee21-4770-8c09-7d9b8a2c1cb4,Namespace:kube-system,Attempt:0,}" May 8 00:50:52.130428 env[1215]: time="2025-05-08T00:50:52.130157485Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:52.133021 env[1215]: time="2025-05-08T00:50:52.132990464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:52.135344 env[1215]: time="2025-05-08T00:50:52.135305369Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:52.137656 env[1215]: time="2025-05-08T00:50:52.137624792Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:52.139090 env[1215]: time="2025-05-08T00:50:52.139061083Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:52.140604 env[1215]: time="2025-05-08T00:50:52.140574714Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:52.142013 env[1215]: time="2025-05-08T00:50:52.141982502Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:52.146347 env[1215]: time="2025-05-08T00:50:52.146292207Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:52.186609 env[1215]: time="2025-05-08T00:50:52.186520791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:52.186609 env[1215]: time="2025-05-08T00:50:52.186575694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:52.186609 env[1215]: time="2025-05-08T00:50:52.186598290Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:52.186978 env[1215]: time="2025-05-08T00:50:52.186944518Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c0bf3d793d2a32006310dc7fda5edf01ef06735188c4fa7ecff28772c36e429b pid=1483 runtime=io.containerd.runc.v2 May 8 00:50:52.187172 env[1215]: time="2025-05-08T00:50:52.187118426Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:52.187267 env[1215]: time="2025-05-08T00:50:52.187227915Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:52.187267 env[1215]: time="2025-05-08T00:50:52.187247023Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:52.187527 env[1215]: time="2025-05-08T00:50:52.187489827Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4 pid=1482 runtime=io.containerd.runc.v2 May 8 00:50:52.209410 kubelet[1419]: E0508 00:50:52.209375 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:52.212642 systemd[1]: Started cri-containerd-c0bf3d793d2a32006310dc7fda5edf01ef06735188c4fa7ecff28772c36e429b.scope. May 8 00:50:52.215294 systemd[1]: Started cri-containerd-e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4.scope. 
May 8 00:50:52.255001 env[1215]: time="2025-05-08T00:50:52.254944069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wg7cs,Uid:00a618a5-ee21-4770-8c09-7d9b8a2c1cb4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0bf3d793d2a32006310dc7fda5edf01ef06735188c4fa7ecff28772c36e429b\"" May 8 00:50:52.256748 kubelet[1419]: E0508 00:50:52.256275 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:52.257952 env[1215]: time="2025-05-08T00:50:52.257917974Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 8 00:50:52.259192 env[1215]: time="2025-05-08T00:50:52.259159784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-82vjw,Uid:cdf85a09-3abb-488b-b462-0b1a3dc25c83,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\"" May 8 00:50:52.260009 kubelet[1419]: E0508 00:50:52.259848 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:52.339509 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4118306462.mount: Deactivated successfully. May 8 00:50:53.210320 kubelet[1419]: E0508 00:50:53.210285 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:53.236558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2021719251.mount: Deactivated successfully. May 8 00:50:53.683932 env[1215]: time="2025-05-08T00:50:53.683816390Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:53.685762 env[1215]: time="2025-05-08T00:50:53.685724508Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:53.686979 env[1215]: time="2025-05-08T00:50:53.686918034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:53.688087 env[1215]: time="2025-05-08T00:50:53.688046196Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:53.688539 env[1215]: time="2025-05-08T00:50:53.688501597Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 8 00:50:53.690160 env[1215]: time="2025-05-08T00:50:53.689979078Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:50:53.691465 env[1215]: time="2025-05-08T00:50:53.691428184Z" level=info msg="CreateContainer within sandbox \"c0bf3d793d2a32006310dc7fda5edf01ef06735188c4fa7ecff28772c36e429b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:50:53.703435 env[1215]: time="2025-05-08T00:50:53.703384716Z" level=info msg="CreateContainer within sandbox 
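The dns.go warnings appear because the host resolv.conf lists more nameservers than the resolver supports: only three are honoured, so the kubelet trims the list and logs which nameserver line it actually applied. A small sketch that reproduces the same check:

# Minimal sketch: reproduce the kubelet's nameserver-limit check. The Linux
# resolver honours at most three "nameserver" entries, hence the trimming above.
from pathlib import Path

MAX_NAMESERVERS = 3
servers = [line.split()[1]
           for line in Path("/etc/resolv.conf").read_text().splitlines()
           if line.startswith("nameserver") and len(line.split()) > 1]
kept, dropped = servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]
print("kept:   ", kept)
print("dropped:", dropped)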
\"c0bf3d793d2a32006310dc7fda5edf01ef06735188c4fa7ecff28772c36e429b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"310c99be19a6609bc1cd15f7bbc710a7f74ddd122547ed9c3ed78e66a4da7cdd\"" May 8 00:50:53.704051 env[1215]: time="2025-05-08T00:50:53.703993980Z" level=info msg="StartContainer for \"310c99be19a6609bc1cd15f7bbc710a7f74ddd122547ed9c3ed78e66a4da7cdd\"" May 8 00:50:53.723590 systemd[1]: Started cri-containerd-310c99be19a6609bc1cd15f7bbc710a7f74ddd122547ed9c3ed78e66a4da7cdd.scope. May 8 00:50:53.766505 env[1215]: time="2025-05-08T00:50:53.766445305Z" level=info msg="StartContainer for \"310c99be19a6609bc1cd15f7bbc710a7f74ddd122547ed9c3ed78e66a4da7cdd\" returns successfully" May 8 00:50:54.210448 kubelet[1419]: E0508 00:50:54.210413 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:54.366146 kubelet[1419]: E0508 00:50:54.366085 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:54.699812 systemd[1]: run-containerd-runc-k8s.io-310c99be19a6609bc1cd15f7bbc710a7f74ddd122547ed9c3ed78e66a4da7cdd-runc.8Z4kUQ.mount: Deactivated successfully. May 8 00:50:55.210545 kubelet[1419]: E0508 00:50:55.210500 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:55.367698 kubelet[1419]: E0508 00:50:55.367646 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:56.211218 kubelet[1419]: E0508 00:50:56.211171 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:57.211735 kubelet[1419]: E0508 00:50:57.211679 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:57.478407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3210900972.mount: Deactivated successfully. 
May 8 00:50:58.212011 kubelet[1419]: E0508 00:50:58.211955 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:59.212816 kubelet[1419]: E0508 00:50:59.212774 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:50:59.647885 env[1215]: time="2025-05-08T00:50:59.647762388Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:59.650968 env[1215]: time="2025-05-08T00:50:59.650925416Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:59.652754 env[1215]: time="2025-05-08T00:50:59.652711427Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:59.653376 env[1215]: time="2025-05-08T00:50:59.653329444Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 8 00:50:59.655769 env[1215]: time="2025-05-08T00:50:59.655736144Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:50:59.665674 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount808975568.mount: Deactivated successfully. May 8 00:50:59.668354 env[1215]: time="2025-05-08T00:50:59.668315354Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\"" May 8 00:50:59.668861 env[1215]: time="2025-05-08T00:50:59.668836752Z" level=info msg="StartContainer for \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\"" May 8 00:50:59.683979 systemd[1]: Started cri-containerd-566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b.scope. May 8 00:50:59.723554 env[1215]: time="2025-05-08T00:50:59.723176144Z" level=info msg="StartContainer for \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\" returns successfully" May 8 00:50:59.771756 systemd[1]: cri-containerd-566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b.scope: Deactivated successfully. 
May 8 00:50:59.912608 env[1215]: time="2025-05-08T00:50:59.912487450Z" level=info msg="shim disconnected" id=566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b May 8 00:50:59.912608 env[1215]: time="2025-05-08T00:50:59.912535679Z" level=warning msg="cleaning up after shim disconnected" id=566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b namespace=k8s.io May 8 00:50:59.912608 env[1215]: time="2025-05-08T00:50:59.912545485Z" level=info msg="cleaning up dead shim" May 8 00:50:59.919626 env[1215]: time="2025-05-08T00:50:59.919587970Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:50:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1767 runtime=io.containerd.runc.v2\n" May 8 00:51:00.213286 kubelet[1419]: E0508 00:51:00.213168 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:00.375159 kubelet[1419]: E0508 00:51:00.374991 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:00.376676 env[1215]: time="2025-05-08T00:51:00.376639226Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:51:00.389640 env[1215]: time="2025-05-08T00:51:00.389581764Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\"" May 8 00:51:00.390487 env[1215]: time="2025-05-08T00:51:00.390459293Z" level=info msg="StartContainer for \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\"" May 8 00:51:00.391469 kubelet[1419]: I0508 00:51:00.391400 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wg7cs" podStartSLOduration=9.959125625 podStartE2EDuration="11.391383877s" podCreationTimestamp="2025-05-08 00:50:49 +0000 UTC" firstStartedPulling="2025-05-08 00:50:52.25744961 +0000 UTC m=+3.952935888" lastFinishedPulling="2025-05-08 00:50:53.689707862 +0000 UTC m=+5.385194140" observedRunningTime="2025-05-08 00:50:54.383439544 +0000 UTC m=+6.078925901" watchObservedRunningTime="2025-05-08 00:51:00.391383877 +0000 UTC m=+12.086870155" May 8 00:51:00.403857 systemd[1]: Started cri-containerd-cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258.scope. May 8 00:51:00.432873 env[1215]: time="2025-05-08T00:51:00.432639152Z" level=info msg="StartContainer for \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\" returns successfully" May 8 00:51:00.450588 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:51:00.450788 systemd[1]: Stopped systemd-sysctl.service. May 8 00:51:00.450953 systemd[1]: Stopping systemd-sysctl.service... May 8 00:51:00.452356 systemd[1]: Starting systemd-sysctl.service... May 8 00:51:00.453996 systemd[1]: cri-containerd-cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258.scope: Deactivated successfully. May 8 00:51:00.460076 systemd[1]: Finished systemd-sysctl.service. 
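mount-cgroup and apply-sysctl-overwrites are Cilium init containers, so each "scope: Deactivated successfully" followed by a "shim disconnected" cleanup is a normal exit rather than a crash. A minimal sketch, assuming the kubernetes Python client, that prints the recorded exit codes of those init containers:

# Minimal sketch: read the init container statuses of the cilium pod.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
pod = v1.read_namespaced_pod(name="cilium-82vjw", namespace="kube-system")
for status in pod.status.init_container_statuses or []:
    term = status.state.terminated
    print(status.name, "exit code:", term.exit_code if term else "still running")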
May 8 00:51:00.471922 env[1215]: time="2025-05-08T00:51:00.471810665Z" level=info msg="shim disconnected" id=cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258 May 8 00:51:00.471922 env[1215]: time="2025-05-08T00:51:00.471855885Z" level=warning msg="cleaning up after shim disconnected" id=cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258 namespace=k8s.io May 8 00:51:00.471922 env[1215]: time="2025-05-08T00:51:00.471867290Z" level=info msg="cleaning up dead shim" May 8 00:51:00.477875 env[1215]: time="2025-05-08T00:51:00.477834497Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1833 runtime=io.containerd.runc.v2\n" May 8 00:51:00.663892 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b-rootfs.mount: Deactivated successfully. May 8 00:51:01.213944 kubelet[1419]: E0508 00:51:01.213896 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:01.377654 kubelet[1419]: E0508 00:51:01.377624 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:01.379524 env[1215]: time="2025-05-08T00:51:01.379485594Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:51:01.394453 env[1215]: time="2025-05-08T00:51:01.394393473Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\"" May 8 00:51:01.394970 env[1215]: time="2025-05-08T00:51:01.394944624Z" level=info msg="StartContainer for \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\"" May 8 00:51:01.414610 systemd[1]: Started cri-containerd-bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44.scope. May 8 00:51:01.443027 env[1215]: time="2025-05-08T00:51:01.442984182Z" level=info msg="StartContainer for \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\" returns successfully" May 8 00:51:01.455655 systemd[1]: cri-containerd-bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44.scope: Deactivated successfully. May 8 00:51:01.475687 env[1215]: time="2025-05-08T00:51:01.475585534Z" level=info msg="shim disconnected" id=bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44 May 8 00:51:01.475687 env[1215]: time="2025-05-08T00:51:01.475627421Z" level=warning msg="cleaning up after shim disconnected" id=bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44 namespace=k8s.io May 8 00:51:01.475687 env[1215]: time="2025-05-08T00:51:01.475636835Z" level=info msg="cleaning up dead shim" May 8 00:51:01.482564 env[1215]: time="2025-05-08T00:51:01.482526061Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1890 runtime=io.containerd.runc.v2\n" May 8 00:51:01.663968 systemd[1]: run-containerd-runc-k8s.io-bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44-runc.WNgsUp.mount: Deactivated successfully. 
May 8 00:51:01.664068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44-rootfs.mount: Deactivated successfully. May 8 00:51:02.214248 kubelet[1419]: E0508 00:51:02.214198 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:02.381290 kubelet[1419]: E0508 00:51:02.381122 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:02.382822 env[1215]: time="2025-05-08T00:51:02.382782810Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:51:02.392013 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1169559299.mount: Deactivated successfully. May 8 00:51:02.396022 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3526899099.mount: Deactivated successfully. May 8 00:51:02.400015 env[1215]: time="2025-05-08T00:51:02.399969271Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\"" May 8 00:51:02.400483 env[1215]: time="2025-05-08T00:51:02.400456998Z" level=info msg="StartContainer for \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\"" May 8 00:51:02.413337 systemd[1]: Started cri-containerd-53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7.scope. May 8 00:51:02.441420 systemd[1]: cri-containerd-53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7.scope: Deactivated successfully. 
May 8 00:51:02.442655 env[1215]: time="2025-05-08T00:51:02.442614202Z" level=info msg="StartContainer for \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\" returns successfully" May 8 00:51:02.461321 env[1215]: time="2025-05-08T00:51:02.461087462Z" level=info msg="shim disconnected" id=53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7 May 8 00:51:02.461549 env[1215]: time="2025-05-08T00:51:02.461528978Z" level=warning msg="cleaning up after shim disconnected" id=53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7 namespace=k8s.io May 8 00:51:02.461657 env[1215]: time="2025-05-08T00:51:02.461642151Z" level=info msg="cleaning up dead shim" May 8 00:51:02.467401 env[1215]: time="2025-05-08T00:51:02.467318695Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1947 runtime=io.containerd.runc.v2\n" May 8 00:51:03.215206 kubelet[1419]: E0508 00:51:03.215167 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:03.385396 kubelet[1419]: E0508 00:51:03.385363 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:03.387410 env[1215]: time="2025-05-08T00:51:03.387371698Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:51:03.400721 env[1215]: time="2025-05-08T00:51:03.400663299Z" level=info msg="CreateContainer within sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\"" May 8 00:51:03.401331 env[1215]: time="2025-05-08T00:51:03.401297587Z" level=info msg="StartContainer for \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\"" May 8 00:51:03.415618 systemd[1]: Started cri-containerd-fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714.scope. May 8 00:51:03.446766 env[1215]: time="2025-05-08T00:51:03.446717806Z" level=info msg="StartContainer for \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\" returns successfully" May 8 00:51:03.581213 kubelet[1419]: I0508 00:51:03.581060 1419 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 00:51:03.697141 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 8 00:51:03.934123 kernel: Initializing XFRM netlink socket May 8 00:51:03.936125 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
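With clean-cilium-state done, the cilium-agent container starts and the kernel prints its warning that unprivileged eBPF is enabled. A minimal sketch that checks the two host facts these messages refer to, the unprivileged-eBPF sysctl and the BPF filesystem that the earlier mount-bpf-fs step manages:

# Minimal sketch: check the unprivileged-eBPF sysctl and whether bpffs is mounted.
from pathlib import Path

knob = Path("/proc/sys/kernel/unprivileged_bpf_disabled")
value = knob.read_text().strip() if knob.exists() else "unknown"
print("kernel.unprivileged_bpf_disabled =", value, "(0 means unprivileged eBPF is allowed)")

mounts = Path("/proc/mounts").read_text()
print("bpffs mounted at /sys/fs/bpf:", "/sys/fs/bpf" in mounts)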
May 8 00:51:04.215504 kubelet[1419]: E0508 00:51:04.215378 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:04.388830 kubelet[1419]: E0508 00:51:04.388799 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:04.403545 kubelet[1419]: I0508 00:51:04.403299 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-82vjw" podStartSLOduration=8.009379502 podStartE2EDuration="15.403285327s" podCreationTimestamp="2025-05-08 00:50:49 +0000 UTC" firstStartedPulling="2025-05-08 00:50:52.260551874 +0000 UTC m=+3.956038112" lastFinishedPulling="2025-05-08 00:50:59.65445766 +0000 UTC m=+11.349943937" observedRunningTime="2025-05-08 00:51:04.40309012 +0000 UTC m=+16.098576358" watchObservedRunningTime="2025-05-08 00:51:04.403285327 +0000 UTC m=+16.098771685" May 8 00:51:05.216001 kubelet[1419]: E0508 00:51:05.215943 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:05.390095 kubelet[1419]: E0508 00:51:05.390059 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:05.550819 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 8 00:51:05.550922 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 8 00:51:05.549281 systemd-networkd[1038]: cilium_host: Link UP May 8 00:51:05.549685 systemd-networkd[1038]: cilium_net: Link UP May 8 00:51:05.550273 systemd-networkd[1038]: cilium_net: Gained carrier May 8 00:51:05.550776 systemd-networkd[1038]: cilium_host: Gained carrier May 8 00:51:05.616361 systemd[1]: Created slice kubepods-besteffort-pod43b8b30d_d6c4_4efa_806c_3be2473c765c.slice. 
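The pod_startup_latency_tracker entry for cilium-82vjw can be reproduced by hand: the SLO duration is the end-to-end startup time minus the time spent pulling images, using the monotonic m=+ offsets from the same line:

# Worked example from the pod_startup_latency_tracker entry above.
e2e = 15.403285327          # podStartE2EDuration, seconds
pull_start = 3.956038112    # firstStartedPulling, m=+ offset in seconds
pull_end = 11.349943937     # lastFinishedPulling, m=+ offset in seconds

slo = e2e - (pull_end - pull_start)
print(f"computed SLO duration: {slo:.9f} s (logged: 8.009379502 s)")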
May 8 00:51:05.638641 systemd-networkd[1038]: cilium_vxlan: Link UP May 8 00:51:05.638648 systemd-networkd[1038]: cilium_vxlan: Gained carrier May 8 00:51:05.710241 kubelet[1419]: I0508 00:51:05.710179 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67n7b\" (UniqueName: \"kubernetes.io/projected/43b8b30d-d6c4-4efa-806c-3be2473c765c-kube-api-access-67n7b\") pod \"nginx-deployment-8587fbcb89-ffh5l\" (UID: \"43b8b30d-d6c4-4efa-806c-3be2473c765c\") " pod="default/nginx-deployment-8587fbcb89-ffh5l" May 8 00:51:05.912297 systemd-networkd[1038]: cilium_net: Gained IPv6LL May 8 00:51:05.919530 env[1215]: time="2025-05-08T00:51:05.919483952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffh5l,Uid:43b8b30d-d6c4-4efa-806c-3be2473c765c,Namespace:default,Attempt:0,}" May 8 00:51:05.969131 kernel: NET: Registered PF_ALG protocol family May 8 00:51:06.216934 kubelet[1419]: E0508 00:51:06.216839 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:06.353226 systemd-networkd[1038]: cilium_host: Gained IPv6LL May 8 00:51:06.391157 kubelet[1419]: E0508 00:51:06.391125 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:06.512770 systemd-networkd[1038]: lxc_health: Link UP May 8 00:51:06.523798 systemd-networkd[1038]: lxc_health: Gained carrier May 8 00:51:06.524128 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 8 00:51:06.736242 systemd-networkd[1038]: cilium_vxlan: Gained IPv6LL May 8 00:51:06.975826 systemd-networkd[1038]: lxcb96be3ae7dac: Link UP May 8 00:51:06.988138 kernel: eth0: renamed from tmp35915 May 8 00:51:06.999655 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:51:06.999744 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb96be3ae7dac: link becomes ready May 8 00:51:06.999807 systemd-networkd[1038]: lxcb96be3ae7dac: Gained carrier May 8 00:51:07.217238 kubelet[1419]: E0508 00:51:07.217185 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:07.537517 kubelet[1419]: E0508 00:51:07.537463 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:07.760268 systemd-networkd[1038]: lxc_health: Gained IPv6LL May 8 00:51:08.217831 kubelet[1419]: E0508 00:51:08.217770 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:08.785317 systemd-networkd[1038]: lxcb96be3ae7dac: Gained IPv6LL May 8 00:51:09.208565 kubelet[1419]: E0508 00:51:09.208453 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:09.218171 kubelet[1419]: E0508 00:51:09.218133 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:10.219188 kubelet[1419]: E0508 00:51:10.219150 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:10.482236 env[1215]: time="2025-05-08T00:51:10.481896878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
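systemd-networkd is now reporting the Cilium datapath devices coming up: cilium_net and cilium_host, the cilium_vxlan overlay device, lxc_health, and a per-pod lxc* veth for the new nginx pod. A minimal sketch that lists those interfaces on the node:

# Minimal sketch: list the cilium_* and lxc* interfaces reported above.
import subprocess

links = subprocess.run(["ip", "-o", "link", "show"],
                       capture_output=True, text=True, check=True).stdout
for line in links.splitlines():
    name = line.split(":", 2)[1].strip().split("@")[0]
    if name.startswith(("cilium_", "lxc")):
        print(name)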
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:51:10.482559 env[1215]: time="2025-05-08T00:51:10.481938205Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:51:10.482559 env[1215]: time="2025-05-08T00:51:10.481969659Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:51:10.482638 env[1215]: time="2025-05-08T00:51:10.482607622Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/35915ac13e02035db64d7591e21d2c5b86350a7df47f7109d58fa935a49ddb07 pid=2484 runtime=io.containerd.runc.v2 May 8 00:51:10.492895 systemd[1]: Started cri-containerd-35915ac13e02035db64d7591e21d2c5b86350a7df47f7109d58fa935a49ddb07.scope. May 8 00:51:10.496186 systemd[1]: run-containerd-runc-k8s.io-35915ac13e02035db64d7591e21d2c5b86350a7df47f7109d58fa935a49ddb07-runc.L62eqF.mount: Deactivated successfully. May 8 00:51:10.557033 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:51:10.571330 env[1215]: time="2025-05-08T00:51:10.571279204Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffh5l,Uid:43b8b30d-d6c4-4efa-806c-3be2473c765c,Namespace:default,Attempt:0,} returns sandbox id \"35915ac13e02035db64d7591e21d2c5b86350a7df47f7109d58fa935a49ddb07\"" May 8 00:51:10.572940 env[1215]: time="2025-05-08T00:51:10.572910601Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 00:51:11.219861 kubelet[1419]: E0508 00:51:11.219802 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:12.220267 kubelet[1419]: E0508 00:51:12.220190 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:12.455420 kubelet[1419]: I0508 00:51:12.455257 1419 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 8 00:51:12.455743 kubelet[1419]: E0508 00:51:12.455712 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:12.792925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2339372768.mount: Deactivated successfully. 
May 8 00:51:13.220390 kubelet[1419]: E0508 00:51:13.220341 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:13.402011 kubelet[1419]: E0508 00:51:13.401968 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:14.037360 env[1215]: time="2025-05-08T00:51:14.037313562Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:14.038715 env[1215]: time="2025-05-08T00:51:14.038682112Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:14.043612 env[1215]: time="2025-05-08T00:51:14.043562584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:14.045259 env[1215]: time="2025-05-08T00:51:14.045229078Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:14.046207 env[1215]: time="2025-05-08T00:51:14.046170674Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 8 00:51:14.048399 env[1215]: time="2025-05-08T00:51:14.048354450Z" level=info msg="CreateContainer within sandbox \"35915ac13e02035db64d7591e21d2c5b86350a7df47f7109d58fa935a49ddb07\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 8 00:51:14.057010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount5557418.mount: Deactivated successfully. May 8 00:51:14.059346 env[1215]: time="2025-05-08T00:51:14.059305090Z" level=info msg="CreateContainer within sandbox \"35915ac13e02035db64d7591e21d2c5b86350a7df47f7109d58fa935a49ddb07\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"09fc6f88bf29b62a1798b7b198ba91ad9cd4ce6792fcdf8b812468399da2de45\"" May 8 00:51:14.059821 env[1215]: time="2025-05-08T00:51:14.059787929Z" level=info msg="StartContainer for \"09fc6f88bf29b62a1798b7b198ba91ad9cd4ce6792fcdf8b812468399da2de45\"" May 8 00:51:14.074759 systemd[1]: Started cri-containerd-09fc6f88bf29b62a1798b7b198ba91ad9cd4ce6792fcdf8b812468399da2de45.scope. 
May 8 00:51:14.104967 env[1215]: time="2025-05-08T00:51:14.104918919Z" level=info msg="StartContainer for \"09fc6f88bf29b62a1798b7b198ba91ad9cd4ce6792fcdf8b812468399da2de45\" returns successfully" May 8 00:51:14.220748 kubelet[1419]: E0508 00:51:14.220708 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:14.411939 kubelet[1419]: I0508 00:51:14.411883 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-ffh5l" podStartSLOduration=5.937182273 podStartE2EDuration="9.411867846s" podCreationTimestamp="2025-05-08 00:51:05 +0000 UTC" firstStartedPulling="2025-05-08 00:51:10.572399415 +0000 UTC m=+22.267885693" lastFinishedPulling="2025-05-08 00:51:14.047084988 +0000 UTC m=+25.742571266" observedRunningTime="2025-05-08 00:51:14.4117992 +0000 UTC m=+26.107285478" watchObservedRunningTime="2025-05-08 00:51:14.411867846 +0000 UTC m=+26.107354124" May 8 00:51:15.221885 kubelet[1419]: E0508 00:51:15.221825 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:16.222229 kubelet[1419]: E0508 00:51:16.222167 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:17.223198 kubelet[1419]: E0508 00:51:17.223138 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:17.710735 systemd[1]: Created slice kubepods-besteffort-poda2b92029_4965_4ffd_b7ee_416e5ec34b06.slice. May 8 00:51:17.775031 kubelet[1419]: I0508 00:51:17.774982 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/a2b92029-4965-4ffd-b7ee-416e5ec34b06-data\") pod \"nfs-server-provisioner-0\" (UID: \"a2b92029-4965-4ffd-b7ee-416e5ec34b06\") " pod="default/nfs-server-provisioner-0" May 8 00:51:17.775031 kubelet[1419]: I0508 00:51:17.775035 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4rhj6\" (UniqueName: \"kubernetes.io/projected/a2b92029-4965-4ffd-b7ee-416e5ec34b06-kube-api-access-4rhj6\") pod \"nfs-server-provisioner-0\" (UID: \"a2b92029-4965-4ffd-b7ee-416e5ec34b06\") " pod="default/nfs-server-provisioner-0" May 8 00:51:18.014404 env[1215]: time="2025-05-08T00:51:18.014027084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a2b92029-4965-4ffd-b7ee-416e5ec34b06,Namespace:default,Attempt:0,}" May 8 00:51:18.044084 systemd-networkd[1038]: lxc1166f522973c: Link UP May 8 00:51:18.056131 kernel: eth0: renamed from tmp5ce8b May 8 00:51:18.064577 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:51:18.064688 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1166f522973c: link becomes ready May 8 00:51:18.064841 systemd-networkd[1038]: lxc1166f522973c: Gained carrier May 8 00:51:18.200837 env[1215]: time="2025-05-08T00:51:18.200764225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:51:18.200977 env[1215]: time="2025-05-08T00:51:18.200849711Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:51:18.200977 env[1215]: time="2025-05-08T00:51:18.200877752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:51:18.201090 env[1215]: time="2025-05-08T00:51:18.201041923Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ce8b313d3c837cf267efa58c87e1b2bd8b44698aa3997dd66f5fc25e0fb5a9b pid=2617 runtime=io.containerd.runc.v2 May 8 00:51:18.215782 systemd[1]: Started cri-containerd-5ce8b313d3c837cf267efa58c87e1b2bd8b44698aa3997dd66f5fc25e0fb5a9b.scope. May 8 00:51:18.223504 kubelet[1419]: E0508 00:51:18.223436 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:18.238025 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:51:18.254883 env[1215]: time="2025-05-08T00:51:18.254835734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:a2b92029-4965-4ffd-b7ee-416e5ec34b06,Namespace:default,Attempt:0,} returns sandbox id \"5ce8b313d3c837cf267efa58c87e1b2bd8b44698aa3997dd66f5fc25e0fb5a9b\"" May 8 00:51:18.256742 env[1215]: time="2025-05-08T00:51:18.256619369Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 8 00:51:18.887017 systemd[1]: run-containerd-runc-k8s.io-5ce8b313d3c837cf267efa58c87e1b2bd8b44698aa3997dd66f5fc25e0fb5a9b-runc.eJurW5.mount: Deactivated successfully. May 8 00:51:19.224059 kubelet[1419]: E0508 00:51:19.224017 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:19.408294 systemd-networkd[1038]: lxc1166f522973c: Gained IPv6LL May 8 00:51:20.224872 kubelet[1419]: E0508 00:51:20.224830 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:20.555994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1386865608.mount: Deactivated successfully. 
May 8 00:51:21.225278 kubelet[1419]: E0508 00:51:21.225216 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:22.225843 kubelet[1419]: E0508 00:51:22.225802 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:22.368037 env[1215]: time="2025-05-08T00:51:22.367989873Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:22.369427 env[1215]: time="2025-05-08T00:51:22.369398946Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:22.370811 env[1215]: time="2025-05-08T00:51:22.370781217Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:22.372412 env[1215]: time="2025-05-08T00:51:22.372381940Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:22.373189 env[1215]: time="2025-05-08T00:51:22.373162940Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 8 00:51:22.375548 env[1215]: time="2025-05-08T00:51:22.375518142Z" level=info msg="CreateContainer within sandbox \"5ce8b313d3c837cf267efa58c87e1b2bd8b44698aa3997dd66f5fc25e0fb5a9b\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 8 00:51:22.383898 env[1215]: time="2025-05-08T00:51:22.383855652Z" level=info msg="CreateContainer within sandbox \"5ce8b313d3c837cf267efa58c87e1b2bd8b44698aa3997dd66f5fc25e0fb5a9b\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b2715368392ff4179e8bf6498275d4d6878fa2df1562fd4641938dee40ead3a1\"" May 8 00:51:22.384342 env[1215]: time="2025-05-08T00:51:22.384316476Z" level=info msg="StartContainer for \"b2715368392ff4179e8bf6498275d4d6878fa2df1562fd4641938dee40ead3a1\"" May 8 00:51:22.405185 systemd[1]: Started cri-containerd-b2715368392ff4179e8bf6498275d4d6878fa2df1562fd4641938dee40ead3a1.scope. May 8 00:51:22.450439 env[1215]: time="2025-05-08T00:51:22.450397405Z" level=info msg="StartContainer for \"b2715368392ff4179e8bf6498275d4d6878fa2df1562fd4641938dee40ead3a1\" returns successfully" May 8 00:51:23.226757 kubelet[1419]: E0508 00:51:23.226712 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:23.381276 systemd[1]: run-containerd-runc-k8s.io-b2715368392ff4179e8bf6498275d4d6878fa2df1562fd4641938dee40ead3a1-runc.BUM9ur.mount: Deactivated successfully. 
May 8 00:51:23.444589 kubelet[1419]: I0508 00:51:23.444389 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.326066574 podStartE2EDuration="6.444373684s" podCreationTimestamp="2025-05-08 00:51:17 +0000 UTC" firstStartedPulling="2025-05-08 00:51:18.256033611 +0000 UTC m=+29.951519889" lastFinishedPulling="2025-05-08 00:51:22.374340721 +0000 UTC m=+34.069826999" observedRunningTime="2025-05-08 00:51:23.444059549 +0000 UTC m=+35.139545867" watchObservedRunningTime="2025-05-08 00:51:23.444373684 +0000 UTC m=+35.139859962" May 8 00:51:24.227687 kubelet[1419]: E0508 00:51:24.227639 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:25.228325 kubelet[1419]: E0508 00:51:25.228282 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:26.229164 kubelet[1419]: E0508 00:51:26.229118 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:27.006581 update_engine[1207]: I0508 00:51:27.006150 1207 update_attempter.cc:509] Updating boot flags... May 8 00:51:27.230015 kubelet[1419]: E0508 00:51:27.229966 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:28.231989 kubelet[1419]: E0508 00:51:28.231930 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:29.209196 kubelet[1419]: E0508 00:51:29.209136 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:29.232413 kubelet[1419]: E0508 00:51:29.232384 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:30.232753 kubelet[1419]: E0508 00:51:30.232707 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:31.233548 kubelet[1419]: E0508 00:51:31.233509 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:32.234541 kubelet[1419]: E0508 00:51:32.234506 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:32.336012 systemd[1]: Created slice kubepods-besteffort-podf4614a99_cad8_4b44_9f0e_81e0461bd62e.slice. 
May 8 00:51:32.360445 kubelet[1419]: I0508 00:51:32.360417 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-8802e489-8fe7-4b40-a872-65e42f6a0a74\" (UniqueName: \"kubernetes.io/nfs/f4614a99-cad8-4b44-9f0e-81e0461bd62e-pvc-8802e489-8fe7-4b40-a872-65e42f6a0a74\") pod \"test-pod-1\" (UID: \"f4614a99-cad8-4b44-9f0e-81e0461bd62e\") " pod="default/test-pod-1" May 8 00:51:32.360614 kubelet[1419]: I0508 00:51:32.360596 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkh4p\" (UniqueName: \"kubernetes.io/projected/f4614a99-cad8-4b44-9f0e-81e0461bd62e-kube-api-access-hkh4p\") pod \"test-pod-1\" (UID: \"f4614a99-cad8-4b44-9f0e-81e0461bd62e\") " pod="default/test-pod-1" May 8 00:51:32.492252 kernel: FS-Cache: Loaded May 8 00:51:32.526147 kernel: RPC: Registered named UNIX socket transport module. May 8 00:51:32.526268 kernel: RPC: Registered udp transport module. May 8 00:51:32.526291 kernel: RPC: Registered tcp transport module. May 8 00:51:32.527130 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 8 00:51:32.576130 kernel: FS-Cache: Netfs 'nfs' registered for caching May 8 00:51:32.720764 kernel: NFS: Registering the id_resolver key type May 8 00:51:32.720893 kernel: Key type id_resolver registered May 8 00:51:32.720916 kernel: Key type id_legacy registered May 8 00:51:32.745830 nfsidmap[2752]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 00:51:32.751260 nfsidmap[2755]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 8 00:51:32.938602 env[1215]: time="2025-05-08T00:51:32.938520601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f4614a99-cad8-4b44-9f0e-81e0461bd62e,Namespace:default,Attempt:0,}" May 8 00:51:32.964988 systemd-networkd[1038]: lxcabea1ae4f8d6: Link UP May 8 00:51:32.978135 kernel: eth0: renamed from tmpee94e May 8 00:51:32.983306 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:51:32.983377 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcabea1ae4f8d6: link becomes ready May 8 00:51:32.983381 systemd-networkd[1038]: lxcabea1ae4f8d6: Gained carrier May 8 00:51:33.155305 env[1215]: time="2025-05-08T00:51:33.155234531Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:51:33.155456 env[1215]: time="2025-05-08T00:51:33.155313773Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:51:33.155456 env[1215]: time="2025-05-08T00:51:33.155341614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:51:33.155585 env[1215]: time="2025-05-08T00:51:33.155517499Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ee94e5cf97203c6cbbec1855a9ac674c27e33594d3ee1c48263b0bf9a38f6ea2 pid=2789 runtime=io.containerd.runc.v2 May 8 00:51:33.166807 systemd[1]: Started cri-containerd-ee94e5cf97203c6cbbec1855a9ac674c27e33594d3ee1c48263b0bf9a38f6ea2.scope. 
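The nfsidmap warnings earlier in this block fire because NFSv4 id mapping compares the domain part of the owner string against the local idmapd domain: 'root@nfs-server-provisioner.default.svc.cluster.local' does not end in the configured domain 'localdomain', so the name cannot be mapped (such owners are typically squashed to nobody). A rough Python sketch of that check, simplified from what the real nfsidmap nss plugin does:

```python
def maps_into_domain(owner: str, local_domain: str) -> bool:
    # NFSv4 owner strings look like "user@domain"; the nss plugin only
    # resolves them when the domain part matches the local idmapd domain.
    user, _, domain = owner.partition("@")
    return bool(user) and domain.lower() == local_domain.lower()

print(maps_into_domain(
    "root@nfs-server-provisioner.default.svc.cluster.local", "localdomain"))
# False -> "does not map into domain 'localdomain'", as logged above
```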
May 8 00:51:33.207634 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:51:33.225589 env[1215]: time="2025-05-08T00:51:33.225526753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f4614a99-cad8-4b44-9f0e-81e0461bd62e,Namespace:default,Attempt:0,} returns sandbox id \"ee94e5cf97203c6cbbec1855a9ac674c27e33594d3ee1c48263b0bf9a38f6ea2\"" May 8 00:51:33.226915 env[1215]: time="2025-05-08T00:51:33.226877994Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 8 00:51:33.235741 kubelet[1419]: E0508 00:51:33.235694 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:33.522705 env[1215]: time="2025-05-08T00:51:33.522337032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:33.523763 env[1215]: time="2025-05-08T00:51:33.523733033Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:33.529118 env[1215]: time="2025-05-08T00:51:33.526054023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:33.529197 env[1215]: time="2025-05-08T00:51:33.529139355Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 8 00:51:33.529728 env[1215]: time="2025-05-08T00:51:33.529703812Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:33.532224 env[1215]: time="2025-05-08T00:51:33.532176366Z" level=info msg="CreateContainer within sandbox \"ee94e5cf97203c6cbbec1855a9ac674c27e33594d3ee1c48263b0bf9a38f6ea2\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 8 00:51:33.544267 env[1215]: time="2025-05-08T00:51:33.544216646Z" level=info msg="CreateContainer within sandbox \"ee94e5cf97203c6cbbec1855a9ac674c27e33594d3ee1c48263b0bf9a38f6ea2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"116230b9f7c9b9f5dc3ea8f1343b3ccb35a5512d83e901c155c0f4dc250859d1\"" May 8 00:51:33.544760 env[1215]: time="2025-05-08T00:51:33.544721461Z" level=info msg="StartContainer for \"116230b9f7c9b9f5dc3ea8f1343b3ccb35a5512d83e901c155c0f4dc250859d1\"" May 8 00:51:33.568553 systemd[1]: Started cri-containerd-116230b9f7c9b9f5dc3ea8f1343b3ccb35a5512d83e901c155c0f4dc250859d1.scope. 
May 8 00:51:33.606079 env[1215]: time="2025-05-08T00:51:33.605965533Z" level=info msg="StartContainer for \"116230b9f7c9b9f5dc3ea8f1343b3ccb35a5512d83e901c155c0f4dc250859d1\" returns successfully" May 8 00:51:34.236176 kubelet[1419]: E0508 00:51:34.236083 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:34.256371 systemd-networkd[1038]: lxcabea1ae4f8d6: Gained IPv6LL May 8 00:51:35.236329 kubelet[1419]: E0508 00:51:35.236243 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:36.236966 kubelet[1419]: E0508 00:51:36.236926 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:37.238457 kubelet[1419]: E0508 00:51:37.238398 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:38.239435 kubelet[1419]: E0508 00:51:38.239399 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:39.240678 kubelet[1419]: E0508 00:51:39.240630 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:40.241243 kubelet[1419]: E0508 00:51:40.241206 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:40.954452 kubelet[1419]: I0508 00:51:40.954342 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=23.650282249 podStartE2EDuration="23.954325624s" podCreationTimestamp="2025-05-08 00:51:17 +0000 UTC" firstStartedPulling="2025-05-08 00:51:33.226693868 +0000 UTC m=+44.922180106" lastFinishedPulling="2025-05-08 00:51:33.530737203 +0000 UTC m=+45.226223481" observedRunningTime="2025-05-08 00:51:34.468866132 +0000 UTC m=+46.164352410" watchObservedRunningTime="2025-05-08 00:51:40.954325624 +0000 UTC m=+52.649811902" May 8 00:51:40.968989 systemd[1]: run-containerd-runc-k8s.io-fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714-runc.v7pJRq.mount: Deactivated successfully. May 8 00:51:40.989125 env[1215]: time="2025-05-08T00:51:40.988889435Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:51:40.993415 env[1215]: time="2025-05-08T00:51:40.993380095Z" level=info msg="StopContainer for \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\" with timeout 2 (s)" May 8 00:51:40.993688 env[1215]: time="2025-05-08T00:51:40.993656741Z" level=info msg="Stop container \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\" with signal terminated" May 8 00:51:40.998742 systemd-networkd[1038]: lxc_health: Link DOWN May 8 00:51:40.998751 systemd-networkd[1038]: lxc_health: Lost carrier May 8 00:51:41.040425 systemd[1]: cri-containerd-fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714.scope: Deactivated successfully. May 8 00:51:41.040738 systemd[1]: cri-containerd-fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714.scope: Consumed 6.328s CPU time. 
May 8 00:51:41.054800 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714-rootfs.mount: Deactivated successfully. May 8 00:51:41.208636 env[1215]: time="2025-05-08T00:51:41.208512997Z" level=info msg="shim disconnected" id=fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714 May 8 00:51:41.208636 env[1215]: time="2025-05-08T00:51:41.208557758Z" level=warning msg="cleaning up after shim disconnected" id=fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714 namespace=k8s.io May 8 00:51:41.208636 env[1215]: time="2025-05-08T00:51:41.208568518Z" level=info msg="cleaning up dead shim" May 8 00:51:41.215211 env[1215]: time="2025-05-08T00:51:41.215169979Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2921 runtime=io.containerd.runc.v2\n" May 8 00:51:41.218797 env[1215]: time="2025-05-08T00:51:41.218762897Z" level=info msg="StopContainer for \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\" returns successfully" May 8 00:51:41.219434 env[1215]: time="2025-05-08T00:51:41.219408910Z" level=info msg="StopPodSandbox for \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\"" May 8 00:51:41.219497 env[1215]: time="2025-05-08T00:51:41.219470312Z" level=info msg="Container to stop \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.219497 env[1215]: time="2025-05-08T00:51:41.219486272Z" level=info msg="Container to stop \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.219556 env[1215]: time="2025-05-08T00:51:41.219497672Z" level=info msg="Container to stop \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.219556 env[1215]: time="2025-05-08T00:51:41.219509953Z" level=info msg="Container to stop \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.219556 env[1215]: time="2025-05-08T00:51:41.219519873Z" level=info msg="Container to stop \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.221154 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4-shm.mount: Deactivated successfully. May 8 00:51:41.226153 systemd[1]: cri-containerd-e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4.scope: Deactivated successfully. May 8 00:51:41.242623 kubelet[1419]: E0508 00:51:41.242487 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:41.244723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4-rootfs.mount: Deactivated successfully. 
May 8 00:51:41.247682 env[1215]: time="2025-05-08T00:51:41.247626516Z" level=info msg="shim disconnected" id=e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4 May 8 00:51:41.247682 env[1215]: time="2025-05-08T00:51:41.247682557Z" level=warning msg="cleaning up after shim disconnected" id=e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4 namespace=k8s.io May 8 00:51:41.247823 env[1215]: time="2025-05-08T00:51:41.247691957Z" level=info msg="cleaning up dead shim" May 8 00:51:41.254256 env[1215]: time="2025-05-08T00:51:41.254204537Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2952 runtime=io.containerd.runc.v2\n" May 8 00:51:41.254538 env[1215]: time="2025-05-08T00:51:41.254501383Z" level=info msg="TearDown network for sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" successfully" May 8 00:51:41.254538 env[1215]: time="2025-05-08T00:51:41.254528144Z" level=info msg="StopPodSandbox for \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" returns successfully" May 8 00:51:41.410983 kubelet[1419]: I0508 00:51:41.410488 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-xtables-lock\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.410983 kubelet[1419]: I0508 00:51:41.410531 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qxn96\" (UniqueName: \"kubernetes.io/projected/cdf85a09-3abb-488b-b462-0b1a3dc25c83-kube-api-access-qxn96\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.410983 kubelet[1419]: I0508 00:51:41.410550 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-hostproc\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.410983 kubelet[1419]: I0508 00:51:41.410622 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.410983 kubelet[1419]: I0508 00:51:41.410666 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-hostproc" (OuterVolumeSpecName: "hostproc") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.410983 kubelet[1419]: I0508 00:51:41.410566 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf85a09-3abb-488b-b462-0b1a3dc25c83-hubble-tls\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411308 kubelet[1419]: I0508 00:51:41.410698 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-host-proc-sys-net\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411308 kubelet[1419]: I0508 00:51:41.410999 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-etc-cni-netd\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411308 kubelet[1419]: I0508 00:51:41.411015 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-run\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411308 kubelet[1419]: I0508 00:51:41.411048 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.411308 kubelet[1419]: I0508 00:51:41.411069 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.411443 kubelet[1419]: I0508 00:51:41.411093 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf85a09-3abb-488b-b462-0b1a3dc25c83-clustermesh-secrets\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411443 kubelet[1419]: I0508 00:51:41.411129 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.411443 kubelet[1419]: I0508 00:51:41.411153 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-cgroup\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411443 kubelet[1419]: I0508 00:51:41.411168 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-bpf-maps\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411443 kubelet[1419]: I0508 00:51:41.411182 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-host-proc-sys-kernel\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411558 kubelet[1419]: I0508 00:51:41.411207 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.411558 kubelet[1419]: I0508 00:51:41.411421 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cni-path\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411558 kubelet[1419]: I0508 00:51:41.411446 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-lib-modules\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411558 kubelet[1419]: I0508 00:51:41.411464 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-config-path\") pod \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\" (UID: \"cdf85a09-3abb-488b-b462-0b1a3dc25c83\") " May 8 00:51:41.411558 kubelet[1419]: I0508 00:51:41.411523 1419 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-xtables-lock\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.411558 kubelet[1419]: I0508 00:51:41.411528 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.411710 kubelet[1419]: I0508 00:51:41.411558 1419 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-host-proc-sys-net\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.411710 kubelet[1419]: I0508 00:51:41.411571 1419 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-etc-cni-netd\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.411710 kubelet[1419]: I0508 00:51:41.411579 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-run\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.411710 kubelet[1419]: I0508 00:51:41.411579 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.411710 kubelet[1419]: I0508 00:51:41.411590 1419 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-hostproc\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.411710 kubelet[1419]: I0508 00:51:41.411598 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-cgroup\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.411710 kubelet[1419]: I0508 00:51:41.411602 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cni-path" (OuterVolumeSpecName: "cni-path") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.411867 kubelet[1419]: I0508 00:51:41.411619 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.413519 kubelet[1419]: I0508 00:51:41.413457 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:51:41.414279 kubelet[1419]: I0508 00:51:41.414250 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf85a09-3abb-488b-b462-0b1a3dc25c83-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:51:41.414594 kubelet[1419]: I0508 00:51:41.414568 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cdf85a09-3abb-488b-b462-0b1a3dc25c83-kube-api-access-qxn96" (OuterVolumeSpecName: "kube-api-access-qxn96") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "kube-api-access-qxn96". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:51:41.415577 kubelet[1419]: I0508 00:51:41.415543 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cdf85a09-3abb-488b-b462-0b1a3dc25c83-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cdf85a09-3abb-488b-b462-0b1a3dc25c83" (UID: "cdf85a09-3abb-488b-b462-0b1a3dc25c83"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:51:41.475844 kubelet[1419]: I0508 00:51:41.474734 1419 scope.go:117] "RemoveContainer" containerID="fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714" May 8 00:51:41.479457 systemd[1]: Removed slice kubepods-burstable-podcdf85a09_3abb_488b_b462_0b1a3dc25c83.slice. May 8 00:51:41.479542 systemd[1]: kubepods-burstable-podcdf85a09_3abb_488b_b462_0b1a3dc25c83.slice: Consumed 6.526s CPU time. May 8 00:51:41.480967 env[1215]: time="2025-05-08T00:51:41.480929721Z" level=info msg="RemoveContainer for \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\"" May 8 00:51:41.483869 env[1215]: time="2025-05-08T00:51:41.483818103Z" level=info msg="RemoveContainer for \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\" returns successfully" May 8 00:51:41.484220 kubelet[1419]: I0508 00:51:41.484193 1419 scope.go:117] "RemoveContainer" containerID="53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7" May 8 00:51:41.485243 env[1215]: time="2025-05-08T00:51:41.485213013Z" level=info msg="RemoveContainer for \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\"" May 8 00:51:41.487449 env[1215]: time="2025-05-08T00:51:41.487409580Z" level=info msg="RemoveContainer for \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\" returns successfully" May 8 00:51:41.487588 kubelet[1419]: I0508 00:51:41.487567 1419 scope.go:117] "RemoveContainer" containerID="bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44" May 8 00:51:41.488973 env[1215]: time="2025-05-08T00:51:41.488934893Z" level=info msg="RemoveContainer for \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\"" May 8 00:51:41.491246 env[1215]: time="2025-05-08T00:51:41.491206861Z" level=info msg="RemoveContainer for \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\" returns successfully" May 8 00:51:41.491445 kubelet[1419]: I0508 00:51:41.491418 1419 scope.go:117] "RemoveContainer" containerID="cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258" May 8 00:51:41.492563 env[1215]: time="2025-05-08T00:51:41.492533210Z" level=info msg="RemoveContainer for \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\"" May 8 00:51:41.494927 env[1215]: time="2025-05-08T00:51:41.494892941Z" level=info msg="RemoveContainer for \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\" returns successfully" May 8 00:51:41.495068 kubelet[1419]: I0508 00:51:41.495045 1419 scope.go:117] "RemoveContainer" 
containerID="566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b" May 8 00:51:41.498939 env[1215]: time="2025-05-08T00:51:41.498911427Z" level=info msg="RemoveContainer for \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\"" May 8 00:51:41.501317 env[1215]: time="2025-05-08T00:51:41.501284318Z" level=info msg="RemoveContainer for \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\" returns successfully" May 8 00:51:41.501569 kubelet[1419]: I0508 00:51:41.501540 1419 scope.go:117] "RemoveContainer" containerID="fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714" May 8 00:51:41.501826 env[1215]: time="2025-05-08T00:51:41.501749128Z" level=error msg="ContainerStatus for \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\": not found" May 8 00:51:41.501965 kubelet[1419]: E0508 00:51:41.501942 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\": not found" containerID="fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714" May 8 00:51:41.502043 kubelet[1419]: I0508 00:51:41.501977 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714"} err="failed to get container status \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\": rpc error: code = NotFound desc = an error occurred when try to find container \"fc32d656c791bf726a052d9b40cf6e104c5648028f1d75919671406fc65f1714\": not found" May 8 00:51:41.502089 kubelet[1419]: I0508 00:51:41.502045 1419 scope.go:117] "RemoveContainer" containerID="53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7" May 8 00:51:41.502291 env[1215]: time="2025-05-08T00:51:41.502246618Z" level=error msg="ContainerStatus for \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\": not found" May 8 00:51:41.502705 kubelet[1419]: E0508 00:51:41.502683 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\": not found" containerID="53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7" May 8 00:51:41.502739 kubelet[1419]: I0508 00:51:41.502714 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7"} err="failed to get container status \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\": rpc error: code = NotFound desc = an error occurred when try to find container \"53c09c781967cf299dff8f4c815d6a7d70fc351750b35f2505d10da09ea4a6f7\": not found" May 8 00:51:41.502739 kubelet[1419]: I0508 00:51:41.502732 1419 scope.go:117] "RemoveContainer" containerID="bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44" May 8 00:51:41.502912 env[1215]: time="2025-05-08T00:51:41.502867872Z" level=error msg="ContainerStatus for 
\"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\": not found" May 8 00:51:41.503032 kubelet[1419]: E0508 00:51:41.503014 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\": not found" containerID="bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44" May 8 00:51:41.503069 kubelet[1419]: I0508 00:51:41.503037 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44"} err="failed to get container status \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\": rpc error: code = NotFound desc = an error occurred when try to find container \"bba4f75fdf85d8375cc91092fbaafeb42919336fffa69c53c14f334da6b28f44\": not found" May 8 00:51:41.503098 kubelet[1419]: I0508 00:51:41.503070 1419 scope.go:117] "RemoveContainer" containerID="cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258" May 8 00:51:41.503274 env[1215]: time="2025-05-08T00:51:41.503222319Z" level=error msg="ContainerStatus for \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\": not found" May 8 00:51:41.503378 kubelet[1419]: E0508 00:51:41.503361 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\": not found" containerID="cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258" May 8 00:51:41.503423 kubelet[1419]: I0508 00:51:41.503382 1419 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258"} err="failed to get container status \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\": rpc error: code = NotFound desc = an error occurred when try to find container \"cc4999ac022aeae4476e793b9d17aa2ce0d091cd4c5dee269315c55345f1b258\": not found" May 8 00:51:41.503423 kubelet[1419]: I0508 00:51:41.503396 1419 scope.go:117] "RemoveContainer" containerID="566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b" May 8 00:51:41.503590 env[1215]: time="2025-05-08T00:51:41.503542886Z" level=error msg="ContainerStatus for \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\": not found" May 8 00:51:41.503718 kubelet[1419]: E0508 00:51:41.503694 1419 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\": not found" containerID="566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b" May 8 00:51:41.503750 kubelet[1419]: I0508 00:51:41.503729 1419 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b"} err="failed to get container status \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\": rpc error: code = NotFound desc = an error occurred when try to find container \"566ee8fed6c582c740d10897bd0dadc8a93e145ff8e4d07ff030d7bc09285c8b\": not found" May 8 00:51:41.511987 kubelet[1419]: I0508 00:51:41.511960 1419 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-qxn96\" (UniqueName: \"kubernetes.io/projected/cdf85a09-3abb-488b-b462-0b1a3dc25c83-kube-api-access-qxn96\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.511987 kubelet[1419]: I0508 00:51:41.511987 1419 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cdf85a09-3abb-488b-b462-0b1a3dc25c83-hubble-tls\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.512066 kubelet[1419]: I0508 00:51:41.511997 1419 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cdf85a09-3abb-488b-b462-0b1a3dc25c83-clustermesh-secrets\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.512066 kubelet[1419]: I0508 00:51:41.512006 1419 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-bpf-maps\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.512066 kubelet[1419]: I0508 00:51:41.512015 1419 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-host-proc-sys-kernel\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.512066 kubelet[1419]: I0508 00:51:41.512022 1419 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cni-path\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.512066 kubelet[1419]: I0508 00:51:41.512029 1419 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdf85a09-3abb-488b-b462-0b1a3dc25c83-lib-modules\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.512066 kubelet[1419]: I0508 00:51:41.512036 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cdf85a09-3abb-488b-b462-0b1a3dc25c83-cilium-config-path\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:41.966889 systemd[1]: var-lib-kubelet-pods-cdf85a09\x2d3abb\x2d488b\x2db462\x2d0b1a3dc25c83-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqxn96.mount: Deactivated successfully. May 8 00:51:41.966987 systemd[1]: var-lib-kubelet-pods-cdf85a09\x2d3abb\x2d488b\x2db462\x2d0b1a3dc25c83-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:51:41.967048 systemd[1]: var-lib-kubelet-pods-cdf85a09\x2d3abb\x2d488b\x2db462\x2d0b1a3dc25c83-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 8 00:51:42.243474 kubelet[1419]: E0508 00:51:42.243161 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:43.243914 kubelet[1419]: E0508 00:51:43.243863 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:43.360081 kubelet[1419]: I0508 00:51:43.360034 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cdf85a09-3abb-488b-b462-0b1a3dc25c83" path="/var/lib/kubelet/pods/cdf85a09-3abb-488b-b462-0b1a3dc25c83/volumes" May 8 00:51:43.750051 kubelet[1419]: E0508 00:51:43.750013 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf85a09-3abb-488b-b462-0b1a3dc25c83" containerName="clean-cilium-state" May 8 00:51:43.750051 kubelet[1419]: E0508 00:51:43.750048 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf85a09-3abb-488b-b462-0b1a3dc25c83" containerName="mount-cgroup" May 8 00:51:43.750051 kubelet[1419]: E0508 00:51:43.750054 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf85a09-3abb-488b-b462-0b1a3dc25c83" containerName="apply-sysctl-overwrites" May 8 00:51:43.750250 kubelet[1419]: E0508 00:51:43.750061 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf85a09-3abb-488b-b462-0b1a3dc25c83" containerName="cilium-agent" May 8 00:51:43.750250 kubelet[1419]: E0508 00:51:43.750067 1419 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="cdf85a09-3abb-488b-b462-0b1a3dc25c83" containerName="mount-bpf-fs" May 8 00:51:43.750250 kubelet[1419]: I0508 00:51:43.750089 1419 memory_manager.go:354] "RemoveStaleState removing state" podUID="cdf85a09-3abb-488b-b462-0b1a3dc25c83" containerName="cilium-agent" May 8 00:51:43.757176 systemd[1]: Created slice kubepods-burstable-podd2e2485b_99c2_43e7_beb2_eb4db0afc3b4.slice. May 8 00:51:43.780634 systemd[1]: Created slice kubepods-besteffort-pod266d97f7_5658_489e_9f09_2aaa33ba8d02.slice. 
May 8 00:51:43.922861 kubelet[1419]: E0508 00:51:43.922813 1419 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-5wsgv lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-gkd5c" podUID="d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" May 8 00:51:43.927315 kubelet[1419]: I0508 00:51:43.927290 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-host-proc-sys-kernel\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.927453 kubelet[1419]: I0508 00:51:43.927437 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-run\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.927556 kubelet[1419]: I0508 00:51:43.927544 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-hostproc\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.927655 kubelet[1419]: I0508 00:51:43.927634 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-cgroup\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.927748 kubelet[1419]: I0508 00:51:43.927735 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-lib-modules\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.927873 kubelet[1419]: I0508 00:51:43.927859 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-ipsec-secrets\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.927968 kubelet[1419]: I0508 00:51:43.927954 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5gdhh\" (UniqueName: \"kubernetes.io/projected/266d97f7-5658-489e-9f09-2aaa33ba8d02-kube-api-access-5gdhh\") pod \"cilium-operator-5d85765b45-qjpmn\" (UID: \"266d97f7-5658-489e-9f09-2aaa33ba8d02\") " pod="kube-system/cilium-operator-5d85765b45-qjpmn" May 8 00:51:43.928068 kubelet[1419]: I0508 00:51:43.928053 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-bpf-maps\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 
00:51:43.928171 kubelet[1419]: I0508 00:51:43.928158 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-config-path\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.928267 kubelet[1419]: I0508 00:51:43.928254 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-host-proc-sys-net\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.928358 kubelet[1419]: I0508 00:51:43.928346 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/266d97f7-5658-489e-9f09-2aaa33ba8d02-cilium-config-path\") pod \"cilium-operator-5d85765b45-qjpmn\" (UID: \"266d97f7-5658-489e-9f09-2aaa33ba8d02\") " pod="kube-system/cilium-operator-5d85765b45-qjpmn" May 8 00:51:43.928450 kubelet[1419]: I0508 00:51:43.928438 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cni-path\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.928529 kubelet[1419]: I0508 00:51:43.928517 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wsgv\" (UniqueName: \"kubernetes.io/projected/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-kube-api-access-5wsgv\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.928624 kubelet[1419]: I0508 00:51:43.928610 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-etc-cni-netd\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.928722 kubelet[1419]: I0508 00:51:43.928710 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-xtables-lock\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.928844 kubelet[1419]: I0508 00:51:43.928830 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-clustermesh-secrets\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:43.928964 kubelet[1419]: I0508 00:51:43.928948 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-hubble-tls\") pod \"cilium-gkd5c\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " pod="kube-system/cilium-gkd5c" May 8 00:51:44.084710 kubelet[1419]: E0508 00:51:44.083966 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:44.084834 env[1215]: time="2025-05-08T00:51:44.084684402Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qjpmn,Uid:266d97f7-5658-489e-9f09-2aaa33ba8d02,Namespace:kube-system,Attempt:0,}" May 8 00:51:44.099716 env[1215]: time="2025-05-08T00:51:44.099554288Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:51:44.099716 env[1215]: time="2025-05-08T00:51:44.099600889Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:51:44.099716 env[1215]: time="2025-05-08T00:51:44.099615449Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:51:44.099891 env[1215]: time="2025-05-08T00:51:44.099803853Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/de4529ad3b623aa9cea067cc6071d74146ed5654747149bd1d4aeada970e034c pid=2979 runtime=io.containerd.runc.v2 May 8 00:51:44.113243 systemd[1]: Started cri-containerd-de4529ad3b623aa9cea067cc6071d74146ed5654747149bd1d4aeada970e034c.scope. May 8 00:51:44.166702 env[1215]: time="2025-05-08T00:51:44.166649699Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-qjpmn,Uid:266d97f7-5658-489e-9f09-2aaa33ba8d02,Namespace:kube-system,Attempt:0,} returns sandbox id \"de4529ad3b623aa9cea067cc6071d74146ed5654747149bd1d4aeada970e034c\"" May 8 00:51:44.167401 kubelet[1419]: E0508 00:51:44.167379 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:44.168449 env[1215]: time="2025-05-08T00:51:44.168415493Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:51:44.244681 kubelet[1419]: E0508 00:51:44.244642 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:44.353906 kubelet[1419]: E0508 00:51:44.353516 1419 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:51:44.535395 kubelet[1419]: I0508 00:51:44.535351 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-clustermesh-secrets\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535395 kubelet[1419]: I0508 00:51:44.535390 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-hostproc\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535540 kubelet[1419]: I0508 00:51:44.535421 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-cgroup\") pod 
\"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535540 kubelet[1419]: I0508 00:51:44.535440 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-hubble-tls\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535540 kubelet[1419]: I0508 00:51:44.535455 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-run\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535540 kubelet[1419]: I0508 00:51:44.535472 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-host-proc-sys-net\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535540 kubelet[1419]: I0508 00:51:44.535497 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-etc-cni-netd\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535540 kubelet[1419]: I0508 00:51:44.535511 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-xtables-lock\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535687 kubelet[1419]: I0508 00:51:44.535529 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-lib-modules\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535687 kubelet[1419]: I0508 00:51:44.535547 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-config-path\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535687 kubelet[1419]: I0508 00:51:44.535570 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-bpf-maps\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535687 kubelet[1419]: I0508 00:51:44.535586 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cni-path\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535687 kubelet[1419]: I0508 00:51:44.535601 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wsgv\" (UniqueName: \"kubernetes.io/projected/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-kube-api-access-5wsgv\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: 
\"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535687 kubelet[1419]: I0508 00:51:44.535615 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-host-proc-sys-kernel\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535836 kubelet[1419]: I0508 00:51:44.535631 1419 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-ipsec-secrets\") pod \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\" (UID: \"d2e2485b-99c2-43e7-beb2-eb4db0afc3b4\") " May 8 00:51:44.535942 kubelet[1419]: I0508 00:51:44.535919 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.537156 kubelet[1419]: I0508 00:51:44.535944 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.537156 kubelet[1419]: I0508 00:51:44.536008 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.537156 kubelet[1419]: I0508 00:51:44.536023 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.537156 kubelet[1419]: I0508 00:51:44.536039 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.537156 kubelet[1419]: I0508 00:51:44.536456 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cni-path" (OuterVolumeSpecName: "cni-path") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.537344 kubelet[1419]: I0508 00:51:44.536543 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-hostproc" (OuterVolumeSpecName: "hostproc") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.537344 kubelet[1419]: I0508 00:51:44.536592 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.537344 kubelet[1419]: I0508 00:51:44.536606 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.537344 kubelet[1419]: I0508 00:51:44.536620 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:44.538358 kubelet[1419]: I0508 00:51:44.538330 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:51:44.538714 kubelet[1419]: I0508 00:51:44.538669 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:51:44.538846 kubelet[1419]: I0508 00:51:44.538818 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:51:44.539178 kubelet[1419]: I0508 00:51:44.539147 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:51:44.540929 kubelet[1419]: I0508 00:51:44.540904 1419 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-kube-api-access-5wsgv" (OuterVolumeSpecName: "kube-api-access-5wsgv") pod "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" (UID: "d2e2485b-99c2-43e7-beb2-eb4db0afc3b4"). InnerVolumeSpecName "kube-api-access-5wsgv". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:51:44.637030 kubelet[1419]: I0508 00:51:44.636279 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-config-path\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637030 kubelet[1419]: I0508 00:51:44.637023 1419 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-host-proc-sys-net\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637205 kubelet[1419]: I0508 00:51:44.637042 1419 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-etc-cni-netd\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637205 kubelet[1419]: I0508 00:51:44.637053 1419 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-xtables-lock\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637205 kubelet[1419]: I0508 00:51:44.637061 1419 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-lib-modules\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637205 kubelet[1419]: I0508 00:51:44.637070 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-ipsec-secrets\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637205 kubelet[1419]: I0508 00:51:44.637078 1419 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-bpf-maps\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637205 kubelet[1419]: I0508 00:51:44.637086 1419 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cni-path\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637205 kubelet[1419]: I0508 00:51:44.637093 1419 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5wsgv\" (UniqueName: \"kubernetes.io/projected/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-kube-api-access-5wsgv\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637205 kubelet[1419]: I0508 00:51:44.637116 1419 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-host-proc-sys-kernel\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637378 kubelet[1419]: I0508 00:51:44.637128 1419 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-clustermesh-secrets\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637378 kubelet[1419]: I0508 
00:51:44.637136 1419 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-hostproc\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637378 kubelet[1419]: I0508 00:51:44.637144 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-cgroup\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637378 kubelet[1419]: I0508 00:51:44.637151 1419 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-hubble-tls\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:44.637378 kubelet[1419]: I0508 00:51:44.637158 1419 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4-cilium-run\") on node \"10.0.0.122\" DevicePath \"\"" May 8 00:51:45.034705 systemd[1]: var-lib-kubelet-pods-d2e2485b\x2d99c2\x2d43e7\x2dbeb2\x2deb4db0afc3b4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5wsgv.mount: Deactivated successfully. May 8 00:51:45.034808 systemd[1]: var-lib-kubelet-pods-d2e2485b\x2d99c2\x2d43e7\x2dbeb2\x2deb4db0afc3b4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:51:45.034869 systemd[1]: var-lib-kubelet-pods-d2e2485b\x2d99c2\x2d43e7\x2dbeb2\x2deb4db0afc3b4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:51:45.034919 systemd[1]: var-lib-kubelet-pods-d2e2485b\x2d99c2\x2d43e7\x2dbeb2\x2deb4db0afc3b4-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 8 00:51:45.244927 kubelet[1419]: E0508 00:51:45.244810 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:45.365577 systemd[1]: Removed slice kubepods-burstable-podd2e2485b_99c2_43e7_beb2_eb4db0afc3b4.slice. May 8 00:51:45.482577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2492338670.mount: Deactivated successfully. May 8 00:51:45.550372 systemd[1]: Created slice kubepods-burstable-pod58f4f744_a052_49fa_afce_3f174230e511.slice. 
May 8 00:51:45.643347 kubelet[1419]: I0508 00:51:45.643218 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/58f4f744-a052-49fa-afce-3f174230e511-cilium-config-path\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643347 kubelet[1419]: I0508 00:51:45.643268 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-host-proc-sys-net\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643347 kubelet[1419]: I0508 00:51:45.643289 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-bpf-maps\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643347 kubelet[1419]: I0508 00:51:45.643306 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-cilium-cgroup\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643347 kubelet[1419]: I0508 00:51:45.643321 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-lib-modules\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643347 kubelet[1419]: I0508 00:51:45.643337 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-xtables-lock\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643576 kubelet[1419]: I0508 00:51:45.643351 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/58f4f744-a052-49fa-afce-3f174230e511-clustermesh-secrets\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643576 kubelet[1419]: I0508 00:51:45.643366 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-cilium-run\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643576 kubelet[1419]: I0508 00:51:45.643381 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-hostproc\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643576 kubelet[1419]: I0508 00:51:45.643394 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/58f4f744-a052-49fa-afce-3f174230e511-cilium-ipsec-secrets\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643576 kubelet[1419]: I0508 00:51:45.643409 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/58f4f744-a052-49fa-afce-3f174230e511-hubble-tls\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643576 kubelet[1419]: I0508 00:51:45.643424 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-cni-path\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643707 kubelet[1419]: I0508 00:51:45.643440 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-etc-cni-netd\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643707 kubelet[1419]: I0508 00:51:45.643458 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f4m56\" (UniqueName: \"kubernetes.io/projected/58f4f744-a052-49fa-afce-3f174230e511-kube-api-access-f4m56\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.643707 kubelet[1419]: I0508 00:51:45.643474 1419 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/58f4f744-a052-49fa-afce-3f174230e511-host-proc-sys-kernel\") pod \"cilium-8tt68\" (UID: \"58f4f744-a052-49fa-afce-3f174230e511\") " pod="kube-system/cilium-8tt68" May 8 00:51:45.866907 kubelet[1419]: E0508 00:51:45.866838 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:45.867384 env[1215]: time="2025-05-08T00:51:45.867337800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8tt68,Uid:58f4f744-a052-49fa-afce-3f174230e511,Namespace:kube-system,Attempt:0,}" May 8 00:51:45.878218 env[1215]: time="2025-05-08T00:51:45.878154481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:51:45.878386 env[1215]: time="2025-05-08T00:51:45.878195122Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:51:45.878386 env[1215]: time="2025-05-08T00:51:45.878205842Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:51:45.878386 env[1215]: time="2025-05-08T00:51:45.878335765Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354 pid=3030 runtime=io.containerd.runc.v2 May 8 00:51:45.889153 systemd[1]: Started cri-containerd-01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354.scope. 
May 8 00:51:45.917786 env[1215]: time="2025-05-08T00:51:45.917731058Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8tt68,Uid:58f4f744-a052-49fa-afce-3f174230e511,Namespace:kube-system,Attempt:0,} returns sandbox id \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\"" May 8 00:51:45.918615 kubelet[1419]: E0508 00:51:45.918592 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:45.920977 env[1215]: time="2025-05-08T00:51:45.920926437Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:51:45.932801 env[1215]: time="2025-05-08T00:51:45.932712936Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0c97ce6125b0299defa7e134333de939fe8a7432de6edf37f4245fe33edd46d2\"" May 8 00:51:45.933349 env[1215]: time="2025-05-08T00:51:45.933320388Z" level=info msg="StartContainer for \"0c97ce6125b0299defa7e134333de939fe8a7432de6edf37f4245fe33edd46d2\"" May 8 00:51:45.948417 systemd[1]: Started cri-containerd-0c97ce6125b0299defa7e134333de939fe8a7432de6edf37f4245fe33edd46d2.scope. May 8 00:51:45.984244 env[1215]: time="2025-05-08T00:51:45.983976690Z" level=info msg="StartContainer for \"0c97ce6125b0299defa7e134333de939fe8a7432de6edf37f4245fe33edd46d2\" returns successfully" May 8 00:51:45.991265 systemd[1]: cri-containerd-0c97ce6125b0299defa7e134333de939fe8a7432de6edf37f4245fe33edd46d2.scope: Deactivated successfully. May 8 00:51:46.050251 env[1215]: time="2025-05-08T00:51:46.050207093Z" level=info msg="shim disconnected" id=0c97ce6125b0299defa7e134333de939fe8a7432de6edf37f4245fe33edd46d2 May 8 00:51:46.050489 env[1215]: time="2025-05-08T00:51:46.050470337Z" level=warning msg="cleaning up after shim disconnected" id=0c97ce6125b0299defa7e134333de939fe8a7432de6edf37f4245fe33edd46d2 namespace=k8s.io May 8 00:51:46.050549 env[1215]: time="2025-05-08T00:51:46.050536659Z" level=info msg="cleaning up dead shim" May 8 00:51:46.056902 env[1215]: time="2025-05-08T00:51:46.056867893Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3116 runtime=io.containerd.runc.v2\n" May 8 00:51:46.185264 env[1215]: time="2025-05-08T00:51:46.185151482Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:46.186611 env[1215]: time="2025-05-08T00:51:46.186570707Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:46.188167 env[1215]: time="2025-05-08T00:51:46.188142816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:51:46.188586 env[1215]: time="2025-05-08T00:51:46.188554743Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 00:51:46.190696 env[1215]: time="2025-05-08T00:51:46.190665301Z" level=info msg="CreateContainer within sandbox \"de4529ad3b623aa9cea067cc6071d74146ed5654747149bd1d4aeada970e034c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:51:46.202447 env[1215]: time="2025-05-08T00:51:46.202410513Z" level=info msg="CreateContainer within sandbox \"de4529ad3b623aa9cea067cc6071d74146ed5654747149bd1d4aeada970e034c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e3bd0b4549aa8c2141057452eaadcba80c8398f5bb273d385f9b779a1ed09994\"" May 8 00:51:46.202873 env[1215]: time="2025-05-08T00:51:46.202848281Z" level=info msg="StartContainer for \"e3bd0b4549aa8c2141057452eaadcba80c8398f5bb273d385f9b779a1ed09994\"" May 8 00:51:46.220882 systemd[1]: Started cri-containerd-e3bd0b4549aa8c2141057452eaadcba80c8398f5bb273d385f9b779a1ed09994.scope. May 8 00:51:46.245514 kubelet[1419]: E0508 00:51:46.245465 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:46.252559 env[1215]: time="2025-05-08T00:51:46.252514055Z" level=info msg="StartContainer for \"e3bd0b4549aa8c2141057452eaadcba80c8398f5bb273d385f9b779a1ed09994\" returns successfully" May 8 00:51:46.487452 kubelet[1419]: E0508 00:51:46.487136 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:46.489662 kubelet[1419]: E0508 00:51:46.489138 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:46.490928 env[1215]: time="2025-05-08T00:51:46.490887466Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:51:46.508229 env[1215]: time="2025-05-08T00:51:46.508174537Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a27005ab010ee4a0d2121e746df625f8fa3178c612a11409877adf9fd0873675\"" May 8 00:51:46.508727 env[1215]: time="2025-05-08T00:51:46.508698266Z" level=info msg="StartContainer for \"a27005ab010ee4a0d2121e746df625f8fa3178c612a11409877adf9fd0873675\"" May 8 00:51:46.516671 kubelet[1419]: I0508 00:51:46.516604 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-qjpmn" podStartSLOduration=1.495121854 podStartE2EDuration="3.516586888s" podCreationTimestamp="2025-05-08 00:51:43 +0000 UTC" firstStartedPulling="2025-05-08 00:51:44.168009206 +0000 UTC m=+55.863495484" lastFinishedPulling="2025-05-08 00:51:46.18947424 +0000 UTC m=+57.884960518" observedRunningTime="2025-05-08 00:51:46.499651983 +0000 UTC m=+58.195138261" watchObservedRunningTime="2025-05-08 00:51:46.516586888 +0000 UTC m=+58.212073166" May 8 00:51:46.523144 systemd[1]: Started cri-containerd-a27005ab010ee4a0d2121e746df625f8fa3178c612a11409877adf9fd0873675.scope. 
May 8 00:51:46.577925 env[1215]: time="2025-05-08T00:51:46.577876472Z" level=info msg="StartContainer for \"a27005ab010ee4a0d2121e746df625f8fa3178c612a11409877adf9fd0873675\" returns successfully" May 8 00:51:46.589986 systemd[1]: cri-containerd-a27005ab010ee4a0d2121e746df625f8fa3178c612a11409877adf9fd0873675.scope: Deactivated successfully. May 8 00:51:46.616530 env[1215]: time="2025-05-08T00:51:46.616478806Z" level=info msg="shim disconnected" id=a27005ab010ee4a0d2121e746df625f8fa3178c612a11409877adf9fd0873675 May 8 00:51:46.616870 env[1215]: time="2025-05-08T00:51:46.616842893Z" level=warning msg="cleaning up after shim disconnected" id=a27005ab010ee4a0d2121e746df625f8fa3178c612a11409877adf9fd0873675 namespace=k8s.io May 8 00:51:46.616962 env[1215]: time="2025-05-08T00:51:46.616946135Z" level=info msg="cleaning up dead shim" May 8 00:51:46.623258 env[1215]: time="2025-05-08T00:51:46.623223008Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3215 runtime=io.containerd.runc.v2\n" May 8 00:51:47.034216 systemd[1]: run-containerd-runc-k8s.io-e3bd0b4549aa8c2141057452eaadcba80c8398f5bb273d385f9b779a1ed09994-runc.No3MbU.mount: Deactivated successfully. May 8 00:51:47.246641 kubelet[1419]: E0508 00:51:47.246590 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:47.360198 kubelet[1419]: I0508 00:51:47.359967 1419 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d2e2485b-99c2-43e7-beb2-eb4db0afc3b4" path="/var/lib/kubelet/pods/d2e2485b-99c2-43e7-beb2-eb4db0afc3b4/volumes" May 8 00:51:47.492135 kubelet[1419]: E0508 00:51:47.492069 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:47.492292 kubelet[1419]: E0508 00:51:47.492207 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:47.494388 env[1215]: time="2025-05-08T00:51:47.494331251Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:51:47.505971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4272604696.mount: Deactivated successfully. May 8 00:51:47.510479 env[1215]: time="2025-05-08T00:51:47.509805401Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"00a392a572518f7c4f6c3d22823b27f839ac6a10f5f4771ff62798ffcffd472d\"" May 8 00:51:47.510642 env[1215]: time="2025-05-08T00:51:47.510597175Z" level=info msg="StartContainer for \"00a392a572518f7c4f6c3d22823b27f839ac6a10f5f4771ff62798ffcffd472d\"" May 8 00:51:47.527009 systemd[1]: Started cri-containerd-00a392a572518f7c4f6c3d22823b27f839ac6a10f5f4771ff62798ffcffd472d.scope. May 8 00:51:47.565431 env[1215]: time="2025-05-08T00:51:47.565386570Z" level=info msg="StartContainer for \"00a392a572518f7c4f6c3d22823b27f839ac6a10f5f4771ff62798ffcffd472d\" returns successfully" May 8 00:51:47.567489 systemd[1]: cri-containerd-00a392a572518f7c4f6c3d22823b27f839ac6a10f5f4771ff62798ffcffd472d.scope: Deactivated successfully. 
May 8 00:51:47.588923 env[1215]: time="2025-05-08T00:51:47.588878300Z" level=info msg="shim disconnected" id=00a392a572518f7c4f6c3d22823b27f839ac6a10f5f4771ff62798ffcffd472d May 8 00:51:47.588923 env[1215]: time="2025-05-08T00:51:47.588922860Z" level=warning msg="cleaning up after shim disconnected" id=00a392a572518f7c4f6c3d22823b27f839ac6a10f5f4771ff62798ffcffd472d namespace=k8s.io May 8 00:51:47.589146 env[1215]: time="2025-05-08T00:51:47.588931661Z" level=info msg="cleaning up dead shim" May 8 00:51:47.595064 env[1215]: time="2025-05-08T00:51:47.595026087Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3271 runtime=io.containerd.runc.v2\n" May 8 00:51:48.034298 systemd[1]: run-containerd-runc-k8s.io-00a392a572518f7c4f6c3d22823b27f839ac6a10f5f4771ff62798ffcffd472d-runc.uOspgl.mount: Deactivated successfully. May 8 00:51:48.034402 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-00a392a572518f7c4f6c3d22823b27f839ac6a10f5f4771ff62798ffcffd472d-rootfs.mount: Deactivated successfully. May 8 00:51:48.247197 kubelet[1419]: E0508 00:51:48.247154 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:48.496155 kubelet[1419]: E0508 00:51:48.496063 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:48.497952 env[1215]: time="2025-05-08T00:51:48.497913727Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:51:48.510640 env[1215]: time="2025-05-08T00:51:48.510591821Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"2df78f183b136ab0a97b113f22b6346716fff22a162853addb56d7c8806f0a36\"" May 8 00:51:48.511322 env[1215]: time="2025-05-08T00:51:48.511280313Z" level=info msg="StartContainer for \"2df78f183b136ab0a97b113f22b6346716fff22a162853addb56d7c8806f0a36\"" May 8 00:51:48.529978 systemd[1]: Started cri-containerd-2df78f183b136ab0a97b113f22b6346716fff22a162853addb56d7c8806f0a36.scope. May 8 00:51:48.570684 systemd[1]: cri-containerd-2df78f183b136ab0a97b113f22b6346716fff22a162853addb56d7c8806f0a36.scope: Deactivated successfully. 
May 8 00:51:48.571361 env[1215]: time="2025-05-08T00:51:48.571054244Z" level=info msg="StartContainer for \"2df78f183b136ab0a97b113f22b6346716fff22a162853addb56d7c8806f0a36\" returns successfully" May 8 00:51:48.589017 env[1215]: time="2025-05-08T00:51:48.588969426Z" level=info msg="shim disconnected" id=2df78f183b136ab0a97b113f22b6346716fff22a162853addb56d7c8806f0a36 May 8 00:51:48.589017 env[1215]: time="2025-05-08T00:51:48.589016227Z" level=warning msg="cleaning up after shim disconnected" id=2df78f183b136ab0a97b113f22b6346716fff22a162853addb56d7c8806f0a36 namespace=k8s.io May 8 00:51:48.589017 env[1215]: time="2025-05-08T00:51:48.589025507Z" level=info msg="cleaning up dead shim" May 8 00:51:48.595470 env[1215]: time="2025-05-08T00:51:48.595425335Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3327 runtime=io.containerd.runc.v2\n" May 8 00:51:49.034396 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2df78f183b136ab0a97b113f22b6346716fff22a162853addb56d7c8806f0a36-rootfs.mount: Deactivated successfully. May 8 00:51:49.208211 kubelet[1419]: E0508 00:51:49.208171 1419 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:49.221005 env[1215]: time="2025-05-08T00:51:49.220965123Z" level=info msg="StopPodSandbox for \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\"" May 8 00:51:49.221138 env[1215]: time="2025-05-08T00:51:49.221058085Z" level=info msg="TearDown network for sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" successfully" May 8 00:51:49.221138 env[1215]: time="2025-05-08T00:51:49.221091005Z" level=info msg="StopPodSandbox for \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" returns successfully" May 8 00:51:49.221508 env[1215]: time="2025-05-08T00:51:49.221481211Z" level=info msg="RemovePodSandbox for \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\"" May 8 00:51:49.221547 env[1215]: time="2025-05-08T00:51:49.221514452Z" level=info msg="Forcibly stopping sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\"" May 8 00:51:49.221599 env[1215]: time="2025-05-08T00:51:49.221583293Z" level=info msg="TearDown network for sandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" successfully" May 8 00:51:49.226020 env[1215]: time="2025-05-08T00:51:49.225977325Z" level=info msg="RemovePodSandbox \"e1e7ca1552f184e0984bc486bb93b6735d206425bcd776cda6628bd172d357b4\" returns successfully" May 8 00:51:49.247640 kubelet[1419]: E0508 00:51:49.247598 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:49.354963 kubelet[1419]: E0508 00:51:49.354871 1419 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 8 00:51:49.500578 kubelet[1419]: E0508 00:51:49.500544 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:49.502825 env[1215]: time="2025-05-08T00:51:49.502520422Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 
00:51:49.515328 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989629814.mount: Deactivated successfully. May 8 00:51:49.520278 env[1215]: time="2025-05-08T00:51:49.520213472Z" level=info msg="CreateContainer within sandbox \"01de7df07bc05c35716073dc6e24bdbb7e859254b146aa54db4deea36b73e354\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"34e0664b7e6b5b3dd68e037bc81b31630713efd802d158dad26b6575298ada06\"" May 8 00:51:49.520792 env[1215]: time="2025-05-08T00:51:49.520745601Z" level=info msg="StartContainer for \"34e0664b7e6b5b3dd68e037bc81b31630713efd802d158dad26b6575298ada06\"" May 8 00:51:49.538001 systemd[1]: Started cri-containerd-34e0664b7e6b5b3dd68e037bc81b31630713efd802d158dad26b6575298ada06.scope. May 8 00:51:49.577739 env[1215]: time="2025-05-08T00:51:49.577683135Z" level=info msg="StartContainer for \"34e0664b7e6b5b3dd68e037bc81b31630713efd802d158dad26b6575298ada06\" returns successfully" May 8 00:51:49.826265 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 8 00:51:50.034429 systemd[1]: run-containerd-runc-k8s.io-34e0664b7e6b5b3dd68e037bc81b31630713efd802d158dad26b6575298ada06-runc.y068T5.mount: Deactivated successfully. May 8 00:51:50.248330 kubelet[1419]: E0508 00:51:50.248260 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:50.505063 kubelet[1419]: E0508 00:51:50.504950 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:50.518983 kubelet[1419]: I0508 00:51:50.518927 1419 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-8tt68" podStartSLOduration=5.5189139350000005 podStartE2EDuration="5.518913935s" podCreationTimestamp="2025-05-08 00:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:51:50.518803654 +0000 UTC m=+62.214289892" watchObservedRunningTime="2025-05-08 00:51:50.518913935 +0000 UTC m=+62.214400213" May 8 00:51:50.886582 kubelet[1419]: I0508 00:51:50.886208 1419 setters.go:600] "Node became not ready" node="10.0.0.122" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:51:50Z","lastTransitionTime":"2025-05-08T00:51:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 8 00:51:51.249171 kubelet[1419]: E0508 00:51:51.249125 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:51.868705 kubelet[1419]: E0508 00:51:51.868668 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:52.222572 systemd[1]: run-containerd-runc-k8s.io-34e0664b7e6b5b3dd68e037bc81b31630713efd802d158dad26b6575298ada06-runc.iRbH8S.mount: Deactivated successfully. 
May 8 00:51:52.250279 kubelet[1419]: E0508 00:51:52.250226 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:52.590067 systemd-networkd[1038]: lxc_health: Link UP May 8 00:51:52.610615 systemd-networkd[1038]: lxc_health: Gained carrier May 8 00:51:52.611161 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 8 00:51:53.250524 kubelet[1419]: E0508 00:51:53.250466 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:53.713351 systemd-networkd[1038]: lxc_health: Gained IPv6LL May 8 00:51:53.868752 kubelet[1419]: E0508 00:51:53.868668 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:54.251581 kubelet[1419]: E0508 00:51:54.251547 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:54.358079 systemd[1]: run-containerd-runc-k8s.io-34e0664b7e6b5b3dd68e037bc81b31630713efd802d158dad26b6575298ada06-runc.C3KsL9.mount: Deactivated successfully. May 8 00:51:54.511624 kubelet[1419]: E0508 00:51:54.511312 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:55.252909 kubelet[1419]: E0508 00:51:55.252850 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:55.512520 kubelet[1419]: E0508 00:51:55.512405 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:56.253629 kubelet[1419]: E0508 00:51:56.253578 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:57.254098 kubelet[1419]: E0508 00:51:57.254050 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:57.361048 kubelet[1419]: E0508 00:51:57.358465 1419 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:51:58.254513 kubelet[1419]: E0508 00:51:58.254473 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:51:59.254657 kubelet[1419]: E0508 00:51:59.254615 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 8 00:52:00.255541 kubelet[1419]: E0508 00:52:00.255504 1419 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"