May 16 00:54:10.721991 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 00:54:10.722010 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 23:21:39 -00 2025
May 16 00:54:10.722018 kernel: efi: EFI v2.70 by EDK II
May 16 00:54:10.722024 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 16 00:54:10.722028 kernel: random: crng init done
May 16 00:54:10.722034 kernel: ACPI: Early table checksum verification disabled
May 16 00:54:10.722040 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 16 00:54:10.722047 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 00:54:10.722053 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:54:10.722058 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:54:10.722063 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:54:10.722068 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:54:10.722073 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:54:10.722079 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:54:10.722086 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:54:10.722092 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:54:10.722098 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:54:10.722104 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 00:54:10.722110 kernel: NUMA: Failed to initialise from firmware
May 16 00:54:10.722115 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:54:10.722121 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff]
May 16 00:54:10.722127 kernel: Zone ranges:
May 16 00:54:10.722132 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:54:10.722139 kernel: DMA32 empty
May 16 00:54:10.722145 kernel: Normal empty
May 16 00:54:10.722150 kernel: Movable zone start for each node
May 16 00:54:10.722156 kernel: Early memory node ranges
May 16 00:54:10.722161 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 16 00:54:10.722167 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 16 00:54:10.722173 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 16 00:54:10.722178 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 16 00:54:10.722184 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 16 00:54:10.722189 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 16 00:54:10.722195 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 16 00:54:10.722200 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:54:10.722207 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 00:54:10.722213 kernel: psci: probing for conduit method from ACPI.
May 16 00:54:10.722218 kernel: psci: PSCIv1.1 detected in firmware.
May 16 00:54:10.722224 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 00:54:10.722229 kernel: psci: Trusted OS migration not required
May 16 00:54:10.722237 kernel: psci: SMC Calling Convention v1.1
May 16 00:54:10.722244 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 00:54:10.722251 kernel: ACPI: SRAT not present
May 16 00:54:10.722257 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 16 00:54:10.722263 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 16 00:54:10.722270 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 00:54:10.722276 kernel: Detected PIPT I-cache on CPU0
May 16 00:54:10.722282 kernel: CPU features: detected: GIC system register CPU interface
May 16 00:54:10.722288 kernel: CPU features: detected: Hardware dirty bit management
May 16 00:54:10.722294 kernel: CPU features: detected: Spectre-v4
May 16 00:54:10.722300 kernel: CPU features: detected: Spectre-BHB
May 16 00:54:10.722307 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 00:54:10.722313 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 00:54:10.722319 kernel: CPU features: detected: ARM erratum 1418040
May 16 00:54:10.722325 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 00:54:10.722331 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 16 00:54:10.722337 kernel: Policy zone: DMA
May 16 00:54:10.722344 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8
May 16 00:54:10.722350 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:54:10.722356 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:54:10.722362 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:54:10.722368 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:54:10.722376 kernel: Memory: 2457332K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114956K reserved, 0K cma-reserved)
May 16 00:54:10.722382 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:54:10.722388 kernel: trace event string verifier disabled
May 16 00:54:10.722394 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:54:10.722400 kernel: rcu: RCU event tracing is enabled.
May 16 00:54:10.722407 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:54:10.722413 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:54:10.722419 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:54:10.722425 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:54:10.722431 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:54:10.722438 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 00:54:10.722445 kernel: GICv3: 256 SPIs implemented
May 16 00:54:10.722451 kernel: GICv3: 0 Extended SPIs implemented
May 16 00:54:10.722457 kernel: GICv3: Distributor has no Range Selector support
May 16 00:54:10.722463 kernel: Root IRQ handler: gic_handle_irq
May 16 00:54:10.722469 kernel: GICv3: 16 PPIs implemented
May 16 00:54:10.722475 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 00:54:10.722481 kernel: ACPI: SRAT not present
May 16 00:54:10.722486 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 00:54:10.722492 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 16 00:54:10.722499 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 16 00:54:10.722505 kernel: GICv3: using LPI property table @0x00000000400d0000
May 16 00:54:10.722511 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 16 00:54:10.722519 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:54:10.722525 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 00:54:10.722548 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 00:54:10.722555 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 00:54:10.722561 kernel: arm-pv: using stolen time PV
May 16 00:54:10.722567 kernel: Console: colour dummy device 80x25
May 16 00:54:10.722573 kernel: ACPI: Core revision 20210730
May 16 00:54:10.722579 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 00:54:10.722586 kernel: pid_max: default: 32768 minimum: 301
May 16 00:54:10.722592 kernel: LSM: Security Framework initializing
May 16 00:54:10.722600 kernel: SELinux: Initializing.
May 16 00:54:10.722606 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:54:10.722612 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:54:10.722618 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 16 00:54:10.722625 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:54:10.722631 kernel: Platform MSI: ITS@0x8080000 domain created
May 16 00:54:10.722637 kernel: PCI/MSI: ITS@0x8080000 domain created
May 16 00:54:10.722643 kernel: Remapping and enabling EFI services.
May 16 00:54:10.722649 kernel: smp: Bringing up secondary CPUs ...
May 16 00:54:10.722656 kernel: Detected PIPT I-cache on CPU1
May 16 00:54:10.722662 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 00:54:10.722668 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 16 00:54:10.722675 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:54:10.722681 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 00:54:10.722687 kernel: Detected PIPT I-cache on CPU2
May 16 00:54:10.722693 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 00:54:10.722700 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 16 00:54:10.722706 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:54:10.722712 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 00:54:10.722719 kernel: Detected PIPT I-cache on CPU3
May 16 00:54:10.722726 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 00:54:10.722732 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 16 00:54:10.722738 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:54:10.722748 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 00:54:10.722756 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:54:10.722762 kernel: SMP: Total of 4 processors activated.
May 16 00:54:10.722769 kernel: CPU features: detected: 32-bit EL0 Support
May 16 00:54:10.722775 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 00:54:10.722782 kernel: CPU features: detected: Common not Private translations
May 16 00:54:10.722788 kernel: CPU features: detected: CRC32 instructions
May 16 00:54:10.722795 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 00:54:10.722802 kernel: CPU features: detected: LSE atomic instructions
May 16 00:54:10.722809 kernel: CPU features: detected: Privileged Access Never
May 16 00:54:10.722815 kernel: CPU features: detected: RAS Extension Support
May 16 00:54:10.722822 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 00:54:10.722857 kernel: CPU: All CPU(s) started at EL1
May 16 00:54:10.722867 kernel: alternatives: patching kernel code
May 16 00:54:10.722874 kernel: devtmpfs: initialized
May 16 00:54:10.722880 kernel: KASLR enabled
May 16 00:54:10.722887 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:54:10.722894 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:54:10.722901 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:54:10.722907 kernel: SMBIOS 3.0.0 present.
May 16 00:54:10.722914 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 16 00:54:10.722920 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:54:10.722928 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 00:54:10.722935 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 00:54:10.722941 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 00:54:10.722948 kernel: audit: initializing netlink subsys (disabled)
May 16 00:54:10.722954 kernel: audit: type=2000 audit(0.032:1): state=initialized audit_enabled=0 res=1
May 16 00:54:10.722961 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:54:10.722967 kernel: cpuidle: using governor menu
May 16 00:54:10.722974 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 00:54:10.722980 kernel: ASID allocator initialised with 32768 entries
May 16 00:54:10.722988 kernel: ACPI: bus type PCI registered
May 16 00:54:10.726658 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:54:10.726680 kernel: Serial: AMBA PL011 UART driver
May 16 00:54:10.726687 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:54:10.726694 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 16 00:54:10.726701 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:54:10.726708 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 16 00:54:10.726715 kernel: cryptd: max_cpu_qlen set to 1000
May 16 00:54:10.726721 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 00:54:10.726733 kernel: ACPI: Added _OSI(Module Device)
May 16 00:54:10.726740 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:54:10.726746 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:54:10.726753 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:54:10.726760 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 16 00:54:10.726766 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 16 00:54:10.726773 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 16 00:54:10.726779 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:54:10.726786 kernel: ACPI: Interpreter enabled
May 16 00:54:10.726794 kernel: ACPI: Using GIC for interrupt routing
May 16 00:54:10.726800 kernel: ACPI: MCFG table detected, 1 entries
May 16 00:54:10.726807 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 00:54:10.726814 kernel: printk: console [ttyAMA0] enabled
May 16 00:54:10.726826 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:54:10.726948 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:54:10.727021 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 00:54:10.727087 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 00:54:10.727149 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 00:54:10.727211 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 00:54:10.727220 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 00:54:10.727227 kernel: PCI host bridge to bus 0000:00
May 16 00:54:10.727295 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 00:54:10.727351 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 00:54:10.727406 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 00:54:10.727463 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:54:10.727570 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 16 00:54:10.727657 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:54:10.727727 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 16 00:54:10.727796 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 16 00:54:10.727856 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:54:10.727918 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:54:10.727978 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 16 00:54:10.728038 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 16 00:54:10.728092 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 00:54:10.728145 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 00:54:10.728198 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 00:54:10.728207 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 00:54:10.728214 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 00:54:10.728222 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 00:54:10.728229 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 00:54:10.728236 kernel: iommu: Default domain type: Translated
May 16 00:54:10.728242 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 00:54:10.728249 kernel: vgaarb: loaded
May 16 00:54:10.728255 kernel: pps_core: LinuxPPS API ver. 1 registered
May 16 00:54:10.728262 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 16 00:54:10.728269 kernel: PTP clock support registered
May 16 00:54:10.728276 kernel: Registered efivars operations
May 16 00:54:10.728284 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 00:54:10.728291 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:54:10.728297 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:54:10.728304 kernel: pnp: PnP ACPI init
May 16 00:54:10.728366 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 00:54:10.728376 kernel: pnp: PnP ACPI: found 1 devices
May 16 00:54:10.728382 kernel: NET: Registered PF_INET protocol family
May 16 00:54:10.728389 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:54:10.728398 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:54:10.728404 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:54:10.728411 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:54:10.728418 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 16 00:54:10.728424 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:54:10.728431 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:54:10.728437 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:54:10.728444 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:54:10.728451 kernel: PCI: CLS 0 bytes, default 64
May 16 00:54:10.728459 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 16 00:54:10.728465 kernel: kvm [1]: HYP mode not available
May 16 00:54:10.728472 kernel: Initialise system trusted keyrings
May 16 00:54:10.728479 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:54:10.728485 kernel: Key type asymmetric registered
May 16 00:54:10.728492 kernel: Asymmetric key parser 'x509' registered
May 16 00:54:10.728498 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 00:54:10.728505 kernel: io scheduler mq-deadline registered
May 16 00:54:10.728512 kernel: io scheduler kyber registered
May 16 00:54:10.728519 kernel: io scheduler bfq registered
May 16 00:54:10.728531 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 00:54:10.728550 kernel: ACPI: button: Power Button [PWRB]
May 16 00:54:10.728557 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 00:54:10.728628 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 00:54:10.728637 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:54:10.728644 kernel: thunder_xcv, ver 1.0
May 16 00:54:10.728650 kernel: thunder_bgx, ver 1.0
May 16 00:54:10.728657 kernel: nicpf, ver 1.0
May 16 00:54:10.728666 kernel: nicvf, ver 1.0
May 16 00:54:10.728733 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 00:54:10.728789 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:54:10 UTC (1747356850)
May 16 00:54:10.728798 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 00:54:10.728804 kernel: NET: Registered PF_INET6 protocol family
May 16 00:54:10.728811 kernel: Segment Routing with IPv6
May 16 00:54:10.728817 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:54:10.728824 kernel: NET: Registered PF_PACKET protocol family
May 16 00:54:10.728832 kernel: Key type dns_resolver registered
May 16 00:54:10.728839 kernel: registered taskstats version 1
May 16 00:54:10.728845 kernel: Loading compiled-in X.509 certificates
May 16 00:54:10.728852 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 2793d535c1de6f1789b22ef06bd5666144f4eeb2'
May 16 00:54:10.728859 kernel: Key type .fscrypt registered
May 16 00:54:10.728865 kernel: Key type fscrypt-provisioning registered
May 16 00:54:10.728872 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 00:54:10.728878 kernel: ima: Allocated hash algorithm: sha1
May 16 00:54:10.728885 kernel: ima: No architecture policies found
May 16 00:54:10.728893 kernel: clk: Disabling unused clocks
May 16 00:54:10.728899 kernel: Freeing unused kernel memory: 36480K
May 16 00:54:10.728905 kernel: Run /init as init process
May 16 00:54:10.728912 kernel: with arguments:
May 16 00:54:10.728918 kernel: /init
May 16 00:54:10.728925 kernel: with environment:
May 16 00:54:10.728931 kernel: HOME=/
May 16 00:54:10.728937 kernel: TERM=linux
May 16 00:54:10.728944 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 00:54:10.728953 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 16 00:54:10.728962 systemd[1]: Detected virtualization kvm.
May 16 00:54:10.728969 systemd[1]: Detected architecture arm64.
May 16 00:54:10.728976 systemd[1]: Running in initrd.
May 16 00:54:10.728983 systemd[1]: No hostname configured, using default hostname.
May 16 00:54:10.728990 systemd[1]: Hostname set to .
May 16 00:54:10.728997 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:54:10.729005 systemd[1]: Queued start job for default target initrd.target.
May 16 00:54:10.729012 systemd[1]: Started systemd-ask-password-console.path.
May 16 00:54:10.729019 systemd[1]: Reached target cryptsetup.target.
May 16 00:54:10.729026 systemd[1]: Reached target paths.target.
May 16 00:54:10.729033 systemd[1]: Reached target slices.target.
May 16 00:54:10.729040 systemd[1]: Reached target swap.target.
May 16 00:54:10.729047 systemd[1]: Reached target timers.target.
May 16 00:54:10.729054 systemd[1]: Listening on iscsid.socket.
May 16 00:54:10.729062 systemd[1]: Listening on iscsiuio.socket.
May 16 00:54:10.729069 systemd[1]: Listening on systemd-journald-audit.socket.
May 16 00:54:10.729076 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 16 00:54:10.729083 systemd[1]: Listening on systemd-journald.socket.
May 16 00:54:10.729090 systemd[1]: Listening on systemd-networkd.socket.
May 16 00:54:10.729097 systemd[1]: Listening on systemd-udevd-control.socket.
May 16 00:54:10.729104 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 16 00:54:10.729111 systemd[1]: Reached target sockets.target.
May 16 00:54:10.729119 systemd[1]: Starting kmod-static-nodes.service...
May 16 00:54:10.729126 systemd[1]: Finished network-cleanup.service.
May 16 00:54:10.729133 systemd[1]: Starting systemd-fsck-usr.service...
May 16 00:54:10.729139 systemd[1]: Starting systemd-journald.service...
May 16 00:54:10.729146 systemd[1]: Starting systemd-modules-load.service...
May 16 00:54:10.729153 systemd[1]: Starting systemd-resolved.service...
May 16 00:54:10.729160 systemd[1]: Starting systemd-vconsole-setup.service...
May 16 00:54:10.729167 systemd[1]: Finished kmod-static-nodes.service.
May 16 00:54:10.729174 systemd[1]: Finished systemd-fsck-usr.service.
May 16 00:54:10.729182 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 16 00:54:10.729189 systemd[1]: Finished systemd-vconsole-setup.service.
May 16 00:54:10.729196 kernel: audit: type=1130 audit(1747356850.722:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.729204 systemd[1]: Starting dracut-cmdline-ask.service...
May 16 00:54:10.729214 systemd-journald[290]: Journal started
May 16 00:54:10.729253 systemd-journald[290]: Runtime Journal (/run/log/journal/068544ba8a3341a481e395eafcf79e8f) is 6.0M, max 48.7M, 42.6M free.
May 16 00:54:10.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.715772 systemd-modules-load[291]: Inserted module 'overlay'
May 16 00:54:10.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.731587 systemd[1]: Started systemd-journald.service.
May 16 00:54:10.731608 kernel: audit: type=1130 audit(1747356850.730:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.731582 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 16 00:54:10.737296 kernel: audit: type=1130 audit(1747356850.734:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.743668 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 00:54:10.743135 systemd-resolved[292]: Positive Trust Anchors:
May 16 00:54:10.743151 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:54:10.743178 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 16 00:54:10.747375 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 16 00:54:10.752634 kernel: Bridge firewalling registered
May 16 00:54:10.752656 kernel: audit: type=1130 audit(1747356850.751:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.749940 systemd[1]: Started systemd-resolved.service.
May 16 00:54:10.751895 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 16 00:54:10.751979 systemd[1]: Reached target nss-lookup.target.
May 16 00:54:10.760856 kernel: audit: type=1130 audit(1747356850.756:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.755958 systemd[1]: Finished dracut-cmdline-ask.service.
May 16 00:54:10.758200 systemd[1]: Starting dracut-cmdline.service...
May 16 00:54:10.765555 kernel: SCSI subsystem initialized
May 16 00:54:10.767971 dracut-cmdline[310]: dracut-dracut-053
May 16 00:54:10.770099 dracut-cmdline[310]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8
May 16 00:54:10.776548 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 00:54:10.776577 kernel: device-mapper: uevent: version 1.0.3
May 16 00:54:10.776586 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 16 00:54:10.778424 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 16 00:54:10.779752 systemd[1]: Finished systemd-modules-load.service.
May 16 00:54:10.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.781007 systemd[1]: Starting systemd-sysctl.service...
May 16 00:54:10.783780 kernel: audit: type=1130 audit(1747356850.779:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.789162 systemd[1]: Finished systemd-sysctl.service.
May 16 00:54:10.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.792562 kernel: audit: type=1130 audit(1747356850.789:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.829558 kernel: Loading iSCSI transport class v2.0-870.
May 16 00:54:10.841562 kernel: iscsi: registered transport (tcp)
May 16 00:54:10.857552 kernel: iscsi: registered transport (qla4xxx)
May 16 00:54:10.857574 kernel: QLogic iSCSI HBA Driver
May 16 00:54:10.889890 systemd[1]: Finished dracut-cmdline.service.
May 16 00:54:10.889000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.891330 systemd[1]: Starting dracut-pre-udev.service...
May 16 00:54:10.893664 kernel: audit: type=1130 audit(1747356850.889:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:10.934554 kernel: raid6: neonx8 gen() 13812 MB/s
May 16 00:54:10.951558 kernel: raid6: neonx8 xor() 10837 MB/s
May 16 00:54:10.968549 kernel: raid6: neonx4 gen() 13529 MB/s
May 16 00:54:10.985546 kernel: raid6: neonx4 xor() 11316 MB/s
May 16 00:54:11.002547 kernel: raid6: neonx2 gen() 12946 MB/s
May 16 00:54:11.019548 kernel: raid6: neonx2 xor() 10483 MB/s
May 16 00:54:11.036548 kernel: raid6: neonx1 gen() 10472 MB/s
May 16 00:54:11.053555 kernel: raid6: neonx1 xor() 8762 MB/s
May 16 00:54:11.070546 kernel: raid6: int64x8 gen() 6229 MB/s
May 16 00:54:11.087548 kernel: raid6: int64x8 xor() 3539 MB/s
May 16 00:54:11.104552 kernel: raid6: int64x4 gen() 7245 MB/s
May 16 00:54:11.121550 kernel: raid6: int64x4 xor() 3855 MB/s
May 16 00:54:11.138550 kernel: raid6: int64x2 gen() 6147 MB/s
May 16 00:54:11.155559 kernel: raid6: int64x2 xor() 3317 MB/s
May 16 00:54:11.172552 kernel: raid6: int64x1 gen() 5043 MB/s
May 16 00:54:11.189775 kernel: raid6: int64x1 xor() 2653 MB/s
May 16 00:54:11.189787 kernel: raid6: using algorithm neonx8 gen() 13812 MB/s
May 16 00:54:11.189796 kernel: raid6: .... xor() 10837 MB/s, rmw enabled
May 16 00:54:11.189804 kernel: raid6: using neon recovery algorithm
May 16 00:54:11.200903 kernel: xor: measuring software checksum speed
May 16 00:54:11.200918 kernel: 8regs : 17249 MB/sec
May 16 00:54:11.200927 kernel: 32regs : 20723 MB/sec
May 16 00:54:11.201843 kernel: arm64_neon : 27654 MB/sec
May 16 00:54:11.201853 kernel: xor: using function: arm64_neon (27654 MB/sec)
May 16 00:54:11.255558 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 16 00:54:11.265847 systemd[1]: Finished dracut-pre-udev.service.
May 16 00:54:11.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:11.268000 audit: BPF prog-id=7 op=LOAD
May 16 00:54:11.268000 audit: BPF prog-id=8 op=LOAD
May 16 00:54:11.269217 systemd[1]: Starting systemd-udevd.service...
May 16 00:54:11.270283 kernel: audit: type=1130 audit(1747356851.266:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:11.281101 systemd-udevd[492]: Using default interface naming scheme 'v252'.
May 16 00:54:11.284369 systemd[1]: Started systemd-udevd.service.
May 16 00:54:11.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:11.285661 systemd[1]: Starting dracut-pre-trigger.service...
May 16 00:54:11.297023 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation
May 16 00:54:11.322256 systemd[1]: Finished dracut-pre-trigger.service.
May 16 00:54:11.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:11.323596 systemd[1]: Starting systemd-udev-trigger.service...
May 16 00:54:11.356333 systemd[1]: Finished systemd-udev-trigger.service.
May 16 00:54:11.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:11.382968 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 00:54:11.387648 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 00:54:11.387662 kernel: GPT:9289727 != 19775487
May 16 00:54:11.387671 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 00:54:11.387679 kernel: GPT:9289727 != 19775487
May 16 00:54:11.387687 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 00:54:11.387701 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:54:11.402180 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 16 00:54:11.407226 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 16 00:54:11.409940 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 16 00:54:11.410794 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 16 00:54:11.413828 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (539)
May 16 00:54:11.414682 systemd[1]: Starting disk-uuid.service...
May 16 00:54:11.420258 disk-uuid[563]: Primary Header is updated.
May 16 00:54:11.420258 disk-uuid[563]: Secondary Entries is updated.
May 16 00:54:11.420258 disk-uuid[563]: Secondary Header is updated.
May 16 00:54:11.421088 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 16 00:54:11.424561 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:54:12.436963 disk-uuid[564]: The operation has completed successfully.
May 16 00:54:12.437934 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:54:12.461870 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 00:54:12.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.461962 systemd[1]: Finished disk-uuid.service.
May 16 00:54:12.463303 systemd[1]: Starting verity-setup.service...
May 16 00:54:12.478670 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 16 00:54:12.498238 systemd[1]: Found device dev-mapper-usr.device.
May 16 00:54:12.500167 systemd[1]: Mounting sysusr-usr.mount...
May 16 00:54:12.502088 systemd[1]: Finished verity-setup.service.
May 16 00:54:12.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.547556 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 16 00:54:12.547930 systemd[1]: Mounted sysusr-usr.mount.
May 16 00:54:12.548581 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 16 00:54:12.549249 systemd[1]: Starting ignition-setup.service...
May 16 00:54:12.550956 systemd[1]: Starting parse-ip-for-networkd.service...
May 16 00:54:12.557598 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 00:54:12.557645 kernel: BTRFS info (device vda6): using free space tree
May 16 00:54:12.557655 kernel: BTRFS info (device vda6): has skinny extents
May 16 00:54:12.565660 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 16 00:54:12.572344 systemd[1]: Finished ignition-setup.service.
May 16 00:54:12.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.573693 systemd[1]: Starting ignition-fetch-offline.service...
May 16 00:54:12.633613 systemd[1]: Finished parse-ip-for-networkd.service.
May 16 00:54:12.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.634000 audit: BPF prog-id=9 op=LOAD
May 16 00:54:12.635725 systemd[1]: Starting systemd-networkd.service...
May 16 00:54:12.650084 ignition[650]: Ignition 2.14.0
May 16 00:54:12.650094 ignition[650]: Stage: fetch-offline
May 16 00:54:12.650129 ignition[650]: no configs at "/usr/lib/ignition/base.d"
May 16 00:54:12.650137 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:54:12.650258 ignition[650]: parsed url from cmdline: ""
May 16 00:54:12.650261 ignition[650]: no config URL provided
May 16 00:54:12.650266 ignition[650]: reading system config file "/usr/lib/ignition/user.ign"
May 16 00:54:12.650272 ignition[650]: no config at "/usr/lib/ignition/user.ign"
May 16 00:54:12.650289 ignition[650]: op(1): [started] loading QEMU firmware config module
May 16 00:54:12.650293 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 00:54:12.659752 ignition[650]: op(1): [finished] loading QEMU firmware config module
May 16 00:54:12.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.660260 systemd-networkd[741]: lo: Link UP
May 16 00:54:12.660264 systemd-networkd[741]: lo: Gained carrier
May 16 00:54:12.660633 systemd-networkd[741]: Enumeration completed
May 16 00:54:12.660807 systemd-networkd[741]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:54:12.660872 systemd[1]: Started systemd-networkd.service.
May 16 00:54:12.662086 systemd[1]: Reached target network.target.
May 16 00:54:12.662219 systemd-networkd[741]: eth0: Link UP
May 16 00:54:12.662223 systemd-networkd[741]: eth0: Gained carrier
May 16 00:54:12.663696 systemd[1]: Starting iscsiuio.service...
May 16 00:54:12.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.672656 systemd[1]: Started iscsiuio.service.
May 16 00:54:12.673037 ignition[650]: parsing config with SHA512: fa3e4e251d5b67eebf8dfcd837ce0a1a5f79eaaa6d79a63c083d4a388007aef294e90e03b6c2ac939b4febdff9b14f3c53b7e68f2e8b38b2545822b52743be4e
May 16 00:54:12.678138 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 16 00:54:12.678138 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
May 16 00:54:12.678138 iscsid[747]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
May 16 00:54:12.678138 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 16 00:54:12.678138 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored.
May 16 00:54:12.678138 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 16 00:54:12.678138 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 16 00:54:12.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.674384 systemd[1]: Starting iscsid.service...
May 16 00:54:12.687666 ignition[650]: fetch-offline: fetch-offline passed
May 16 00:54:12.682712 systemd[1]: Started iscsid.service.
May 16 00:54:12.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.687746 ignition[650]: Ignition finished successfully
May 16 00:54:12.684307 systemd[1]: Starting dracut-initqueue.service...
May 16 00:54:12.687244 unknown[650]: fetched base config from "system"
May 16 00:54:12.687252 unknown[650]: fetched user config from "qemu"
May 16 00:54:12.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.688642 systemd-networkd[741]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:54:12.691575 systemd[1]: Finished ignition-fetch-offline.service.
May 16 00:54:12.693166 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 00:54:12.693947 systemd[1]: Starting ignition-kargs.service...
May 16 00:54:12.701953 ignition[755]: Ignition 2.14.0
May 16 00:54:12.697037 systemd[1]: Finished dracut-initqueue.service.
May 16 00:54:12.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.701959 ignition[755]: Stage: kargs
May 16 00:54:12.697952 systemd[1]: Reached target remote-fs-pre.target.
May 16 00:54:12.702041 ignition[755]: no configs at "/usr/lib/ignition/base.d"
May 16 00:54:12.699131 systemd[1]: Reached target remote-cryptsetup.target.
May 16 00:54:12.702050 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:54:12.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.701020 systemd[1]: Reached target remote-fs.target.
May 16 00:54:12.702911 ignition[755]: kargs: kargs passed
May 16 00:54:12.703087 systemd[1]: Starting dracut-pre-mount.service...
May 16 00:54:12.702949 ignition[755]: Ignition finished successfully
May 16 00:54:12.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.705352 systemd[1]: Finished ignition-kargs.service.
May 16 00:54:12.714148 ignition[764]: Ignition 2.14.0
May 16 00:54:12.707225 systemd[1]: Starting ignition-disks.service...
May 16 00:54:12.714154 ignition[764]: Stage: disks
May 16 00:54:12.711465 systemd[1]: Finished dracut-pre-mount.service.
May 16 00:54:12.714241 ignition[764]: no configs at "/usr/lib/ignition/base.d"
May 16 00:54:12.715688 systemd[1]: Finished ignition-disks.service.
May 16 00:54:12.714250 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:54:12.716580 systemd[1]: Reached target initrd-root-device.target.
May 16 00:54:12.714899 ignition[764]: disks: disks passed
May 16 00:54:12.717866 systemd[1]: Reached target local-fs-pre.target.
May 16 00:54:12.714941 ignition[764]: Ignition finished successfully
May 16 00:54:12.719192 systemd[1]: Reached target local-fs.target.
May 16 00:54:12.720307 systemd[1]: Reached target sysinit.target.
May 16 00:54:12.721611 systemd[1]: Reached target basic.target.
May 16 00:54:12.723447 systemd[1]: Starting systemd-fsck-root.service...
May 16 00:54:12.735322 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks
May 16 00:54:12.738038 systemd[1]: Finished systemd-fsck-root.service.
May 16 00:54:12.738000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.739903 systemd[1]: Mounting sysroot.mount...
May 16 00:54:12.745390 systemd[1]: Mounted sysroot.mount.
May 16 00:54:12.746726 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 16 00:54:12.746244 systemd[1]: Reached target initrd-root-fs.target.
May 16 00:54:12.748946 systemd[1]: Mounting sysroot-usr.mount...
May 16 00:54:12.749812 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 16 00:54:12.749853 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 00:54:12.749878 systemd[1]: Reached target ignition-diskful.target.
May 16 00:54:12.751927 systemd[1]: Mounted sysroot-usr.mount.
May 16 00:54:12.753722 systemd[1]: Starting initrd-setup-root.service...
May 16 00:54:12.757870 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory
May 16 00:54:12.761548 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory
May 16 00:54:12.765766 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory
May 16 00:54:12.769486 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 00:54:12.796021 systemd[1]: Finished initrd-setup-root.service.
May 16 00:54:12.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.797678 systemd[1]: Starting ignition-mount.service...
May 16 00:54:12.799006 systemd[1]: Starting sysroot-boot.service...
May 16 00:54:12.803601 bash[827]: umount: /sysroot/usr/share/oem: not mounted.
May 16 00:54:12.812035 ignition[829]: INFO : Ignition 2.14.0
May 16 00:54:12.812984 ignition[829]: INFO : Stage: mount
May 16 00:54:12.813809 ignition[829]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:54:12.814839 ignition[829]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:54:12.816803 ignition[829]: INFO : mount: mount passed
May 16 00:54:12.817679 systemd[1]: Finished sysroot-boot.service.
May 16 00:54:12.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:12.819131 ignition[829]: INFO : Ignition finished successfully
May 16 00:54:12.819203 systemd[1]: Finished ignition-mount.service.
May 16 00:54:12.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:13.509055 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 16 00:54:13.514557 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838)
May 16 00:54:13.516017 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 00:54:13.516029 kernel: BTRFS info (device vda6): using free space tree
May 16 00:54:13.516044 kernel: BTRFS info (device vda6): has skinny extents
May 16 00:54:13.519022 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 16 00:54:13.520491 systemd[1]: Starting ignition-files.service...
May 16 00:54:13.534166 ignition[858]: INFO : Ignition 2.14.0
May 16 00:54:13.534166 ignition[858]: INFO : Stage: files
May 16 00:54:13.535795 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:54:13.535795 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:54:13.535795 ignition[858]: DEBUG : files: compiled without relabeling support, skipping
May 16 00:54:13.539069 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 00:54:13.539069 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 00:54:13.541779 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 00:54:13.541779 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 00:54:13.541779 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 00:54:13.541472 unknown[858]: wrote ssh authorized keys file for user: core
May 16 00:54:13.546793 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
May 16 00:54:13.546793 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
May 16 00:54:13.546793 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:54:13.546793 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:54:13.546793 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 16 00:54:13.546793 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 16 00:54:13.546793 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 16 00:54:13.546793 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
May 16 00:54:14.082618 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
May 16 00:54:14.329803 systemd-networkd[741]: eth0: Gained IPv6LL
May 16 00:54:14.712936 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 16 00:54:14.712936 ignition[858]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
May 16 00:54:14.716957 ignition[858]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 00:54:14.716957 ignition[858]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 00:54:14.716957 ignition[858]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
May 16 00:54:14.716957 ignition[858]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
May 16 00:54:14.716957 ignition[858]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 00:54:14.743010 ignition[858]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 00:54:14.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.748152 ignition[858]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 00:54:14.748152 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:54:14.748152 ignition[858]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:54:14.748152 ignition[858]: INFO : files: files passed
May 16 00:54:14.748152 ignition[858]: INFO : Ignition finished successfully
May 16 00:54:14.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.745474 systemd[1]: Finished ignition-files.service.
May 16 00:54:14.747143 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 16 00:54:14.758986 initrd-setup-root-after-ignition[884]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 16 00:54:14.748044 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 16 00:54:14.762242 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:54:14.748695 systemd[1]: Starting ignition-quench.service...
May 16 00:54:14.752248 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 00:54:14.752335 systemd[1]: Finished ignition-quench.service.
May 16 00:54:14.754392 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 16 00:54:14.755601 systemd[1]: Reached target ignition-complete.target.
May 16 00:54:14.757585 systemd[1]: Starting initrd-parse-etc.service...
May 16 00:54:14.769071 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 00:54:14.769161 systemd[1]: Finished initrd-parse-etc.service.
May 16 00:54:14.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.770675 systemd[1]: Reached target initrd-fs.target.
May 16 00:54:14.771930 systemd[1]: Reached target initrd.target.
May 16 00:54:14.773058 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 16 00:54:14.773733 systemd[1]: Starting dracut-pre-pivot.service...
May 16 00:54:14.783595 systemd[1]: Finished dracut-pre-pivot.service.
May 16 00:54:14.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.784876 systemd[1]: Starting initrd-cleanup.service...
May 16 00:54:14.792013 systemd[1]: Stopped target nss-lookup.target.
May 16 00:54:14.792741 systemd[1]: Stopped target remote-cryptsetup.target.
May 16 00:54:14.794053 systemd[1]: Stopped target timers.target.
May 16 00:54:14.795200 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 00:54:14.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.795296 systemd[1]: Stopped dracut-pre-pivot.service.
May 16 00:54:14.796429 systemd[1]: Stopped target initrd.target.
May 16 00:54:14.797664 systemd[1]: Stopped target basic.target.
May 16 00:54:14.798746 systemd[1]: Stopped target ignition-complete.target.
May 16 00:54:14.799877 systemd[1]: Stopped target ignition-diskful.target.
May 16 00:54:14.801005 systemd[1]: Stopped target initrd-root-device.target.
May 16 00:54:14.802317 systemd[1]: Stopped target remote-fs.target.
May 16 00:54:14.803570 systemd[1]: Stopped target remote-fs-pre.target.
May 16 00:54:14.804899 systemd[1]: Stopped target sysinit.target.
May 16 00:54:14.806008 systemd[1]: Stopped target local-fs.target.
May 16 00:54:14.807137 systemd[1]: Stopped target local-fs-pre.target.
May 16 00:54:14.808272 systemd[1]: Stopped target swap.target.
May 16 00:54:14.810000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.809310 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 00:54:14.809407 systemd[1]: Stopped dracut-pre-mount.service.
May 16 00:54:14.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.810669 systemd[1]: Stopped target cryptsetup.target.
May 16 00:54:14.813000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.811723 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 00:54:14.811813 systemd[1]: Stopped dracut-initqueue.service.
May 16 00:54:14.813131 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 00:54:14.813223 systemd[1]: Stopped ignition-fetch-offline.service.
May 16 00:54:14.814445 systemd[1]: Stopped target paths.target.
May 16 00:54:14.815505 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 00:54:14.819575 systemd[1]: Stopped systemd-ask-password-console.path.
May 16 00:54:14.821189 systemd[1]: Stopped target slices.target.
May 16 00:54:14.822350 systemd[1]: Stopped target sockets.target.
May 16 00:54:14.823416 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 00:54:14.823482 systemd[1]: Closed iscsid.socket.
May 16 00:54:14.824507 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 00:54:14.826000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.824608 systemd[1]: Closed iscsiuio.socket.
May 16 00:54:14.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.825674 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 00:54:14.825764 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 16 00:54:14.826807 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 00:54:14.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.826894 systemd[1]: Stopped ignition-files.service.
May 16 00:54:14.828873 systemd[1]: Stopping ignition-mount.service...
May 16 00:54:14.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.830186 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 00:54:14.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.838594 ignition[899]: INFO : Ignition 2.14.0
May 16 00:54:14.838594 ignition[899]: INFO : Stage: umount
May 16 00:54:14.838594 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:54:14.838594 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:54:14.838594 ignition[899]: INFO : umount: umount passed
May 16 00:54:14.838594 ignition[899]: INFO : Ignition finished successfully
May 16 00:54:14.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.830295 systemd[1]: Stopped kmod-static-nodes.service.
May 16 00:54:14.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.832257 systemd[1]: Stopping sysroot-boot.service...
May 16 00:54:14.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.832829 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 00:54:14.832952 systemd[1]: Stopped systemd-udev-trigger.service.
May 16 00:54:14.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.835838 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 00:54:14.835933 systemd[1]: Stopped dracut-pre-trigger.service.
May 16 00:54:14.839051 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 00:54:14.839137 systemd[1]: Stopped ignition-mount.service.
May 16 00:54:14.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.840635 systemd[1]: Stopped target network.target.
May 16 00:54:14.841624 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 00:54:14.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.841676 systemd[1]: Stopped ignition-disks.service.
May 16 00:54:14.846140 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 00:54:14.846182 systemd[1]: Stopped ignition-kargs.service.
May 16 00:54:14.847511 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 00:54:14.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.847613 systemd[1]: Stopped ignition-setup.service.
May 16 00:54:14.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.850339 systemd[1]: Stopping systemd-networkd.service...
May 16 00:54:14.866000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.851558 systemd[1]: Stopping systemd-resolved.service...
May 16 00:54:14.854426 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 00:54:14.854902 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 00:54:14.872000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.854985 systemd[1]: Finished initrd-cleanup.service.
May 16 00:54:14.855826 systemd-networkd[741]: eth0: DHCPv6 lease lost
May 16 00:54:14.874000 audit: BPF prog-id=9 op=UNLOAD
May 16 00:54:14.856977 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 00:54:14.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.857070 systemd[1]: Stopped systemd-networkd.service.
May 16 00:54:14.858610 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 00:54:14.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.858640 systemd[1]: Closed systemd-networkd.socket.
May 16 00:54:14.880000 audit: BPF prog-id=6 op=UNLOAD
May 16 00:54:14.860662 systemd[1]: Stopping network-cleanup.service...
May 16 00:54:14.862204 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 00:54:14.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.862256 systemd[1]: Stopped parse-ip-for-networkd.service.
May 16 00:54:14.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.863678 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 00:54:14.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.863718 systemd[1]: Stopped systemd-sysctl.service.
May 16 00:54:14.865963 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 00:54:14.888000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.866006 systemd[1]: Stopped systemd-modules-load.service.
May 16 00:54:14.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.866961 systemd[1]: Stopping systemd-udevd.service...
May 16 00:54:14.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.871266 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 00:54:14.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:14.871747 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 00:54:14.871834 systemd[1]: Stopped systemd-resolved.service.
May 16 00:54:14.875651 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 00:54:14.875777 systemd[1]: Stopped systemd-udevd.service.
May 16 00:54:14.876677 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 00:54:14.876747 systemd[1]: Stopped network-cleanup.service.
May 16 00:54:14.878757 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 00:54:14.878790 systemd[1]: Closed systemd-udevd-control.socket.
May 16 00:54:14.880241 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 00:54:14.880271 systemd[1]: Closed systemd-udevd-kernel.socket.
May 16 00:54:14.881600 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 00:54:14.881640 systemd[1]: Stopped dracut-pre-udev.service.
May 16 00:54:14.883084 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 00:54:14.883124 systemd[1]: Stopped dracut-cmdline.service.
May 16 00:54:14.884653 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 00:54:14.884689 systemd[1]: Stopped dracut-cmdline-ask.service.
May 16 00:54:14.886731 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 16 00:54:14.887588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:54:14.887640 systemd[1]: Stopped systemd-vconsole-setup.service.
May 16 00:54:14.889236 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 00:54:14.889330 systemd[1]: Stopped sysroot-boot.service.
May 16 00:54:14.890673 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 00:54:14.890709 systemd[1]: Stopped initrd-setup-root.service.
May 16 00:54:14.892153 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 00:54:14.892235 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 16 00:54:14.918814 systemd-journald[290]: Received SIGTERM from PID 1 (systemd).
May 16 00:54:14.918845 iscsid[747]: iscsid shutting down.
May 16 00:54:14.893505 systemd[1]: Reached target initrd-switch-root.target.
May 16 00:54:14.895905 systemd[1]: Starting initrd-switch-root.service...
May 16 00:54:14.901703 systemd[1]: Switching root.
May 16 00:54:14.921723 systemd-journald[290]: Journal stopped
May 16 00:54:16.863388 kernel: SELinux: Class mctp_socket not defined in policy.
May 16 00:54:16.863440 kernel: SELinux: Class anon_inode not defined in policy.
May 16 00:54:16.863452 kernel: SELinux: the above unknown classes and permissions will be allowed
May 16 00:54:16.863462 kernel: SELinux: policy capability network_peer_controls=1
May 16 00:54:16.863472 kernel: SELinux: policy capability open_perms=1
May 16 00:54:16.863482 kernel: SELinux: policy capability extended_socket_class=1
May 16 00:54:16.863491 kernel: SELinux: policy capability always_check_network=0
May 16 00:54:16.863504 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 00:54:16.863514 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 00:54:16.863532 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 00:54:16.863561 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 00:54:16.863576 systemd[1]: Successfully loaded SELinux policy in 31.169ms.
May 16 00:54:16.863596 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.075ms.
May 16 00:54:16.863607 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 16 00:54:16.863618 systemd[1]: Detected virtualization kvm.
May 16 00:54:16.863630 systemd[1]: Detected architecture arm64.
May 16 00:54:16.863640 systemd[1]: Detected first boot.
May 16 00:54:16.863651 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:54:16.863661 kernel: kauditd_printk_skb: 63 callbacks suppressed
May 16 00:54:16.863672 kernel: audit: type=1400 audit(1747356855.042:74): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 16 00:54:16.863683 kernel: audit: type=1400 audit(1747356855.042:75): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 16 00:54:16.863693 kernel: audit: type=1334 audit(1747356855.043:76): prog-id=10 op=LOAD
May 16 00:54:16.863703 kernel: audit: type=1334 audit(1747356855.043:77): prog-id=10 op=UNLOAD
May 16 00:54:16.863714 kernel: audit: type=1334 audit(1747356855.045:78): prog-id=11 op=LOAD
May 16 00:54:16.863723 kernel: audit: type=1334 audit(1747356855.045:79): prog-id=11 op=UNLOAD
May 16 00:54:16.863733 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 16 00:54:16.863743 kernel: audit: type=1400 audit(1747356855.077:80): avc: denied { associate } for pid=933 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 16 00:54:16.863754 kernel: audit: type=1300 audit(1747356855.077:80): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd89c a1=400013ede0 a2=40001450c0 a3=32 items=0 ppid=916 pid=933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:54:16.863764 kernel: audit: type=1327 audit(1747356855.077:80): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 16 00:54:16.863776 kernel: audit: type=1400 audit(1747356855.078:81): avc: denied { associate } for pid=933 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 16 00:54:16.863788 systemd[1]: Populated /etc with preset unit settings.
May 16 00:54:16.863803 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 16 00:54:16.863815 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 16 00:54:16.863827 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:54:16.863838 systemd[1]: iscsiuio.service: Deactivated successfully.
May 16 00:54:16.863848 systemd[1]: Stopped iscsiuio.service.
May 16 00:54:16.863858 systemd[1]: iscsid.service: Deactivated successfully.
May 16 00:54:16.863868 systemd[1]: Stopped iscsid.service.
May 16 00:54:16.863880 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 00:54:16.863890 systemd[1]: Stopped initrd-switch-root.service.
May 16 00:54:16.863902 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 00:54:16.863913 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 16 00:54:16.863924 systemd[1]: Created slice system-addon\x2drun.slice.
May 16 00:54:16.863934 systemd[1]: Created slice system-getty.slice.
May 16 00:54:16.863944 systemd[1]: Created slice system-modprobe.slice.
May 16 00:54:16.863954 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 16 00:54:16.863966 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 16 00:54:16.863976 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 16 00:54:16.863986 systemd[1]: Created slice user.slice.
May 16 00:54:16.863997 systemd[1]: Started systemd-ask-password-console.path.
May 16 00:54:16.864007 systemd[1]: Started systemd-ask-password-wall.path.
May 16 00:54:16.864017 systemd[1]: Set up automount boot.automount.
May 16 00:54:16.864027 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 16 00:54:16.864037 systemd[1]: Stopped target initrd-switch-root.target.
May 16 00:54:16.864048 systemd[1]: Stopped target initrd-fs.target.
May 16 00:54:16.864061 systemd[1]: Stopped target initrd-root-fs.target.
May 16 00:54:16.864073 systemd[1]: Reached target integritysetup.target.
May 16 00:54:16.864084 systemd[1]: Reached target remote-cryptsetup.target.
May 16 00:54:16.864094 systemd[1]: Reached target remote-fs.target.
May 16 00:54:16.864104 systemd[1]: Reached target slices.target.
May 16 00:54:16.864115 systemd[1]: Reached target swap.target.
May 16 00:54:16.864128 systemd[1]: Reached target torcx.target.
May 16 00:54:16.864139 systemd[1]: Reached target veritysetup.target.
May 16 00:54:16.864150 systemd[1]: Listening on systemd-coredump.socket.
May 16 00:54:16.864161 systemd[1]: Listening on systemd-initctl.socket.
May 16 00:54:16.864172 systemd[1]: Listening on systemd-networkd.socket.
May 16 00:54:16.864183 systemd[1]: Listening on systemd-udevd-control.socket.
May 16 00:54:16.864194 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 16 00:54:16.864205 systemd[1]: Listening on systemd-userdbd.socket.
May 16 00:54:16.864215 systemd[1]: Mounting dev-hugepages.mount...
May 16 00:54:16.864226 systemd[1]: Mounting dev-mqueue.mount...
May 16 00:54:16.864237 systemd[1]: Mounting media.mount...
May 16 00:54:16.864248 systemd[1]: Mounting sys-kernel-debug.mount...
May 16 00:54:16.864259 systemd[1]: Mounting sys-kernel-tracing.mount...
May 16 00:54:16.864269 systemd[1]: Mounting tmp.mount...
May 16 00:54:16.864280 systemd[1]: Starting flatcar-tmpfiles.service...
May 16 00:54:16.864290 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:54:16.864300 systemd[1]: Starting kmod-static-nodes.service...
May 16 00:54:16.864311 systemd[1]: Starting modprobe@configfs.service...
May 16 00:54:16.864323 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:54:16.864333 systemd[1]: Starting modprobe@drm.service...
May 16 00:54:16.864344 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:54:16.864355 systemd[1]: Starting modprobe@fuse.service...
May 16 00:54:16.864366 systemd[1]: Starting modprobe@loop.service...
May 16 00:54:16.864376 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 00:54:16.864386 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 00:54:16.864397 systemd[1]: Stopped systemd-fsck-root.service.
May 16 00:54:16.864407 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 00:54:16.864419 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 00:54:16.864430 kernel: fuse: init (API version 7.34)
May 16 00:54:16.864440 systemd[1]: Stopped systemd-journald.service.
May 16 00:54:16.864451 kernel: loop: module loaded
May 16 00:54:16.864461 systemd[1]: Starting systemd-journald.service...
May 16 00:54:16.864472 systemd[1]: Starting systemd-modules-load.service...
May 16 00:54:16.864482 systemd[1]: Starting systemd-network-generator.service...
May 16 00:54:16.864492 systemd[1]: Starting systemd-remount-fs.service...
May 16 00:54:16.864502 systemd[1]: Starting systemd-udev-trigger.service...
May 16 00:54:16.864512 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 00:54:16.864529 systemd[1]: Stopped verity-setup.service.
May 16 00:54:16.864545 systemd[1]: Mounted dev-hugepages.mount.
May 16 00:54:16.864569 systemd[1]: Mounted dev-mqueue.mount.
May 16 00:54:16.864580 systemd[1]: Mounted media.mount.
May 16 00:54:16.864590 systemd[1]: Mounted sys-kernel-debug.mount.
May 16 00:54:16.864600 systemd[1]: Mounted sys-kernel-tracing.mount.
May 16 00:54:16.864611 systemd[1]: Mounted tmp.mount.
May 16 00:54:16.864621 systemd[1]: Finished kmod-static-nodes.service.
May 16 00:54:16.864631 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 00:54:16.864645 systemd-journald[1003]: Journal started
May 16 00:54:16.864685 systemd-journald[1003]: Runtime Journal (/run/log/journal/068544ba8a3341a481e395eafcf79e8f) is 6.0M, max 48.7M, 42.6M free.
May 16 00:54:14.973000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 00:54:15.042000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 16 00:54:15.042000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 16 00:54:15.043000 audit: BPF prog-id=10 op=LOAD
May 16 00:54:15.043000 audit: BPF prog-id=10 op=UNLOAD
May 16 00:54:15.045000 audit: BPF prog-id=11 op=LOAD
May 16 00:54:15.045000 audit: BPF prog-id=11 op=UNLOAD
May 16 00:54:15.077000 audit[933]: AVC avc: denied { associate } for pid=933 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 16 00:54:15.077000 audit[933]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001bd89c a1=400013ede0 a2=40001450c0 a3=32 items=0 ppid=916 pid=933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:54:15.077000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 16 00:54:15.078000 audit[933]: AVC avc: denied { associate } for pid=933 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 16 00:54:15.078000 audit[933]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001bd975 a2=1ed a3=0 items=2 ppid=916 pid=933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:54:15.078000 audit: CWD cwd="/"
May 16 00:54:15.078000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:54:15.078000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:54:15.078000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 16 00:54:16.739000 audit: BPF prog-id=12 op=LOAD
May 16 00:54:16.739000 audit: BPF prog-id=3 op=UNLOAD
May 16 00:54:16.739000 audit: BPF prog-id=13 op=LOAD
May 16 00:54:16.739000 audit: BPF prog-id=14 op=LOAD
May 16 00:54:16.739000 audit: BPF prog-id=4 op=UNLOAD
May 16 00:54:16.739000 audit: BPF prog-id=5 op=UNLOAD
May 16 00:54:16.741000 audit: BPF prog-id=15 op=LOAD
May 16 00:54:16.741000 audit: BPF prog-id=12 op=UNLOAD
May 16 00:54:16.741000 audit: BPF prog-id=16 op=LOAD
May 16 00:54:16.741000 audit: BPF prog-id=17 op=LOAD
May 16 00:54:16.741000 audit: BPF prog-id=13 op=UNLOAD
May 16 00:54:16.741000 audit: BPF prog-id=14 op=UNLOAD
May 16 00:54:16.742000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.744000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.746000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.753000 audit: BPF prog-id=15 op=UNLOAD
May 16 00:54:16.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.834000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.837000 audit: BPF prog-id=18 op=LOAD
May 16 00:54:16.837000 audit: BPF prog-id=19 op=LOAD
May 16 00:54:16.837000 audit: BPF prog-id=20 op=LOAD
May 16 00:54:16.837000 audit: BPF prog-id=16 op=UNLOAD
May 16 00:54:16.837000 audit: BPF prog-id=17 op=UNLOAD
May 16 00:54:16.851000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.861000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 16 00:54:16.861000 audit[1003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffdae6a8b0 a2=4000 a3=1 items=0 ppid=1 pid=1003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:54:16.861000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 16 00:54:16.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:15.075778 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 16 00:54:16.738741 systemd[1]: Queued start job for default target multi-user.target.
May 16 00:54:15.076397 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 16 00:54:16.738753 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 16 00:54:15.076416 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 16 00:54:16.742336 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 00:54:15.076445 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 16 00:54:15.076455 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 16 00:54:15.076485 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 16 00:54:15.076497 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 16 00:54:15.076733 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 16 00:54:15.076768 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 16 00:54:15.076780 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 16 00:54:15.077655 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 16 00:54:16.871746 systemd[1]: Finished modprobe@configfs.service.
May 16 00:54:15.077686 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 16 00:54:16.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.871000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:15.077704 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 16 00:54:15.077719 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 16 00:54:15.077736 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 16 00:54:15.077750 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:15Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 16 00:54:16.498687 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:16Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 16 00:54:16.498936 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:16Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 16 00:54:16.499044 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:16Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 16 00:54:16.499205 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:16Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 16 00:54:16.499253 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:16Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 16 00:54:16.499308 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2025-05-16T00:54:16Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 16 00:54:16.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.873549 systemd[1]: Started systemd-journald.service.
May 16 00:54:16.873868 systemd[1]: Finished flatcar-tmpfiles.service.
May 16 00:54:16.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.874719 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:54:16.874898 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:54:16.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.874000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.875750 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:54:16.875905 systemd[1]: Finished modprobe@drm.service.
May 16 00:54:16.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.875000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.876731 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:54:16.876883 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:54:16.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.877758 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 00:54:16.877906 systemd[1]: Finished modprobe@fuse.service.
May 16 00:54:16.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.877000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.878733 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:54:16.878880 systemd[1]: Finished modprobe@loop.service.
May 16 00:54:16.878000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.878000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.879768 systemd[1]: Finished systemd-modules-load.service.
May 16 00:54:16.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.880704 systemd[1]: Finished systemd-network-generator.service.
May 16 00:54:16.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.881790 systemd[1]: Finished systemd-remount-fs.service.
May 16 00:54:16.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.882846 systemd[1]: Reached target network-pre.target.
May 16 00:54:16.884511 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 16 00:54:16.886603 systemd[1]: Mounting sys-kernel-config.mount...
May 16 00:54:16.887202 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 00:54:16.889875 systemd[1]: Starting systemd-hwdb-update.service...
May 16 00:54:16.891609 systemd[1]: Starting systemd-journal-flush.service...
May 16 00:54:16.892304 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:54:16.893270 systemd[1]: Starting systemd-random-seed.service...
May 16 00:54:16.894107 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:54:16.895070 systemd[1]: Starting systemd-sysctl.service...
May 16 00:54:16.896861 systemd[1]: Starting systemd-sysusers.service...
May 16 00:54:16.898257 systemd-journald[1003]: Time spent on flushing to /var/log/journal/068544ba8a3341a481e395eafcf79e8f is 13.128ms for 977 entries.
May 16 00:54:16.898257 systemd-journald[1003]: System Journal (/var/log/journal/068544ba8a3341a481e395eafcf79e8f) is 8.0M, max 195.6M, 187.6M free.
May 16 00:54:16.924146 systemd-journald[1003]: Received client request to flush runtime journal.
May 16 00:54:16.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:16.900374 systemd[1]: Finished systemd-udev-trigger.service.
May 16 00:54:16.924611 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 16 00:54:16.901325 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 16 00:54:16.902261 systemd[1]: Mounted sys-kernel-config.mount.
May 16 00:54:16.904080 systemd[1]: Starting systemd-udev-settle.service...
May 16 00:54:16.910046 systemd[1]: Finished systemd-random-seed.service.
May 16 00:54:16.910929 systemd[1]: Reached target first-boot-complete.target.
May 16 00:54:16.916829 systemd[1]: Finished systemd-sysctl.service.
May 16 00:54:16.919189 systemd[1]: Finished systemd-sysusers.service.
May 16 00:54:16.924998 systemd[1]: Finished systemd-journal-flush.service.
May 16 00:54:16.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.252184 systemd[1]: Finished systemd-hwdb-update.service.
May 16 00:54:17.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.253000 audit: BPF prog-id=21 op=LOAD
May 16 00:54:17.253000 audit: BPF prog-id=22 op=LOAD
May 16 00:54:17.253000 audit: BPF prog-id=7 op=UNLOAD
May 16 00:54:17.253000 audit: BPF prog-id=8 op=UNLOAD
May 16 00:54:17.254317 systemd[1]: Starting systemd-udevd.service...
May 16 00:54:17.272771 systemd-udevd[1036]: Using default interface naming scheme 'v252'.
May 16 00:54:17.283865 systemd[1]: Started systemd-udevd.service.
May 16 00:54:17.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.284000 audit: BPF prog-id=23 op=LOAD
May 16 00:54:17.287964 systemd[1]: Starting systemd-networkd.service...
May 16 00:54:17.293000 audit: BPF prog-id=24 op=LOAD
May 16 00:54:17.293000 audit: BPF prog-id=25 op=LOAD
May 16 00:54:17.293000 audit: BPF prog-id=26 op=LOAD
May 16 00:54:17.295246 systemd[1]: Starting systemd-userdbd.service...
May 16 00:54:17.301145 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
May 16 00:54:17.331784 systemd[1]: Started systemd-userdbd.service.
May 16 00:54:17.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.366659 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 16 00:54:17.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.389907 systemd[1]: Finished systemd-udev-settle.service.
May 16 00:54:17.392055 systemd[1]: Starting lvm2-activation-early.service...
May 16 00:54:17.393847 systemd-networkd[1043]: lo: Link UP
May 16 00:54:17.393853 systemd-networkd[1043]: lo: Gained carrier
May 16 00:54:17.394174 systemd-networkd[1043]: Enumeration completed
May 16 00:54:17.394258 systemd[1]: Started systemd-networkd.service.
May 16 00:54:17.394275 systemd-networkd[1043]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:54:17.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.397268 systemd-networkd[1043]: eth0: Link UP
May 16 00:54:17.397276 systemd-networkd[1043]: eth0: Gained carrier
May 16 00:54:17.405345 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:54:17.416661 systemd-networkd[1043]: eth0: DHCPv4 address 10.0.0.138/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:54:17.433424 systemd[1]: Finished lvm2-activation-early.service.
May 16 00:54:17.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.434444 systemd[1]: Reached target cryptsetup.target.
May 16 00:54:17.436346 systemd[1]: Starting lvm2-activation.service...
May 16 00:54:17.439558 lvm[1070]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:54:17.467383 systemd[1]: Finished lvm2-activation.service.
May 16 00:54:17.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.468331 systemd[1]: Reached target local-fs-pre.target.
May 16 00:54:17.469192 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 00:54:17.469225 systemd[1]: Reached target local-fs.target.
May 16 00:54:17.470027 systemd[1]: Reached target machines.target.
May 16 00:54:17.471916 systemd[1]: Starting ldconfig.service...
May 16 00:54:17.472989 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:54:17.473042 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:54:17.474073 systemd[1]: Starting systemd-boot-update.service...
May 16 00:54:17.476090 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 16 00:54:17.478330 systemd[1]: Starting systemd-machine-id-commit.service...
May 16 00:54:17.480342 systemd[1]: Starting systemd-sysext.service...
May 16 00:54:17.481419 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1072 (bootctl)
May 16 00:54:17.482479 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 16 00:54:17.488789 systemd[1]: Unmounting usr-share-oem.mount...
May 16 00:54:17.492390 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 16 00:54:17.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.497792 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 16 00:54:17.497970 systemd[1]: Unmounted usr-share-oem.mount.
May 16 00:54:17.515564 kernel: loop0: detected capacity change from 0 to 211168
May 16 00:54:17.560000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.560290 systemd[1]: Finished systemd-machine-id-commit.service.
May 16 00:54:17.566345 systemd-fsck[1083]: fsck.fat 4.2 (2021-01-31)
May 16 00:54:17.566345 systemd-fsck[1083]: /dev/vda1: 236 files, 117310/258078 clusters
May 16 00:54:17.566697 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 00:54:17.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.568118 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 16 00:54:17.588566 kernel: loop1: detected capacity change from 0 to 211168
May 16 00:54:17.592957 (sd-sysext)[1086]: Using extensions 'kubernetes'.
May 16 00:54:17.593395 (sd-sysext)[1086]: Merged extensions into '/usr'.
May 16 00:54:17.613154 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:54:17.614407 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:54:17.616241 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:54:17.618262 systemd[1]: Starting modprobe@loop.service...
May 16 00:54:17.618988 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:54:17.619114 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:54:17.619847 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:54:17.619966 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:54:17.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.621056 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:54:17.621162 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:54:17.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.621000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.622320 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:54:17.622431 systemd[1]: Finished modprobe@loop.service.
May 16 00:54:17.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.623558 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:54:17.623664 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:54:17.651626 ldconfig[1071]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 00:54:17.655219 systemd[1]: Finished ldconfig.service.
May 16 00:54:17.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.853877 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 00:54:17.855633 systemd[1]: Mounting boot.mount...
May 16 00:54:17.857346 systemd[1]: Mounting usr-share-oem.mount...
May 16 00:54:17.863149 systemd[1]: Mounted boot.mount.
May 16 00:54:17.863924 systemd[1]: Mounted usr-share-oem.mount.
May 16 00:54:17.865638 systemd[1]: Finished systemd-sysext.service.
May 16 00:54:17.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.867458 systemd[1]: Starting ensure-sysext.service...
May 16 00:54:17.869282 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 16 00:54:17.870364 systemd[1]: Finished systemd-boot-update.service.
May 16 00:54:17.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:17.874223 systemd[1]: Reloading.
May 16 00:54:17.882609 systemd-tmpfiles[1094]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 16 00:54:17.884464 systemd-tmpfiles[1094]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 00:54:17.887358 systemd-tmpfiles[1094]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 00:54:17.930500 /usr/lib/systemd/system-generators/torcx-generator[1114]: time="2025-05-16T00:54:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 16 00:54:17.930553 /usr/lib/systemd/system-generators/torcx-generator[1114]: time="2025-05-16T00:54:17Z" level=info msg="torcx already run"
May 16 00:54:17.971915 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 16 00:54:17.971939 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 16 00:54:17.987217 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:54:18.028000 audit: BPF prog-id=27 op=LOAD
May 16 00:54:18.028000 audit: BPF prog-id=28 op=LOAD
May 16 00:54:18.028000 audit: BPF prog-id=21 op=UNLOAD
May 16 00:54:18.028000 audit: BPF prog-id=22 op=UNLOAD
May 16 00:54:18.029000 audit: BPF prog-id=29 op=LOAD
May 16 00:54:18.029000 audit: BPF prog-id=23 op=UNLOAD
May 16 00:54:18.030000 audit: BPF prog-id=30 op=LOAD
May 16 00:54:18.030000 audit: BPF prog-id=24 op=UNLOAD
May 16 00:54:18.030000 audit: BPF prog-id=31 op=LOAD
May 16 00:54:18.030000 audit: BPF prog-id=32 op=LOAD
May 16 00:54:18.030000 audit: BPF prog-id=25 op=UNLOAD
May 16 00:54:18.030000 audit: BPF prog-id=26 op=UNLOAD
May 16 00:54:18.032000 audit: BPF prog-id=33 op=LOAD
May 16 00:54:18.032000 audit: BPF prog-id=18 op=UNLOAD
May 16 00:54:18.032000 audit: BPF prog-id=34 op=LOAD
May 16 00:54:18.032000 audit: BPF prog-id=35 op=LOAD
May 16 00:54:18.032000 audit: BPF prog-id=19 op=UNLOAD
May 16 00:54:18.032000 audit: BPF prog-id=20 op=UNLOAD
May 16 00:54:18.034705 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 16 00:54:18.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.039189 systemd[1]: Starting audit-rules.service...
May 16 00:54:18.041162 systemd[1]: Starting clean-ca-certificates.service...
May 16 00:54:18.043324 systemd[1]: Starting systemd-journal-catalog-update.service...
May 16 00:54:18.046000 audit: BPF prog-id=36 op=LOAD
May 16 00:54:18.047389 systemd[1]: Starting systemd-resolved.service...
May 16 00:54:18.048000 audit: BPF prog-id=37 op=LOAD
May 16 00:54:18.050787 systemd[1]: Starting systemd-timesyncd.service...
May 16 00:54:18.052824 systemd[1]: Starting systemd-update-utmp.service...
May 16 00:54:18.054335 systemd[1]: Finished clean-ca-certificates.service.
May 16 00:54:18.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.057249 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:54:18.058000 audit[1164]: SYSTEM_BOOT pid=1164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.062363 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:54:18.063653 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:54:18.065783 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:54:18.067692 systemd[1]: Starting modprobe@loop.service...
May 16 00:54:18.068424 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:54:18.068632 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:54:18.068780 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:54:18.070010 systemd[1]: Finished systemd-journal-catalog-update.service.
May 16 00:54:18.070000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.071367 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:54:18.071481 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:54:18.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.072613 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:54:18.072761 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:54:18.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.073934 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:54:18.074041 systemd[1]: Finished modprobe@loop.service.
May 16 00:54:18.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.075365 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:54:18.075484 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:54:18.076848 systemd[1]: Starting systemd-update-done.service...
May 16 00:54:18.078104 systemd[1]: Finished systemd-update-utmp.service.
May 16 00:54:18.078000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.081160 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:54:18.082378 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:54:18.084327 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:54:18.087413 systemd[1]: Starting modprobe@loop.service...
May 16 00:54:18.088245 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:54:18.088370 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:54:18.088472 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:54:18.089282 systemd[1]: Finished systemd-update-done.service.
May 16 00:54:18.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.090464 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:54:18.090643 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:54:18.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.091622 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:54:18.091737 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:54:18.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.092812 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:54:18.092920 systemd[1]: Finished modprobe@loop.service.
May 16 00:54:18.093952 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:54:18.094044 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:54:18.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:54:18.094445 augenrules[1177]: No rules
May 16 00:54:18.093000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 16 00:54:18.093000 audit[1177]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe7f56fa0 a2=420 a3=0 items=0 ppid=1153 pid=1177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:54:18.093000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 16 00:54:18.096322 systemd[1]: Finished audit-rules.service.
May 16 00:54:18.097743 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:54:18.098954 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:54:18.100839 systemd[1]: Starting modprobe@drm.service...
May 16 00:54:18.102993 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:54:18.104962 systemd[1]: Starting modprobe@loop.service...
May 16 00:54:18.105858 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:54:18.105979 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:54:18.107129 systemd[1]: Starting systemd-networkd-wait-online.service...
May 16 00:54:18.108182 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:54:18.109222 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:54:18.109389 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:54:18.110427 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:54:18.110572 systemd[1]: Finished modprobe@drm.service.
May 16 00:54:18.111662 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:54:18.111774 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:54:18.112827 systemd[1]: Started systemd-timesyncd.service.
May 16 00:54:17.674977 systemd-resolved[1157]: Positive Trust Anchors:
May 16 00:54:17.695441 systemd-journald[1003]: Time jumped backwards, rotating.
May 16 00:54:17.675049 systemd-timesyncd[1162]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 00:54:17.675141 systemd-timesyncd[1162]: Initial clock synchronization to Fri 2025-05-16 00:54:17.674870 UTC.
May 16 00:54:17.675396 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:54:17.675499 systemd[1]: Finished modprobe@loop.service.
May 16 00:54:17.675805 systemd-resolved[1157]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:54:17.675833 systemd-resolved[1157]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 16 00:54:17.677601 systemd[1]: Reached target time-set.target.
May 16 00:54:17.678400 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:54:17.678433 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:54:17.678894 systemd[1]: Finished ensure-sysext.service.
May 16 00:54:17.689113 systemd-resolved[1157]: Defaulting to hostname 'linux'.
May 16 00:54:17.690406 systemd[1]: Started systemd-resolved.service.
May 16 00:54:17.691074 systemd[1]: Reached target network.target. May 16 00:54:17.691790 systemd[1]: Reached target nss-lookup.target. May 16 00:54:17.692471 systemd[1]: Reached target sysinit.target. May 16 00:54:17.693263 systemd[1]: Started motdgen.path. May 16 00:54:17.694076 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 16 00:54:17.695167 systemd[1]: Started logrotate.timer. May 16 00:54:17.695836 systemd[1]: Started mdadm.timer. May 16 00:54:17.696321 systemd[1]: Started systemd-tmpfiles-clean.timer. May 16 00:54:17.696930 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:54:17.696958 systemd[1]: Reached target paths.target. May 16 00:54:17.697476 systemd[1]: Reached target timers.target. May 16 00:54:17.698321 systemd[1]: Listening on dbus.socket. May 16 00:54:17.699926 systemd[1]: Starting docker.socket... May 16 00:54:17.702828 systemd[1]: Listening on sshd.socket. May 16 00:54:17.703466 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:54:17.703889 systemd[1]: Listening on docker.socket. May 16 00:54:17.704506 systemd[1]: Reached target sockets.target. May 16 00:54:17.705132 systemd[1]: Reached target basic.target. May 16 00:54:17.705688 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:54:17.705719 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:54:17.706620 systemd[1]: Starting containerd.service... May 16 00:54:17.708176 systemd[1]: Starting dbus.service... May 16 00:54:17.709785 systemd[1]: Starting enable-oem-cloudinit.service... May 16 00:54:17.711593 systemd[1]: Starting extend-filesystems.service... 
May 16 00:54:17.712342 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 16 00:54:17.714337 systemd[1]: Starting motdgen.service... May 16 00:54:17.716232 systemd[1]: Starting ssh-key-proc-cmdline.service... May 16 00:54:17.718168 systemd[1]: Starting sshd-keygen.service... May 16 00:54:17.722856 systemd[1]: Starting systemd-logind.service... May 16 00:54:17.723477 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:54:17.723564 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:54:17.724498 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:54:17.725184 systemd[1]: Starting update-engine.service... May 16 00:54:17.726743 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 16 00:54:17.729395 jq[1209]: true May 16 00:54:17.728878 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:54:17.736973 jq[1196]: false May 16 00:54:17.729042 systemd[1]: Finished ssh-key-proc-cmdline.service. May 16 00:54:17.744345 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:54:17.744554 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 16 00:54:17.748200 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:54:17.748375 systemd[1]: Finished motdgen.service. May 16 00:54:17.749811 dbus-daemon[1195]: [system] SELinux support is enabled May 16 00:54:17.749973 systemd[1]: Started dbus.service. 
May 16 00:54:17.752180 extend-filesystems[1197]: Found loop1 May 16 00:54:17.752180 extend-filesystems[1197]: Found vda May 16 00:54:17.752180 extend-filesystems[1197]: Found vda1 May 16 00:54:17.752180 extend-filesystems[1197]: Found vda2 May 16 00:54:17.752180 extend-filesystems[1197]: Found vda3 May 16 00:54:17.752180 extend-filesystems[1197]: Found usr May 16 00:54:17.752180 extend-filesystems[1197]: Found vda4 May 16 00:54:17.752180 extend-filesystems[1197]: Found vda6 May 16 00:54:17.752180 extend-filesystems[1197]: Found vda7 May 16 00:54:17.752180 extend-filesystems[1197]: Found vda9 May 16 00:54:17.752180 extend-filesystems[1197]: Checking size of /dev/vda9 May 16 00:54:17.752382 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:54:17.774034 jq[1217]: true May 16 00:54:17.752418 systemd[1]: Reached target system-config.target. May 16 00:54:17.753123 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:54:17.753137 systemd[1]: Reached target user-config.target. May 16 00:54:17.780037 extend-filesystems[1197]: Resized partition /dev/vda9 May 16 00:54:17.794555 extend-filesystems[1233]: resize2fs 1.46.5 (30-Dec-2021) May 16 00:54:17.809805 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 00:54:17.810288 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) May 16 00:54:17.810660 systemd-logind[1203]: New seat seat0. May 16 00:54:17.812324 systemd[1]: Started systemd-logind.service. May 16 00:54:17.826875 update_engine[1207]: I0516 00:54:17.826622 1207 main.cc:92] Flatcar Update Engine starting May 16 00:54:17.829180 systemd[1]: Started update-engine.service. 
May 16 00:54:17.832583 update_engine[1207]: I0516 00:54:17.829177 1207 update_check_scheduler.cc:74] Next update check in 10m17s May 16 00:54:17.837905 systemd[1]: Started locksmithd.service. May 16 00:54:17.842857 env[1216]: time="2025-05-16T00:54:17.842810950Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 16 00:54:17.846789 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 00:54:17.856525 extend-filesystems[1233]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 00:54:17.856525 extend-filesystems[1233]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:54:17.856525 extend-filesystems[1233]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 00:54:17.862327 extend-filesystems[1197]: Resized filesystem in /dev/vda9 May 16 00:54:17.862983 bash[1243]: Updated "/home/core/.ssh/authorized_keys" May 16 00:54:17.858179 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:54:17.858359 systemd[1]: Finished extend-filesystems.service. May 16 00:54:17.859480 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 16 00:54:17.866406 env[1216]: time="2025-05-16T00:54:17.866366350Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:54:17.866595 env[1216]: time="2025-05-16T00:54:17.866526430Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:54:17.867883 env[1216]: time="2025-05-16T00:54:17.867838710Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:54:17.867883 env[1216]: time="2025-05-16T00:54:17.867872070Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:54:17.868114 env[1216]: time="2025-05-16T00:54:17.868082950Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:54:17.868159 env[1216]: time="2025-05-16T00:54:17.868108310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 00:54:17.868180 env[1216]: time="2025-05-16T00:54:17.868156670Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 16 00:54:17.868180 env[1216]: time="2025-05-16T00:54:17.868176110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:54:17.868272 env[1216]: time="2025-05-16T00:54:17.868257150Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 16 00:54:17.868553 env[1216]: time="2025-05-16T00:54:17.868523030Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:54:17.868694 env[1216]: time="2025-05-16T00:54:17.868673910Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:54:17.868716 env[1216]: time="2025-05-16T00:54:17.868694510Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:54:17.868796 env[1216]: time="2025-05-16T00:54:17.868778630Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 16 00:54:17.868827 env[1216]: time="2025-05-16T00:54:17.868797710Z" level=info msg="metadata content store policy set" policy=shared May 16 00:54:17.871933 env[1216]: time="2025-05-16T00:54:17.871903230Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:54:17.871977 env[1216]: time="2025-05-16T00:54:17.871942870Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:54:17.871977 env[1216]: time="2025-05-16T00:54:17.871956670Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:54:17.872025 env[1216]: time="2025-05-16T00:54:17.871991470Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:54:17.872025 env[1216]: time="2025-05-16T00:54:17.872006990Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:54:17.872025 env[1216]: time="2025-05-16T00:54:17.872021110Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:54:17.872091 env[1216]: time="2025-05-16T00:54:17.872034270Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 16 00:54:17.872463 env[1216]: time="2025-05-16T00:54:17.872430630Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:54:17.872463 env[1216]: time="2025-05-16T00:54:17.872460950Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 16 00:54:17.872519 env[1216]: time="2025-05-16T00:54:17.872475430Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:54:17.872519 env[1216]: time="2025-05-16T00:54:17.872489350Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:54:17.872519 env[1216]: time="2025-05-16T00:54:17.872502110Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:54:17.872626 env[1216]: time="2025-05-16T00:54:17.872607990Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:54:17.872821 env[1216]: time="2025-05-16T00:54:17.872731070Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873331030Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873389030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873409030Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873528750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873546910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873563550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873580230Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873600430Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873616550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873695870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873711870Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873793550Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873957550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873979670Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:54:17.874100 env[1216]: time="2025-05-16T00:54:17.873995950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 16 00:54:17.874412 env[1216]: time="2025-05-16T00:54:17.874012230Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:54:17.874412 env[1216]: time="2025-05-16T00:54:17.874032470Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 16 00:54:17.874412 env[1216]: time="2025-05-16T00:54:17.874051150Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:54:17.874412 env[1216]: time="2025-05-16T00:54:17.874073430Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 16 00:54:17.874639 env[1216]: time="2025-05-16T00:54:17.874616270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 16 00:54:17.875174 env[1216]: time="2025-05-16T00:54:17.875111310Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:54:17.875905 env[1216]: time="2025-05-16T00:54:17.875584230Z" level=info msg="Connect containerd service" May 16 00:54:17.875905 env[1216]: time="2025-05-16T00:54:17.875646430Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:54:17.876548 env[1216]: time="2025-05-16T00:54:17.876520630Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:54:17.876947 env[1216]: time="2025-05-16T00:54:17.876885670Z" level=info msg="Start subscribing containerd event" May 16 00:54:17.876947 env[1216]: time="2025-05-16T00:54:17.876937910Z" level=info msg="Start recovering state" May 16 00:54:17.877012 env[1216]: 
time="2025-05-16T00:54:17.876994230Z" level=info msg="Start event monitor" May 16 00:54:17.877033 env[1216]: time="2025-05-16T00:54:17.877012710Z" level=info msg="Start snapshots syncer" May 16 00:54:17.877033 env[1216]: time="2025-05-16T00:54:17.877022030Z" level=info msg="Start cni network conf syncer for default" May 16 00:54:17.877033 env[1216]: time="2025-05-16T00:54:17.877029430Z" level=info msg="Start streaming server" May 16 00:54:17.877213 env[1216]: time="2025-05-16T00:54:17.877189110Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:54:17.877262 env[1216]: time="2025-05-16T00:54:17.877237790Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:54:17.877262 env[1216]: time="2025-05-16T00:54:17.877275190Z" level=info msg="containerd successfully booted in 0.035446s" May 16 00:54:17.877340 systemd[1]: Started containerd.service. May 16 00:54:17.886987 locksmithd[1245]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:54:18.946906 systemd-networkd[1043]: eth0: Gained IPv6LL May 16 00:54:18.948519 systemd[1]: Finished systemd-networkd-wait-online.service. May 16 00:54:18.949571 systemd[1]: Reached target network-online.target. May 16 00:54:18.951963 systemd[1]: Starting kubelet.service... May 16 00:54:19.530642 systemd[1]: Started kubelet.service. May 16 00:54:19.964877 kubelet[1259]: E0516 00:54:19.964784 1259 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:54:19.966883 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:54:19.967000 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
May 16 00:54:20.488426 sshd_keygen[1210]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:54:20.505729 systemd[1]: Finished sshd-keygen.service. May 16 00:54:20.507865 systemd[1]: Starting issuegen.service... May 16 00:54:20.512540 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:54:20.512715 systemd[1]: Finished issuegen.service. May 16 00:54:20.514896 systemd[1]: Starting systemd-user-sessions.service... May 16 00:54:20.520914 systemd[1]: Finished systemd-user-sessions.service. May 16 00:54:20.522951 systemd[1]: Started getty@tty1.service. May 16 00:54:20.524828 systemd[1]: Started serial-getty@ttyAMA0.service. May 16 00:54:20.525668 systemd[1]: Reached target getty.target. May 16 00:54:20.526463 systemd[1]: Reached target multi-user.target. May 16 00:54:20.528264 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 16 00:54:20.534450 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 16 00:54:20.534603 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 16 00:54:20.535489 systemd[1]: Startup finished in 555ms (kernel) + 4.375s (initrd) + 6.033s (userspace) = 10.964s. May 16 00:54:22.997081 systemd[1]: Created slice system-sshd.slice. May 16 00:54:22.998165 systemd[1]: Started sshd@0-10.0.0.138:22-10.0.0.1:59160.service. May 16 00:54:23.044926 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 59160 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:23.046792 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:23.058125 systemd[1]: Created slice user-500.slice. May 16 00:54:23.059164 systemd[1]: Starting user-runtime-dir@500.service... May 16 00:54:23.060694 systemd-logind[1203]: New session 1 of user core. May 16 00:54:23.066745 systemd[1]: Finished user-runtime-dir@500.service. May 16 00:54:23.068011 systemd[1]: Starting user@500.service... 
May 16 00:54:23.070370 (systemd)[1284]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:23.126577 systemd[1284]: Queued start job for default target default.target. May 16 00:54:23.127009 systemd[1284]: Reached target paths.target. May 16 00:54:23.127041 systemd[1284]: Reached target sockets.target. May 16 00:54:23.127052 systemd[1284]: Reached target timers.target. May 16 00:54:23.127062 systemd[1284]: Reached target basic.target. May 16 00:54:23.127099 systemd[1284]: Reached target default.target. May 16 00:54:23.127121 systemd[1284]: Startup finished in 51ms. May 16 00:54:23.127168 systemd[1]: Started user@500.service. May 16 00:54:23.128045 systemd[1]: Started session-1.scope. May 16 00:54:23.178188 systemd[1]: Started sshd@1-10.0.0.138:22-10.0.0.1:59174.service. May 16 00:54:23.218113 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 59174 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:23.219354 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:23.222695 systemd-logind[1203]: New session 2 of user core. May 16 00:54:23.223893 systemd[1]: Started session-2.scope. May 16 00:54:23.276067 sshd[1293]: pam_unix(sshd:session): session closed for user core May 16 00:54:23.279160 systemd[1]: Started sshd@2-10.0.0.138:22-10.0.0.1:59188.service. May 16 00:54:23.279641 systemd[1]: sshd@1-10.0.0.138:22-10.0.0.1:59174.service: Deactivated successfully. May 16 00:54:23.280276 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:54:23.281849 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. May 16 00:54:23.282721 systemd-logind[1203]: Removed session 2. 
May 16 00:54:23.319340 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 59188 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:23.320401 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:23.323571 systemd-logind[1203]: New session 3 of user core. May 16 00:54:23.323920 systemd[1]: Started session-3.scope. May 16 00:54:23.372961 sshd[1298]: pam_unix(sshd:session): session closed for user core May 16 00:54:23.376478 systemd[1]: sshd@2-10.0.0.138:22-10.0.0.1:59188.service: Deactivated successfully. May 16 00:54:23.377032 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:54:23.377527 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. May 16 00:54:23.378509 systemd[1]: Started sshd@3-10.0.0.138:22-10.0.0.1:59198.service. May 16 00:54:23.379240 systemd-logind[1203]: Removed session 3. May 16 00:54:23.418270 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 59198 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:23.419306 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:23.422572 systemd-logind[1203]: New session 4 of user core. May 16 00:54:23.422965 systemd[1]: Started session-4.scope. May 16 00:54:23.475623 sshd[1305]: pam_unix(sshd:session): session closed for user core May 16 00:54:23.479353 systemd[1]: sshd@3-10.0.0.138:22-10.0.0.1:59198.service: Deactivated successfully. May 16 00:54:23.479893 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:54:23.480401 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. May 16 00:54:23.481417 systemd[1]: Started sshd@4-10.0.0.138:22-10.0.0.1:59202.service. May 16 00:54:23.482107 systemd-logind[1203]: Removed session 4. 
May 16 00:54:23.521204 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 59202 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:54:23.522237 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:54:23.525173 systemd-logind[1203]: New session 5 of user core. May 16 00:54:23.525916 systemd[1]: Started session-5.scope. May 16 00:54:23.583820 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:54:23.584031 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:54:23.594722 systemd[1]: Starting coreos-metadata.service... May 16 00:54:23.600922 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 00:54:23.601153 systemd[1]: Finished coreos-metadata.service. May 16 00:54:24.064524 systemd[1]: Stopped kubelet.service. May 16 00:54:24.066996 systemd[1]: Starting kubelet.service... May 16 00:54:24.090249 systemd[1]: Reloading. May 16 00:54:24.148724 /usr/lib/systemd/system-generators/torcx-generator[1378]: time="2025-05-16T00:54:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:54:24.148781 /usr/lib/systemd/system-generators/torcx-generator[1378]: time="2025-05-16T00:54:24Z" level=info msg="torcx already run" May 16 00:54:24.234911 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:54:24.235050 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 16 00:54:24.250170 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:54:24.311884 systemd[1]: Started kubelet.service. May 16 00:54:24.314599 systemd[1]: Stopping kubelet.service... May 16 00:54:24.315010 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:54:24.315166 systemd[1]: Stopped kubelet.service. May 16 00:54:24.316477 systemd[1]: Starting kubelet.service... May 16 00:54:24.409479 systemd[1]: Started kubelet.service. May 16 00:54:24.441284 kubelet[1421]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:54:24.441284 kubelet[1421]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:54:24.441284 kubelet[1421]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:54:24.441571 kubelet[1421]: I0516 00:54:24.441332 1421 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:54:25.111406 kubelet[1421]: I0516 00:54:25.111342 1421 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 16 00:54:25.111538 kubelet[1421]: I0516 00:54:25.111527 1421 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:54:25.111858 kubelet[1421]: I0516 00:54:25.111837 1421 server.go:956] "Client rotation is on, will bootstrap in background" May 16 00:54:25.166484 kubelet[1421]: I0516 00:54:25.166450 1421 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:54:25.180591 kubelet[1421]: E0516 00:54:25.180552 1421 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:54:25.180591 kubelet[1421]: I0516 00:54:25.180588 1421 server.go:1423] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:54:25.183076 kubelet[1421]: I0516 00:54:25.183059 1421 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:54:25.184309 kubelet[1421]: I0516 00:54:25.184265 1421 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:54:25.184468 kubelet[1421]: I0516 00:54:25.184310 1421 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.138","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:54:25.184537 kubelet[1421]: I0516 00:54:25.184524 1421 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:54:25.184537 
kubelet[1421]: I0516 00:54:25.184533 1421 container_manager_linux.go:303] "Creating device plugin manager" May 16 00:54:25.184737 kubelet[1421]: I0516 00:54:25.184712 1421 state_mem.go:36] "Initialized new in-memory state store" May 16 00:54:25.187419 kubelet[1421]: I0516 00:54:25.187391 1421 kubelet.go:480] "Attempting to sync node with API server" May 16 00:54:25.187472 kubelet[1421]: I0516 00:54:25.187427 1421 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:54:25.187472 kubelet[1421]: I0516 00:54:25.187462 1421 kubelet.go:386] "Adding apiserver pod source" May 16 00:54:25.189071 kubelet[1421]: E0516 00:54:25.189040 1421 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:25.189123 kubelet[1421]: E0516 00:54:25.189107 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:25.189146 kubelet[1421]: I0516 00:54:25.189110 1421 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:54:25.196314 kubelet[1421]: I0516 00:54:25.196262 1421 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:54:25.197119 kubelet[1421]: I0516 00:54:25.197095 1421 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 16 00:54:25.197231 kubelet[1421]: W0516 00:54:25.197221 1421 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 16 00:54:25.199380 kubelet[1421]: I0516 00:54:25.199356 1421 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:54:25.199431 kubelet[1421]: I0516 00:54:25.199400 1421 server.go:1289] "Started kubelet" May 16 00:54:25.199568 kubelet[1421]: I0516 00:54:25.199516 1421 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:54:25.199952 kubelet[1421]: I0516 00:54:25.199928 1421 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:54:25.200089 kubelet[1421]: I0516 00:54:25.200068 1421 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:54:25.201013 kubelet[1421]: I0516 00:54:25.200992 1421 server.go:317] "Adding debug handlers to kubelet server" May 16 00:54:25.201342 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 16 00:54:25.201458 kubelet[1421]: I0516 00:54:25.201435 1421 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:54:25.203191 kubelet[1421]: I0516 00:54:25.203167 1421 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:54:25.206032 kubelet[1421]: E0516 00:54:25.205998 1421 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"10.0.0.138\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" May 16 00:54:25.206616 kubelet[1421]: E0516 00:54:25.206587 1421 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" May 16 
00:54:25.206853 kubelet[1421]: E0516 00:54:25.206837 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:25.206935 kubelet[1421]: I0516 00:54:25.206924 1421 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:54:25.214720 kubelet[1421]: I0516 00:54:25.212827 1421 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:54:25.214720 kubelet[1421]: I0516 00:54:25.213041 1421 reconciler.go:26] "Reconciler: start to sync state" May 16 00:54:25.215437 kubelet[1421]: I0516 00:54:25.215413 1421 factory.go:223] Registration of the systemd container factory successfully May 16 00:54:25.215536 kubelet[1421]: I0516 00:54:25.215511 1421 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:54:25.216797 kubelet[1421]: E0516 00:54:25.216766 1421 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:54:25.217643 kubelet[1421]: I0516 00:54:25.217608 1421 factory.go:223] Registration of the containerd container factory successfully May 16 00:54:25.217735 kubelet[1421]: E0516 00:54:25.216027 1421 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.138.183fdbcf9e0c7816 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.138,UID:10.0.0.138,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.138,},FirstTimestamp:2025-05-16 00:54:25.19937231 +0000 UTC m=+0.786368121,LastTimestamp:2025-05-16 00:54:25.19937231 +0000 UTC m=+0.786368121,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.138,}" May 16 00:54:25.230362 kubelet[1421]: I0516 00:54:25.229861 1421 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:54:25.230619 kubelet[1421]: I0516 00:54:25.230597 1421 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:54:25.232132 kubelet[1421]: I0516 00:54:25.232112 1421 state_mem.go:36] "Initialized new in-memory state store" May 16 00:54:25.247405 kubelet[1421]: E0516 00:54:25.247358 1421 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.138\" not found" node="10.0.0.138" May 16 00:54:25.305704 kubelet[1421]: I0516 00:54:25.305682 1421 policy_none.go:49] "None policy: Start" May 16 00:54:25.305846 kubelet[1421]: I0516 00:54:25.305835 1421 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:54:25.305904 kubelet[1421]: I0516 00:54:25.305895 1421 
state_mem.go:35] "Initializing new in-memory state store" May 16 00:54:25.307436 kubelet[1421]: E0516 00:54:25.307413 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:25.311021 systemd[1]: Created slice kubepods.slice. May 16 00:54:25.313796 kubelet[1421]: I0516 00:54:25.313749 1421 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 16 00:54:25.315264 systemd[1]: Created slice kubepods-burstable.slice. May 16 00:54:25.317853 systemd[1]: Created slice kubepods-besteffort.slice. May 16 00:54:25.329663 kubelet[1421]: E0516 00:54:25.329620 1421 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 16 00:54:25.329838 kubelet[1421]: I0516 00:54:25.329813 1421 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:54:25.329876 kubelet[1421]: I0516 00:54:25.329834 1421 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:54:25.330504 kubelet[1421]: I0516 00:54:25.330368 1421 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:54:25.330738 kubelet[1421]: E0516 00:54:25.330708 1421 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 00:54:25.330868 kubelet[1421]: E0516 00:54:25.330854 1421 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.138\" not found" May 16 00:54:25.376680 kubelet[1421]: I0516 00:54:25.376595 1421 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" May 16 00:54:25.376680 kubelet[1421]: I0516 00:54:25.376627 1421 status_manager.go:230] "Starting to sync pod status with apiserver" May 16 00:54:25.376680 kubelet[1421]: I0516 00:54:25.376647 1421 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 16 00:54:25.376680 kubelet[1421]: I0516 00:54:25.376654 1421 kubelet.go:2436] "Starting kubelet main sync loop" May 16 00:54:25.376871 kubelet[1421]: E0516 00:54:25.376696 1421 kubelet.go:2460] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 16 00:54:25.430632 kubelet[1421]: I0516 00:54:25.430602 1421 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.138" May 16 00:54:25.434453 kubelet[1421]: I0516 00:54:25.434432 1421 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.138" May 16 00:54:25.434453 kubelet[1421]: E0516 00:54:25.434459 1421 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.138\": node \"10.0.0.138\" not found" May 16 00:54:25.445764 kubelet[1421]: E0516 00:54:25.445736 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:25.546719 kubelet[1421]: E0516 00:54:25.546683 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:25.647096 kubelet[1421]: E0516 00:54:25.647018 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:25.747863 kubelet[1421]: E0516 00:54:25.747838 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:25.848351 kubelet[1421]: E0516 00:54:25.848327 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 
00:54:25.931144 sudo[1314]: pam_unix(sudo:session): session closed for user root May 16 00:54:25.933978 sshd[1311]: pam_unix(sshd:session): session closed for user core May 16 00:54:25.936121 systemd[1]: sshd@4-10.0.0.138:22-10.0.0.1:59202.service: Deactivated successfully. May 16 00:54:25.936788 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:54:25.937882 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. May 16 00:54:25.938556 systemd-logind[1203]: Removed session 5. May 16 00:54:25.948880 kubelet[1421]: E0516 00:54:25.948816 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:26.049440 kubelet[1421]: E0516 00:54:26.049395 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:26.113631 kubelet[1421]: I0516 00:54:26.113596 1421 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 16 00:54:26.113927 kubelet[1421]: I0516 00:54:26.113902 1421 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" May 16 00:54:26.114064 kubelet[1421]: I0516 00:54:26.113908 1421 reflector.go:556] "Warning: watch ended with error" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" err="very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received" May 16 00:54:26.150191 kubelet[1421]: E0516 00:54:26.150158 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:26.189654 kubelet[1421]: E0516 00:54:26.189581 1421 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:26.250677 kubelet[1421]: E0516 00:54:26.250637 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:26.351378 kubelet[1421]: E0516 00:54:26.351336 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:26.452093 kubelet[1421]: E0516 00:54:26.452019 1421 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.138\" not found" May 16 00:54:26.553562 kubelet[1421]: I0516 00:54:26.553509 1421 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 16 00:54:26.553928 env[1216]: time="2025-05-16T00:54:26.553870270Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:54:26.554153 kubelet[1421]: I0516 00:54:26.554095 1421 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 16 00:54:27.190383 kubelet[1421]: E0516 00:54:27.190356 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:27.190566 kubelet[1421]: I0516 00:54:27.190527 1421 apiserver.go:52] "Watching apiserver" May 16 00:54:27.204011 systemd[1]: Created slice kubepods-besteffort-pod2f6bc126_63f9_4680_8f19_2d2b92dcc416.slice. May 16 00:54:27.215364 kubelet[1421]: I0516 00:54:27.215332 1421 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:54:27.216707 systemd[1]: Created slice kubepods-burstable-podd3db3bbd_35cc_44f9_b6e4_37d771d1030c.slice. 
May 16 00:54:27.222858 kubelet[1421]: I0516 00:54:27.222814 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2f6bc126-63f9-4680-8f19-2d2b92dcc416-lib-modules\") pod \"kube-proxy-zbkq5\" (UID: \"2f6bc126-63f9-4680-8f19-2d2b92dcc416\") " pod="kube-system/kube-proxy-zbkq5" May 16 00:54:27.222937 kubelet[1421]: I0516 00:54:27.222864 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cni-path\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.222937 kubelet[1421]: I0516 00:54:27.222897 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mgsws\" (UniqueName: \"kubernetes.io/projected/2f6bc126-63f9-4680-8f19-2d2b92dcc416-kube-api-access-mgsws\") pod \"kube-proxy-zbkq5\" (UID: \"2f6bc126-63f9-4680-8f19-2d2b92dcc416\") " pod="kube-system/kube-proxy-zbkq5" May 16 00:54:27.222937 kubelet[1421]: I0516 00:54:27.222926 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-xtables-lock\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223039 kubelet[1421]: I0516 00:54:27.222980 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-hubble-tls\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223039 kubelet[1421]: I0516 00:54:27.222996 1421 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6z7kc\" (UniqueName: \"kubernetes.io/projected/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-kube-api-access-6z7kc\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223039 kubelet[1421]: I0516 00:54:27.223011 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2f6bc126-63f9-4680-8f19-2d2b92dcc416-kube-proxy\") pod \"kube-proxy-zbkq5\" (UID: \"2f6bc126-63f9-4680-8f19-2d2b92dcc416\") " pod="kube-system/kube-proxy-zbkq5" May 16 00:54:27.223039 kubelet[1421]: I0516 00:54:27.223024 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-run\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223160 kubelet[1421]: I0516 00:54:27.223040 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-bpf-maps\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223160 kubelet[1421]: I0516 00:54:27.223055 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-cgroup\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223160 kubelet[1421]: I0516 00:54:27.223069 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-etc-cni-netd\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223160 kubelet[1421]: I0516 00:54:27.223083 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-host-proc-sys-net\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223160 kubelet[1421]: I0516 00:54:27.223099 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-host-proc-sys-kernel\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223160 kubelet[1421]: I0516 00:54:27.223113 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2f6bc126-63f9-4680-8f19-2d2b92dcc416-xtables-lock\") pod \"kube-proxy-zbkq5\" (UID: \"2f6bc126-63f9-4680-8f19-2d2b92dcc416\") " pod="kube-system/kube-proxy-zbkq5" May 16 00:54:27.223273 kubelet[1421]: I0516 00:54:27.223126 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-hostproc\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223273 kubelet[1421]: I0516 00:54:27.223138 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-lib-modules\") pod \"cilium-zvflk\" (UID: 
\"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223273 kubelet[1421]: I0516 00:54:27.223151 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-clustermesh-secrets\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.223273 kubelet[1421]: I0516 00:54:27.223165 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-config-path\") pod \"cilium-zvflk\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") " pod="kube-system/cilium-zvflk" May 16 00:54:27.324538 kubelet[1421]: I0516 00:54:27.324490 1421 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 16 00:54:27.515951 kubelet[1421]: E0516 00:54:27.515851 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:27.517588 env[1216]: time="2025-05-16T00:54:27.517540150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbkq5,Uid:2f6bc126-63f9-4680-8f19-2d2b92dcc416,Namespace:kube-system,Attempt:0,}" May 16 00:54:27.527003 kubelet[1421]: E0516 00:54:27.526962 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:27.528253 env[1216]: time="2025-05-16T00:54:27.527604150Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-zvflk,Uid:d3db3bbd-35cc-44f9-b6e4-37d771d1030c,Namespace:kube-system,Attempt:0,}" May 16 00:54:28.089787 env[1216]: time="2025-05-16T00:54:28.089739310Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:28.090694 env[1216]: time="2025-05-16T00:54:28.090665590Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:28.092895 env[1216]: time="2025-05-16T00:54:28.092864630Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:28.094388 env[1216]: time="2025-05-16T00:54:28.094361270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:28.096097 env[1216]: time="2025-05-16T00:54:28.096067270Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:28.098373 env[1216]: time="2025-05-16T00:54:28.098348510Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:28.099901 env[1216]: time="2025-05-16T00:54:28.099875590Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:28.101431 env[1216]: time="2025-05-16T00:54:28.101404590Z" level=info 
msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:28.128518 env[1216]: time="2025-05-16T00:54:28.128452470Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:54:28.128662 env[1216]: time="2025-05-16T00:54:28.128508110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:54:28.128662 env[1216]: time="2025-05-16T00:54:28.128520310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:54:28.128883 env[1216]: time="2025-05-16T00:54:28.128814510Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:54:28.128883 env[1216]: time="2025-05-16T00:54:28.128857310Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:54:28.128995 env[1216]: time="2025-05-16T00:54:28.128867750Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:54:28.129047 env[1216]: time="2025-05-16T00:54:28.128925670Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834 pid=1490 runtime=io.containerd.runc.v2 May 16 00:54:28.129877 env[1216]: time="2025-05-16T00:54:28.129840470Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a3a9f068bb97672a201a1b78e827414db3587ce5f18eaa5029e362b74c9c1b8c pid=1491 runtime=io.containerd.runc.v2 May 16 00:54:28.154915 systemd[1]: Started cri-containerd-a3a9f068bb97672a201a1b78e827414db3587ce5f18eaa5029e362b74c9c1b8c.scope. May 16 00:54:28.160288 systemd[1]: Started cri-containerd-40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834.scope. May 16 00:54:28.191281 kubelet[1421]: E0516 00:54:28.191250 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:28.197282 env[1216]: time="2025-05-16T00:54:28.197247590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zbkq5,Uid:2f6bc126-63f9-4680-8f19-2d2b92dcc416,Namespace:kube-system,Attempt:0,} returns sandbox id \"a3a9f068bb97672a201a1b78e827414db3587ce5f18eaa5029e362b74c9c1b8c\"" May 16 00:54:28.198905 kubelet[1421]: E0516 00:54:28.198374 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:28.199684 env[1216]: time="2025-05-16T00:54:28.199655390Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\"" May 16 00:54:28.206250 env[1216]: time="2025-05-16T00:54:28.206213070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zvflk,Uid:d3db3bbd-35cc-44f9-b6e4-37d771d1030c,Namespace:kube-system,Attempt:0,} returns sandbox 
id \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\"" May 16 00:54:28.206675 kubelet[1421]: E0516 00:54:28.206659 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:28.330269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1584320576.mount: Deactivated successfully. May 16 00:54:29.191664 kubelet[1421]: E0516 00:54:29.191604 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:29.235626 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2500826217.mount: Deactivated successfully. May 16 00:54:29.709428 env[1216]: time="2025-05-16T00:54:29.709373470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:29.710604 env[1216]: time="2025-05-16T00:54:29.710569550Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:29.711891 env[1216]: time="2025-05-16T00:54:29.711857950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.33.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:29.713036 env[1216]: time="2025-05-16T00:54:29.713009830Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:29.713785 env[1216]: time="2025-05-16T00:54:29.713732790Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference 
\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\"" May 16 00:54:29.717354 env[1216]: time="2025-05-16T00:54:29.717314910Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:54:29.718846 env[1216]: time="2025-05-16T00:54:29.718808630Z" level=info msg="CreateContainer within sandbox \"a3a9f068bb97672a201a1b78e827414db3587ce5f18eaa5029e362b74c9c1b8c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:54:29.729832 env[1216]: time="2025-05-16T00:54:29.729794510Z" level=info msg="CreateContainer within sandbox \"a3a9f068bb97672a201a1b78e827414db3587ce5f18eaa5029e362b74c9c1b8c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2c6558cb7304d68b5a5ee41b4025244a264f2ec9d4153519e371b6edb4796f31\"" May 16 00:54:29.730510 env[1216]: time="2025-05-16T00:54:29.730485030Z" level=info msg="StartContainer for \"2c6558cb7304d68b5a5ee41b4025244a264f2ec9d4153519e371b6edb4796f31\"" May 16 00:54:29.748258 systemd[1]: Started cri-containerd-2c6558cb7304d68b5a5ee41b4025244a264f2ec9d4153519e371b6edb4796f31.scope. 
May 16 00:54:29.784779 env[1216]: time="2025-05-16T00:54:29.784727950Z" level=info msg="StartContainer for \"2c6558cb7304d68b5a5ee41b4025244a264f2ec9d4153519e371b6edb4796f31\" returns successfully" May 16 00:54:30.192510 kubelet[1421]: E0516 00:54:30.192470 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:30.387371 kubelet[1421]: E0516 00:54:30.386951 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:30.396271 kubelet[1421]: I0516 00:54:30.396203 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zbkq5" podStartSLOduration=3.87951903 podStartE2EDuration="5.39618859s" podCreationTimestamp="2025-05-16 00:54:25 +0000 UTC" firstStartedPulling="2025-05-16 00:54:28.19933507 +0000 UTC m=+3.786330841" lastFinishedPulling="2025-05-16 00:54:29.71600459 +0000 UTC m=+5.303000401" observedRunningTime="2025-05-16 00:54:30.39536043 +0000 UTC m=+5.982356241" watchObservedRunningTime="2025-05-16 00:54:30.39618859 +0000 UTC m=+5.983184401" May 16 00:54:31.193305 kubelet[1421]: E0516 00:54:31.193269 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:31.388031 kubelet[1421]: E0516 00:54:31.387985 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:32.193824 kubelet[1421]: E0516 00:54:32.193781 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:33.193954 kubelet[1421]: E0516 00:54:33.193900 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 
00:54:34.194533 kubelet[1421]: E0516 00:54:34.194488 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:34.232169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2091730412.mount: Deactivated successfully. May 16 00:54:35.194640 kubelet[1421]: E0516 00:54:35.194599 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:36.195774 kubelet[1421]: E0516 00:54:36.195700 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:36.324724 env[1216]: time="2025-05-16T00:54:36.324671910Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:36.326173 env[1216]: time="2025-05-16T00:54:36.326136350Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:36.327698 env[1216]: time="2025-05-16T00:54:36.327665150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:36.328221 env[1216]: time="2025-05-16T00:54:36.328186110Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 16 00:54:36.332207 env[1216]: time="2025-05-16T00:54:36.332168630Z" level=info msg="CreateContainer within sandbox 
\"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:54:36.339399 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount823855538.mount: Deactivated successfully. May 16 00:54:36.343594 env[1216]: time="2025-05-16T00:54:36.343553590Z" level=info msg="CreateContainer within sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\"" May 16 00:54:36.344300 env[1216]: time="2025-05-16T00:54:36.344229550Z" level=info msg="StartContainer for \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\"" May 16 00:54:36.357840 systemd[1]: Started cri-containerd-15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653.scope. May 16 00:54:36.400089 env[1216]: time="2025-05-16T00:54:36.396787230Z" level=info msg="StartContainer for \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\" returns successfully" May 16 00:54:36.449133 systemd[1]: cri-containerd-15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653.scope: Deactivated successfully. 
May 16 00:54:36.594250 env[1216]: time="2025-05-16T00:54:36.594203590Z" level=info msg="shim disconnected" id=15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653 May 16 00:54:36.594454 env[1216]: time="2025-05-16T00:54:36.594435430Z" level=warning msg="cleaning up after shim disconnected" id=15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653 namespace=k8s.io May 16 00:54:36.594510 env[1216]: time="2025-05-16T00:54:36.594497910Z" level=info msg="cleaning up dead shim" May 16 00:54:36.600475 env[1216]: time="2025-05-16T00:54:36.600444790Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:54:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1788 runtime=io.containerd.runc.v2\n" May 16 00:54:37.195961 kubelet[1421]: E0516 00:54:37.195908 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:37.337970 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653-rootfs.mount: Deactivated successfully. 
May 16 00:54:37.401733 kubelet[1421]: E0516 00:54:37.401686 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:37.408146 env[1216]: time="2025-05-16T00:54:37.408099070Z" level=info msg="CreateContainer within sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:54:37.421156 env[1216]: time="2025-05-16T00:54:37.421103270Z" level=info msg="CreateContainer within sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\"" May 16 00:54:37.421647 env[1216]: time="2025-05-16T00:54:37.421607590Z" level=info msg="StartContainer for \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\"" May 16 00:54:37.437971 systemd[1]: Started cri-containerd-4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406.scope. May 16 00:54:37.466826 env[1216]: time="2025-05-16T00:54:37.466718750Z" level=info msg="StartContainer for \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\" returns successfully" May 16 00:54:37.479281 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:54:37.479464 systemd[1]: Stopped systemd-sysctl.service. May 16 00:54:37.479619 systemd[1]: Stopping systemd-sysctl.service... May 16 00:54:37.480990 systemd[1]: Starting systemd-sysctl.service... May 16 00:54:37.484051 systemd[1]: cri-containerd-4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406.scope: Deactivated successfully. May 16 00:54:37.489460 systemd[1]: Finished systemd-sysctl.service. 
May 16 00:54:37.502239 env[1216]: time="2025-05-16T00:54:37.502200150Z" level=info msg="shim disconnected" id=4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406 May 16 00:54:37.502391 env[1216]: time="2025-05-16T00:54:37.502240510Z" level=warning msg="cleaning up after shim disconnected" id=4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406 namespace=k8s.io May 16 00:54:37.502391 env[1216]: time="2025-05-16T00:54:37.502252630Z" level=info msg="cleaning up dead shim" May 16 00:54:37.508110 env[1216]: time="2025-05-16T00:54:37.508082550Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:54:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1851 runtime=io.containerd.runc.v2\n" May 16 00:54:38.196683 kubelet[1421]: E0516 00:54:38.196605 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:38.337795 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406-rootfs.mount: Deactivated successfully. May 16 00:54:38.405241 kubelet[1421]: E0516 00:54:38.405008 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:38.408300 env[1216]: time="2025-05-16T00:54:38.408258830Z" level=info msg="CreateContainer within sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:54:38.418660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2286887097.mount: Deactivated successfully. 
May 16 00:54:38.427657 env[1216]: time="2025-05-16T00:54:38.427605910Z" level=info msg="CreateContainer within sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\"" May 16 00:54:38.428246 env[1216]: time="2025-05-16T00:54:38.428140350Z" level=info msg="StartContainer for \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\"" May 16 00:54:38.442410 systemd[1]: Started cri-containerd-f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d.scope. May 16 00:54:38.475854 env[1216]: time="2025-05-16T00:54:38.473487110Z" level=info msg="StartContainer for \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\" returns successfully" May 16 00:54:38.481483 systemd[1]: cri-containerd-f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d.scope: Deactivated successfully. May 16 00:54:38.500100 env[1216]: time="2025-05-16T00:54:38.500044230Z" level=info msg="shim disconnected" id=f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d May 16 00:54:38.500100 env[1216]: time="2025-05-16T00:54:38.500088630Z" level=warning msg="cleaning up after shim disconnected" id=f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d namespace=k8s.io May 16 00:54:38.500100 env[1216]: time="2025-05-16T00:54:38.500098750Z" level=info msg="cleaning up dead shim" May 16 00:54:38.506107 env[1216]: time="2025-05-16T00:54:38.506076390Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:54:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1910 runtime=io.containerd.runc.v2\n" May 16 00:54:39.197516 kubelet[1421]: E0516 00:54:39.197479 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:39.407972 kubelet[1421]: E0516 00:54:39.407903 1421 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:39.411153 env[1216]: time="2025-05-16T00:54:39.411114950Z" level=info msg="CreateContainer within sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:54:39.421911 env[1216]: time="2025-05-16T00:54:39.421869590Z" level=info msg="CreateContainer within sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\"" May 16 00:54:39.423178 env[1216]: time="2025-05-16T00:54:39.423094310Z" level=info msg="StartContainer for \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\"" May 16 00:54:39.438872 systemd[1]: Started cri-containerd-9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956.scope. May 16 00:54:39.464827 systemd[1]: cri-containerd-9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956.scope: Deactivated successfully. 
May 16 00:54:39.465889 env[1216]: time="2025-05-16T00:54:39.465822750Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd3db3bbd_35cc_44f9_b6e4_37d771d1030c.slice/cri-containerd-9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956.scope/memory.events\": no such file or directory" May 16 00:54:39.467391 env[1216]: time="2025-05-16T00:54:39.467354950Z" level=info msg="StartContainer for \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\" returns successfully" May 16 00:54:39.484794 env[1216]: time="2025-05-16T00:54:39.484737510Z" level=info msg="shim disconnected" id=9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956 May 16 00:54:39.484979 env[1216]: time="2025-05-16T00:54:39.484959270Z" level=warning msg="cleaning up after shim disconnected" id=9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956 namespace=k8s.io May 16 00:54:39.485038 env[1216]: time="2025-05-16T00:54:39.485024870Z" level=info msg="cleaning up dead shim" May 16 00:54:39.495238 env[1216]: time="2025-05-16T00:54:39.495203830Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:54:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1966 runtime=io.containerd.runc.v2\n" May 16 00:54:40.198146 kubelet[1421]: E0516 00:54:40.198102 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:40.337880 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956-rootfs.mount: Deactivated successfully. 
May 16 00:54:40.412154 kubelet[1421]: E0516 00:54:40.412126 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:40.415552 env[1216]: time="2025-05-16T00:54:40.415499430Z" level=info msg="CreateContainer within sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:54:40.425667 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount345834109.mount: Deactivated successfully. May 16 00:54:40.432117 env[1216]: time="2025-05-16T00:54:40.432079630Z" level=info msg="CreateContainer within sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\"" May 16 00:54:40.432683 env[1216]: time="2025-05-16T00:54:40.432651630Z" level=info msg="StartContainer for \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\"" May 16 00:54:40.445168 systemd[1]: Started cri-containerd-a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa.scope. May 16 00:54:40.480793 env[1216]: time="2025-05-16T00:54:40.480693350Z" level=info msg="StartContainer for \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\" returns successfully" May 16 00:54:40.575037 kubelet[1421]: I0516 00:54:40.574309 1421 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 00:54:40.722803 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 16 00:54:40.945782 kernel: Initializing XFRM netlink socket May 16 00:54:40.948779 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 16 00:54:41.199266 kubelet[1421]: E0516 00:54:41.199167 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:41.415450 kubelet[1421]: E0516 00:54:41.415417 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:41.429846 kubelet[1421]: I0516 00:54:41.429791 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zvflk" podStartSLOduration=8.30799495 podStartE2EDuration="16.42977547s" podCreationTimestamp="2025-05-16 00:54:25 +0000 UTC" firstStartedPulling="2025-05-16 00:54:28.20719639 +0000 UTC m=+3.794192201" lastFinishedPulling="2025-05-16 00:54:36.32897691 +0000 UTC m=+11.915972721" observedRunningTime="2025-05-16 00:54:41.42908979 +0000 UTC m=+17.016085601" watchObservedRunningTime="2025-05-16 00:54:41.42977547 +0000 UTC m=+17.016771281" May 16 00:54:41.853767 systemd[1]: Created slice kubepods-besteffort-pod490fbcc9_2cef_437b_957b_642fbcbb7563.slice. 
May 16 00:54:41.904842 kubelet[1421]: I0516 00:54:41.904798 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jnbr\" (UniqueName: \"kubernetes.io/projected/490fbcc9-2cef-437b-957b-642fbcbb7563-kube-api-access-6jnbr\") pod \"nginx-deployment-7fcdb87857-9x449\" (UID: \"490fbcc9-2cef-437b-957b-642fbcbb7563\") " pod="default/nginx-deployment-7fcdb87857-9x449" May 16 00:54:42.156818 env[1216]: time="2025-05-16T00:54:42.156698990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-9x449,Uid:490fbcc9-2cef-437b-957b-642fbcbb7563,Namespace:default,Attempt:0,}" May 16 00:54:42.200339 kubelet[1421]: E0516 00:54:42.200306 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:42.416600 kubelet[1421]: E0516 00:54:42.416509 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:42.552711 systemd-networkd[1043]: cilium_host: Link UP May 16 00:54:42.552858 systemd-networkd[1043]: cilium_net: Link UP May 16 00:54:42.557883 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 16 00:54:42.557948 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 16 00:54:42.557235 systemd-networkd[1043]: cilium_net: Gained carrier May 16 00:54:42.557388 systemd-networkd[1043]: cilium_host: Gained carrier May 16 00:54:42.557474 systemd-networkd[1043]: cilium_net: Gained IPv6LL May 16 00:54:42.557578 systemd-networkd[1043]: cilium_host: Gained IPv6LL May 16 00:54:42.630377 systemd-networkd[1043]: cilium_vxlan: Link UP May 16 00:54:42.630384 systemd-networkd[1043]: cilium_vxlan: Gained carrier May 16 00:54:42.907777 kernel: NET: Registered PF_ALG protocol family May 16 00:54:43.201430 kubelet[1421]: E0516 00:54:43.201310 1421 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:43.417971 kubelet[1421]: E0516 00:54:43.417935 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:43.450024 systemd-networkd[1043]: lxc_health: Link UP May 16 00:54:43.460492 systemd-networkd[1043]: lxc_health: Gained carrier May 16 00:54:43.461162 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:54:43.700660 systemd-networkd[1043]: lxc5aa779c86e78: Link UP May 16 00:54:43.708886 kernel: eth0: renamed from tmpb4957 May 16 00:54:43.717785 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:54:43.717861 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5aa779c86e78: link becomes ready May 16 00:54:43.717119 systemd-networkd[1043]: lxc5aa779c86e78: Gained carrier May 16 00:54:44.202333 kubelet[1421]: E0516 00:54:44.202290 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:44.419048 kubelet[1421]: E0516 00:54:44.418996 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:44.611136 systemd-networkd[1043]: cilium_vxlan: Gained IPv6LL May 16 00:54:45.059274 systemd-networkd[1043]: lxc5aa779c86e78: Gained IPv6LL May 16 00:54:45.059506 systemd-networkd[1043]: lxc_health: Gained IPv6LL May 16 00:54:45.187710 kubelet[1421]: E0516 00:54:45.187660 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:45.203108 kubelet[1421]: E0516 00:54:45.203084 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:46.204110 
kubelet[1421]: E0516 00:54:46.204046 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:47.160656 env[1216]: time="2025-05-16T00:54:47.160579990Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:54:47.160656 env[1216]: time="2025-05-16T00:54:47.160624030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:54:47.160656 env[1216]: time="2025-05-16T00:54:47.160635430Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:54:47.161020 env[1216]: time="2025-05-16T00:54:47.160789790Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b49578e01b8c9ae02aa93e5b3f512009e5c046d5114e47d93eeb9f031e02f7f5 pid=2498 runtime=io.containerd.runc.v2 May 16 00:54:47.175832 systemd[1]: run-containerd-runc-k8s.io-b49578e01b8c9ae02aa93e5b3f512009e5c046d5114e47d93eeb9f031e02f7f5-runc.KUtMQK.mount: Deactivated successfully. May 16 00:54:47.178537 systemd[1]: Started cri-containerd-b49578e01b8c9ae02aa93e5b3f512009e5c046d5114e47d93eeb9f031e02f7f5.scope. 
May 16 00:54:47.204320 kubelet[1421]: E0516 00:54:47.204292 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:47.247290 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:54:47.260459 env[1216]: time="2025-05-16T00:54:47.260417350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-9x449,Uid:490fbcc9-2cef-437b-957b-642fbcbb7563,Namespace:default,Attempt:0,} returns sandbox id \"b49578e01b8c9ae02aa93e5b3f512009e5c046d5114e47d93eeb9f031e02f7f5\"" May 16 00:54:47.261664 env[1216]: time="2025-05-16T00:54:47.261635030Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 00:54:48.205566 kubelet[1421]: E0516 00:54:48.205496 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:49.206604 kubelet[1421]: E0516 00:54:49.206556 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:49.238460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1204180116.mount: Deactivated successfully. 
May 16 00:54:50.207395 kubelet[1421]: E0516 00:54:50.207338 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:50.454724 env[1216]: time="2025-05-16T00:54:50.454677725Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:50.455964 env[1216]: time="2025-05-16T00:54:50.455938469Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:50.457980 env[1216]: time="2025-05-16T00:54:50.457794526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:50.460334 env[1216]: time="2025-05-16T00:54:50.460310134Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:54:50.461174 env[1216]: time="2025-05-16T00:54:50.461149643Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 16 00:54:50.464774 env[1216]: time="2025-05-16T00:54:50.464721757Z" level=info msg="CreateContainer within sandbox \"b49578e01b8c9ae02aa93e5b3f512009e5c046d5114e47d93eeb9f031e02f7f5\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 16 00:54:50.474608 env[1216]: time="2025-05-16T00:54:50.474578631Z" level=info msg="CreateContainer within sandbox \"b49578e01b8c9ae02aa93e5b3f512009e5c046d5114e47d93eeb9f031e02f7f5\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"28d83894d9eaed82df11bde910968e5eb026af2257d3876f0732c156c464eddd\"" May 16 00:54:50.475161 env[1216]: time="2025-05-16T00:54:50.475136984Z" level=info msg="StartContainer for \"28d83894d9eaed82df11bde910968e5eb026af2257d3876f0732c156c464eddd\"" May 16 00:54:50.493053 systemd[1]: run-containerd-runc-k8s.io-28d83894d9eaed82df11bde910968e5eb026af2257d3876f0732c156c464eddd-runc.rbWtGD.mount: Deactivated successfully. May 16 00:54:50.494343 systemd[1]: Started cri-containerd-28d83894d9eaed82df11bde910968e5eb026af2257d3876f0732c156c464eddd.scope. May 16 00:54:50.529736 env[1216]: time="2025-05-16T00:54:50.529656968Z" level=info msg="StartContainer for \"28d83894d9eaed82df11bde910968e5eb026af2257d3876f0732c156c464eddd\" returns successfully" May 16 00:54:51.208331 kubelet[1421]: E0516 00:54:51.208264 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:52.208935 kubelet[1421]: E0516 00:54:52.208901 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:52.911920 kubelet[1421]: I0516 00:54:52.911882 1421 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:54:52.912380 kubelet[1421]: E0516 00:54:52.912301 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:52.926683 kubelet[1421]: I0516 00:54:52.926616 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-9x449" podStartSLOduration=8.725491236 podStartE2EDuration="11.926601915s" podCreationTimestamp="2025-05-16 00:54:41 +0000 UTC" firstStartedPulling="2025-05-16 00:54:47.26115411 +0000 UTC m=+22.848149921" lastFinishedPulling="2025-05-16 00:54:50.462264829 +0000 UTC m=+26.049260600" observedRunningTime="2025-05-16 00:54:51.435468939 +0000 
UTC m=+27.022464750" watchObservedRunningTime="2025-05-16 00:54:52.926601915 +0000 UTC m=+28.513597726" May 16 00:54:53.209988 kubelet[1421]: E0516 00:54:53.209866 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:53.310273 systemd[1]: Created slice kubepods-besteffort-pod50403525_a697_4f63_a219_b6536277c2dd.slice. May 16 00:54:53.368712 kubelet[1421]: I0516 00:54:53.368661 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qv6p9\" (UniqueName: \"kubernetes.io/projected/50403525-a697-4f63-a219-b6536277c2dd-kube-api-access-qv6p9\") pod \"nfs-server-provisioner-0\" (UID: \"50403525-a697-4f63-a219-b6536277c2dd\") " pod="default/nfs-server-provisioner-0" May 16 00:54:53.368864 kubelet[1421]: I0516 00:54:53.368773 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/50403525-a697-4f63-a219-b6536277c2dd-data\") pod \"nfs-server-provisioner-0\" (UID: \"50403525-a697-4f63-a219-b6536277c2dd\") " pod="default/nfs-server-provisioner-0" May 16 00:54:53.431562 kubelet[1421]: E0516 00:54:53.431531 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:54:53.612870 env[1216]: time="2025-05-16T00:54:53.612827748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:50403525-a697-4f63-a219-b6536277c2dd,Namespace:default,Attempt:0,}" May 16 00:54:53.634547 systemd-networkd[1043]: lxca0f7ec2748ac: Link UP May 16 00:54:53.646785 kernel: eth0: renamed from tmpc2fdc May 16 00:54:53.657301 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:54:53.657388 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca0f7ec2748ac: link becomes ready May 16 00:54:53.657417 
systemd-networkd[1043]: lxca0f7ec2748ac: Gained carrier May 16 00:54:53.827627 env[1216]: time="2025-05-16T00:54:53.827544289Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:54:53.827627 env[1216]: time="2025-05-16T00:54:53.827587688Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:54:53.827627 env[1216]: time="2025-05-16T00:54:53.827598568Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:54:53.828136 env[1216]: time="2025-05-16T00:54:53.828076283Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c2fdc67a0edd8cf9f9e81721fd763690b447952ce3aa227c6574130cdb617c5c pid=2625 runtime=io.containerd.runc.v2 May 16 00:54:53.841224 systemd[1]: Started cri-containerd-c2fdc67a0edd8cf9f9e81721fd763690b447952ce3aa227c6574130cdb617c5c.scope. 
May 16 00:54:53.877946 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:54:53.892739 env[1216]: time="2025-05-16T00:54:53.892701603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:50403525-a697-4f63-a219-b6536277c2dd,Namespace:default,Attempt:0,} returns sandbox id \"c2fdc67a0edd8cf9f9e81721fd763690b447952ce3aa227c6574130cdb617c5c\"" May 16 00:54:53.894390 env[1216]: time="2025-05-16T00:54:53.894363906Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 16 00:54:54.210827 kubelet[1421]: E0516 00:54:54.210715 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:54.481025 systemd[1]: run-containerd-runc-k8s.io-c2fdc67a0edd8cf9f9e81721fd763690b447952ce3aa227c6574130cdb617c5c-runc.ihRgeO.mount: Deactivated successfully. May 16 00:54:55.106869 systemd-networkd[1043]: lxca0f7ec2748ac: Gained IPv6LL May 16 00:54:55.210925 kubelet[1421]: E0516 00:54:55.210859 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:54:56.068695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346187169.mount: Deactivated successfully. 
May 16 00:54:56.211016 kubelet[1421]: E0516 00:54:56.210948 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:54:57.212055 kubelet[1421]: E0516 00:54:57.211984 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:54:57.801399 env[1216]: time="2025-05-16T00:54:57.801328572Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:54:57.802694 env[1216]: time="2025-05-16T00:54:57.802659601Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:54:57.804459 env[1216]: time="2025-05-16T00:54:57.804430426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:54:57.805799 env[1216]: time="2025-05-16T00:54:57.805776335Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:54:57.806546 env[1216]: time="2025-05-16T00:54:57.806504849Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\""
May 16 00:54:57.810644 env[1216]: time="2025-05-16T00:54:57.810440098Z" level=info msg="CreateContainer within sandbox \"c2fdc67a0edd8cf9f9e81721fd763690b447952ce3aa227c6574130cdb617c5c\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
May 16 00:54:57.819908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount815116382.mount: Deactivated successfully.
May 16 00:54:57.824107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1684374827.mount: Deactivated successfully.
May 16 00:54:57.826508 env[1216]: time="2025-05-16T00:54:57.826466167Z" level=info msg="CreateContainer within sandbox \"c2fdc67a0edd8cf9f9e81721fd763690b447952ce3aa227c6574130cdb617c5c\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"fa03af8f3f787ff4089c4a2171bb74ef11e5060079c5f24f439a31a60db75951\""
May 16 00:54:57.827140 env[1216]: time="2025-05-16T00:54:57.827115842Z" level=info msg="StartContainer for \"fa03af8f3f787ff4089c4a2171bb74ef11e5060079c5f24f439a31a60db75951\""
May 16 00:54:57.844907 systemd[1]: Started cri-containerd-fa03af8f3f787ff4089c4a2171bb74ef11e5060079c5f24f439a31a60db75951.scope.
May 16 00:54:57.888771 env[1216]: time="2025-05-16T00:54:57.887945868Z" level=info msg="StartContainer for \"fa03af8f3f787ff4089c4a2171bb74ef11e5060079c5f24f439a31a60db75951\" returns successfully"
May 16 00:54:58.212992 kubelet[1421]: E0516 00:54:58.212898 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:54:58.451238 kubelet[1421]: I0516 00:54:58.451163 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.537459107 podStartE2EDuration="5.451148718s" podCreationTimestamp="2025-05-16 00:54:53 +0000 UTC" firstStartedPulling="2025-05-16 00:54:53.894025829 +0000 UTC m=+29.481021600" lastFinishedPulling="2025-05-16 00:54:57.8077154 +0000 UTC m=+33.394711211" observedRunningTime="2025-05-16 00:54:58.450959959 +0000 UTC m=+34.037955770" watchObservedRunningTime="2025-05-16 00:54:58.451148718 +0000 UTC m=+34.038144529"
May 16 00:54:59.213834 kubelet[1421]: E0516 00:54:59.213792 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:00.214599 kubelet[1421]: E0516 00:55:00.214547 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:01.215045 kubelet[1421]: E0516 00:55:01.214998 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:02.215446 kubelet[1421]: E0516 00:55:02.215379 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:03.203069 systemd[1]: Created slice kubepods-besteffort-pod6d38fdee_1d4d_49dc_b2b0_d6ead18eb294.slice.
May 16 00:55:03.216407 kubelet[1421]: E0516 00:55:03.216375 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:03.224205 kubelet[1421]: I0516 00:55:03.224176 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-302abc91-7e8e-432c-b3db-36a65482e1a6\" (UniqueName: \"kubernetes.io/nfs/6d38fdee-1d4d-49dc-b2b0-d6ead18eb294-pvc-302abc91-7e8e-432c-b3db-36a65482e1a6\") pod \"test-pod-1\" (UID: \"6d38fdee-1d4d-49dc-b2b0-d6ead18eb294\") " pod="default/test-pod-1"
May 16 00:55:03.224291 kubelet[1421]: I0516 00:55:03.224217 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhd6s\" (UniqueName: \"kubernetes.io/projected/6d38fdee-1d4d-49dc-b2b0-d6ead18eb294-kube-api-access-xhd6s\") pod \"test-pod-1\" (UID: \"6d38fdee-1d4d-49dc-b2b0-d6ead18eb294\") " pod="default/test-pod-1"
May 16 00:55:03.345783 kernel: FS-Cache: Loaded
May 16 00:55:03.376935 kernel: RPC: Registered named UNIX socket transport module.
May 16 00:55:03.377012 kernel: RPC: Registered udp transport module.
May 16 00:55:03.377037 kernel: RPC: Registered tcp transport module.
May 16 00:55:03.377769 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
May 16 00:55:03.417779 kernel: FS-Cache: Netfs 'nfs' registered for caching
May 16 00:55:03.546923 kernel: NFS: Registering the id_resolver key type
May 16 00:55:03.546995 kernel: Key type id_resolver registered
May 16 00:55:03.547016 kernel: Key type id_legacy registered
May 16 00:55:03.571801 nfsidmap[2743]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
May 16 00:55:03.575528 nfsidmap[2746]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
May 16 00:55:03.576888 update_engine[1207]: I0516 00:55:03.576823 1207 update_attempter.cc:509] Updating boot flags...
May 16 00:55:03.806374 env[1216]: time="2025-05-16T00:55:03.806264947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6d38fdee-1d4d-49dc-b2b0-d6ead18eb294,Namespace:default,Attempt:0,}"
May 16 00:55:03.835235 systemd-networkd[1043]: lxce4d3e9c0449f: Link UP
May 16 00:55:03.844777 kernel: eth0: renamed from tmpeafa9
May 16 00:55:03.852390 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 16 00:55:03.852457 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce4d3e9c0449f: link becomes ready
May 16 00:55:03.852433 systemd-networkd[1043]: lxce4d3e9c0449f: Gained carrier
May 16 00:55:03.984762 env[1216]: time="2025-05-16T00:55:03.984669603Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:55:03.984762 env[1216]: time="2025-05-16T00:55:03.984722202Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:55:03.984762 env[1216]: time="2025-05-16T00:55:03.984733282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:55:03.984937 env[1216]: time="2025-05-16T00:55:03.984912281Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/eafa94c011b43b5eaf6c0e726579f6dce4d312ccf2e0c63e3a8847c11d5d0590 pid=2786 runtime=io.containerd.runc.v2
May 16 00:55:03.996858 systemd[1]: Started cri-containerd-eafa94c011b43b5eaf6c0e726579f6dce4d312ccf2e0c63e3a8847c11d5d0590.scope.
May 16 00:55:04.052171 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 00:55:04.068205 env[1216]: time="2025-05-16T00:55:04.068111445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6d38fdee-1d4d-49dc-b2b0-d6ead18eb294,Namespace:default,Attempt:0,} returns sandbox id \"eafa94c011b43b5eaf6c0e726579f6dce4d312ccf2e0c63e3a8847c11d5d0590\""
May 16 00:55:04.069830 env[1216]: time="2025-05-16T00:55:04.069798637Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 16 00:55:04.216694 kubelet[1421]: E0516 00:55:04.216651 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:04.301545 env[1216]: time="2025-05-16T00:55:04.301506798Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:55:04.302830 env[1216]: time="2025-05-16T00:55:04.302796631Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:55:04.304508 env[1216]: time="2025-05-16T00:55:04.304479863Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:55:04.306195 env[1216]: time="2025-05-16T00:55:04.306158334Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:55:04.307802 env[1216]: time="2025-05-16T00:55:04.307749526Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\""
May 16 00:55:04.311192 env[1216]: time="2025-05-16T00:55:04.311155708Z" level=info msg="CreateContainer within sandbox \"eafa94c011b43b5eaf6c0e726579f6dce4d312ccf2e0c63e3a8847c11d5d0590\" for container &ContainerMetadata{Name:test,Attempt:0,}"
May 16 00:55:04.321464 env[1216]: time="2025-05-16T00:55:04.321374615Z" level=info msg="CreateContainer within sandbox \"eafa94c011b43b5eaf6c0e726579f6dce4d312ccf2e0c63e3a8847c11d5d0590\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"1f00ad2267d8f052aff778312e67f44f2ee684fb85e76c05aafc61a53abde400\""
May 16 00:55:04.322567 env[1216]: time="2025-05-16T00:55:04.322500649Z" level=info msg="StartContainer for \"1f00ad2267d8f052aff778312e67f44f2ee684fb85e76c05aafc61a53abde400\""
May 16 00:55:04.339084 systemd[1]: Started cri-containerd-1f00ad2267d8f052aff778312e67f44f2ee684fb85e76c05aafc61a53abde400.scope.
May 16 00:55:04.370972 env[1216]: time="2025-05-16T00:55:04.370932719Z" level=info msg="StartContainer for \"1f00ad2267d8f052aff778312e67f44f2ee684fb85e76c05aafc61a53abde400\" returns successfully"
May 16 00:55:04.460955 kubelet[1421]: I0516 00:55:04.460901 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=11.221633371 podStartE2EDuration="11.460886813s" podCreationTimestamp="2025-05-16 00:54:53 +0000 UTC" firstStartedPulling="2025-05-16 00:55:04.0691312 +0000 UTC m=+39.656127011" lastFinishedPulling="2025-05-16 00:55:04.308384642 +0000 UTC m=+39.895380453" observedRunningTime="2025-05-16 00:55:04.460520615 +0000 UTC m=+40.047516426" watchObservedRunningTime="2025-05-16 00:55:04.460886813 +0000 UTC m=+40.047882624"
May 16 00:55:05.188367 kubelet[1421]: E0516 00:55:05.188339 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:05.217688 kubelet[1421]: E0516 00:55:05.217670 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:05.794995 systemd-networkd[1043]: lxce4d3e9c0449f: Gained IPv6LL
May 16 00:55:06.219133 kubelet[1421]: E0516 00:55:06.218897 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:07.220067 kubelet[1421]: E0516 00:55:07.220015 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:08.220496 kubelet[1421]: E0516 00:55:08.220472 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:09.221486 kubelet[1421]: E0516 00:55:09.221432 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:10.221949 kubelet[1421]: E0516 00:55:10.221916 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:11.222857 kubelet[1421]: E0516 00:55:11.222779 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:11.295485 env[1216]: time="2025-05-16T00:55:11.295431268Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 00:55:11.301168 env[1216]: time="2025-05-16T00:55:11.301123130Z" level=info msg="StopContainer for \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\" with timeout 2 (s)"
May 16 00:55:11.301458 env[1216]: time="2025-05-16T00:55:11.301434609Z" level=info msg="Stop container \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\" with signal terminated"
May 16 00:55:11.306729 systemd-networkd[1043]: lxc_health: Link DOWN
May 16 00:55:11.306735 systemd-networkd[1043]: lxc_health: Lost carrier
May 16 00:55:11.340169 systemd[1]: cri-containerd-a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa.scope: Deactivated successfully.
May 16 00:55:11.340501 systemd[1]: cri-containerd-a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa.scope: Consumed 6.193s CPU time.
May 16 00:55:11.355778 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa-rootfs.mount: Deactivated successfully.
May 16 00:55:11.515830 env[1216]: time="2025-05-16T00:55:11.515251465Z" level=info msg="shim disconnected" id=a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa
May 16 00:55:11.515830 env[1216]: time="2025-05-16T00:55:11.515296104Z" level=warning msg="cleaning up after shim disconnected" id=a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa namespace=k8s.io
May 16 00:55:11.515830 env[1216]: time="2025-05-16T00:55:11.515305904Z" level=info msg="cleaning up dead shim"
May 16 00:55:11.522467 env[1216]: time="2025-05-16T00:55:11.522424161Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2916 runtime=io.containerd.runc.v2\n"
May 16 00:55:11.526300 env[1216]: time="2025-05-16T00:55:11.526257788Z" level=info msg="StopContainer for \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\" returns successfully"
May 16 00:55:11.526905 env[1216]: time="2025-05-16T00:55:11.526877066Z" level=info msg="StopPodSandbox for \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\""
May 16 00:55:11.526963 env[1216]: time="2025-05-16T00:55:11.526939346Z" level=info msg="Container to stop \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:55:11.526963 env[1216]: time="2025-05-16T00:55:11.526954506Z" level=info msg="Container to stop \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:55:11.527013 env[1216]: time="2025-05-16T00:55:11.526968706Z" level=info msg="Container to stop \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:55:11.527013 env[1216]: time="2025-05-16T00:55:11.526980186Z" level=info msg="Container to stop \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:55:11.527013 env[1216]: time="2025-05-16T00:55:11.526991546Z" level=info msg="Container to stop \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:55:11.528581 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834-shm.mount: Deactivated successfully.
May 16 00:55:11.534887 systemd[1]: cri-containerd-40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834.scope: Deactivated successfully.
May 16 00:55:11.554171 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834-rootfs.mount: Deactivated successfully.
May 16 00:55:11.556696 env[1216]: time="2025-05-16T00:55:11.556658488Z" level=info msg="shim disconnected" id=40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834
May 16 00:55:11.556915 env[1216]: time="2025-05-16T00:55:11.556895207Z" level=warning msg="cleaning up after shim disconnected" id=40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834 namespace=k8s.io
May 16 00:55:11.556985 env[1216]: time="2025-05-16T00:55:11.556970727Z" level=info msg="cleaning up dead shim"
May 16 00:55:11.563484 env[1216]: time="2025-05-16T00:55:11.563453266Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2947 runtime=io.containerd.runc.v2\n"
May 16 00:55:11.563902 env[1216]: time="2025-05-16T00:55:11.563876504Z" level=info msg="TearDown network for sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" successfully"
May 16 00:55:11.564056 env[1216]: time="2025-05-16T00:55:11.563974744Z" level=info msg="StopPodSandbox for \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" returns successfully"
May 16 00:55:11.667640 kubelet[1421]: I0516 00:55:11.667600 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cni-path\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.667640 kubelet[1421]: I0516 00:55:11.667644 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6z7kc\" (UniqueName: \"kubernetes.io/projected/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-kube-api-access-6z7kc\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.667894 kubelet[1421]: I0516 00:55:11.667663 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-etc-cni-netd\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.667894 kubelet[1421]: I0516 00:55:11.667690 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-hostproc\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.667894 kubelet[1421]: I0516 00:55:11.667710 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-config-path\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.667894 kubelet[1421]: I0516 00:55:11.667727 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-xtables-lock\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.667894 kubelet[1421]: I0516 00:55:11.667759 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-hubble-tls\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.667894 kubelet[1421]: I0516 00:55:11.667774 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-run\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.668039 kubelet[1421]: I0516 00:55:11.667796 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-cgroup\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.668039 kubelet[1421]: I0516 00:55:11.667810 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-host-proc-sys-kernel\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.668039 kubelet[1421]: I0516 00:55:11.667838 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-clustermesh-secrets\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.668039 kubelet[1421]: I0516 00:55:11.667854 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-lib-modules\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.668039 kubelet[1421]: I0516 00:55:11.667868 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-bpf-maps\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.668039 kubelet[1421]: I0516 00:55:11.667883 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-host-proc-sys-net\") pod \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\" (UID: \"d3db3bbd-35cc-44f9-b6e4-37d771d1030c\") "
May 16 00:55:11.668169 kubelet[1421]: I0516 00:55:11.667984 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.668169 kubelet[1421]: I0516 00:55:11.668018 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cni-path" (OuterVolumeSpecName: "cni-path") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.668775 kubelet[1421]: I0516 00:55:11.668269 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.668775 kubelet[1421]: I0516 00:55:11.668309 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.668775 kubelet[1421]: I0516 00:55:11.668566 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.668917 kubelet[1421]: I0516 00:55:11.668782 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-hostproc" (OuterVolumeSpecName: "hostproc") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.668917 kubelet[1421]: I0516 00:55:11.668809 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.668917 kubelet[1421]: I0516 00:55:11.668777 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.668917 kubelet[1421]: I0516 00:55:11.668840 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.668917 kubelet[1421]: I0516 00:55:11.668825 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:55:11.670440 kubelet[1421]: I0516 00:55:11.670409 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 16 00:55:11.672851 kubelet[1421]: I0516 00:55:11.671196 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-kube-api-access-6z7kc" (OuterVolumeSpecName: "kube-api-access-6z7kc") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "kube-api-access-6z7kc". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:55:11.672851 kubelet[1421]: I0516 00:55:11.672172 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 16 00:55:11.672259 systemd[1]: var-lib-kubelet-pods-d3db3bbd\x2d35cc\x2d44f9\x2db6e4\x2d37d771d1030c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6z7kc.mount: Deactivated successfully.
May 16 00:55:11.673070 kubelet[1421]: I0516 00:55:11.673043 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d3db3bbd-35cc-44f9-b6e4-37d771d1030c" (UID: "d3db3bbd-35cc-44f9-b6e4-37d771d1030c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:55:11.768404 kubelet[1421]: I0516 00:55:11.768322 1421 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-bpf-maps\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.768404 kubelet[1421]: I0516 00:55:11.768356 1421 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-host-proc-sys-net\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.768404 kubelet[1421]: I0516 00:55:11.768365 1421 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cni-path\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.768404 kubelet[1421]: I0516 00:55:11.768373 1421 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6z7kc\" (UniqueName: \"kubernetes.io/projected/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-kube-api-access-6z7kc\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.768404 kubelet[1421]: I0516 00:55:11.768383 1421 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-etc-cni-netd\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.768404 kubelet[1421]: I0516 00:55:11.768391 1421 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-hostproc\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.770073 kubelet[1421]: I0516 00:55:11.770040 1421 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-config-path\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.770136 kubelet[1421]: I0516 00:55:11.770122 1421 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-xtables-lock\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.770136 kubelet[1421]: I0516 00:55:11.770134 1421 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-hubble-tls\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.770193 kubelet[1421]: I0516 00:55:11.770143 1421 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-run\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.770193 kubelet[1421]: I0516 00:55:11.770152 1421 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-cilium-cgroup\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.770193 kubelet[1421]: I0516 00:55:11.770162 1421 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-host-proc-sys-kernel\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.770193 kubelet[1421]: I0516 00:55:11.770171 1421 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-clustermesh-secrets\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:11.770193 kubelet[1421]: I0516 00:55:11.770179 1421 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3db3bbd-35cc-44f9-b6e4-37d771d1030c-lib-modules\") on node \"10.0.0.138\" DevicePath \"\""
May 16 00:55:12.223485 kubelet[1421]: E0516 00:55:12.223452 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:55:12.276615 systemd[1]: var-lib-kubelet-pods-d3db3bbd\x2d35cc\x2d44f9\x2db6e4\x2d37d771d1030c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 16 00:55:12.276710 systemd[1]: var-lib-kubelet-pods-d3db3bbd\x2d35cc\x2d44f9\x2db6e4\x2d37d771d1030c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 16 00:55:12.466957 kubelet[1421]: I0516 00:55:12.466931 1421 scope.go:117] "RemoveContainer" containerID="a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa"
May 16 00:55:12.468821 env[1216]: time="2025-05-16T00:55:12.468788301Z" level=info msg="RemoveContainer for \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\""
May 16 00:55:12.470295 systemd[1]: Removed slice kubepods-burstable-podd3db3bbd_35cc_44f9_b6e4_37d771d1030c.slice.
May 16 00:55:12.470376 systemd[1]: kubepods-burstable-podd3db3bbd_35cc_44f9_b6e4_37d771d1030c.slice: Consumed 6.391s CPU time.
May 16 00:55:12.472101 env[1216]: time="2025-05-16T00:55:12.472069811Z" level=info msg="RemoveContainer for \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\" returns successfully" May 16 00:55:12.475873 kubelet[1421]: I0516 00:55:12.475807 1421 scope.go:117] "RemoveContainer" containerID="9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956" May 16 00:55:12.477553 env[1216]: time="2025-05-16T00:55:12.477497874Z" level=info msg="RemoveContainer for \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\"" May 16 00:55:12.480027 env[1216]: time="2025-05-16T00:55:12.479998626Z" level=info msg="RemoveContainer for \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\" returns successfully" May 16 00:55:12.480183 kubelet[1421]: I0516 00:55:12.480159 1421 scope.go:117] "RemoveContainer" containerID="f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d" May 16 00:55:12.481151 env[1216]: time="2025-05-16T00:55:12.481121663Z" level=info msg="RemoveContainer for \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\"" May 16 00:55:12.483747 env[1216]: time="2025-05-16T00:55:12.483713655Z" level=info msg="RemoveContainer for \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\" returns successfully" May 16 00:55:12.484007 kubelet[1421]: I0516 00:55:12.483988 1421 scope.go:117] "RemoveContainer" containerID="4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406" May 16 00:55:12.485147 env[1216]: time="2025-05-16T00:55:12.485118250Z" level=info msg="RemoveContainer for \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\"" May 16 00:55:12.492285 env[1216]: time="2025-05-16T00:55:12.492244028Z" level=info msg="RemoveContainer for \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\" returns successfully" May 16 00:55:12.492462 kubelet[1421]: I0516 00:55:12.492437 1421 scope.go:117] "RemoveContainer" 
containerID="15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653" May 16 00:55:12.494119 env[1216]: time="2025-05-16T00:55:12.494083463Z" level=info msg="RemoveContainer for \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\"" May 16 00:55:12.498240 env[1216]: time="2025-05-16T00:55:12.498206850Z" level=info msg="RemoveContainer for \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\" returns successfully" May 16 00:55:12.498418 kubelet[1421]: I0516 00:55:12.498397 1421 scope.go:117] "RemoveContainer" containerID="a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa" May 16 00:55:12.498638 env[1216]: time="2025-05-16T00:55:12.498571089Z" level=error msg="ContainerStatus for \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\": not found" May 16 00:55:12.498742 kubelet[1421]: E0516 00:55:12.498722 1421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\": not found" containerID="a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa" May 16 00:55:12.498814 kubelet[1421]: I0516 00:55:12.498774 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa"} err="failed to get container status \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\": rpc error: code = NotFound desc = an error occurred when try to find container \"a224305ca3ed81623eb0b334b2b88dba308cdee1899d86f4f61df1558a3499aa\": not found" May 16 00:55:12.498847 kubelet[1421]: I0516 00:55:12.498817 1421 scope.go:117] "RemoveContainer" 
containerID="9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956" May 16 00:55:12.499032 env[1216]: time="2025-05-16T00:55:12.498983247Z" level=error msg="ContainerStatus for \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\": not found" May 16 00:55:12.499148 kubelet[1421]: E0516 00:55:12.499128 1421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\": not found" containerID="9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956" May 16 00:55:12.499186 kubelet[1421]: I0516 00:55:12.499171 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956"} err="failed to get container status \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\": rpc error: code = NotFound desc = an error occurred when try to find container \"9d428492d93a717e8b4970f9a648dda7c4145a89d3c0ab898debd8cc040dd956\": not found" May 16 00:55:12.499211 kubelet[1421]: I0516 00:55:12.499189 1421 scope.go:117] "RemoveContainer" containerID="f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d" May 16 00:55:12.499390 env[1216]: time="2025-05-16T00:55:12.499344086Z" level=error msg="ContainerStatus for \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\": not found" May 16 00:55:12.499485 kubelet[1421]: E0516 00:55:12.499470 1421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error 
occurred when try to find container \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\": not found" containerID="f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d" May 16 00:55:12.499515 kubelet[1421]: I0516 00:55:12.499490 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d"} err="failed to get container status \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\": rpc error: code = NotFound desc = an error occurred when try to find container \"f473f536ba6dda1bb0b73c91758c36813dd940c868b67a61b37bdb12af32041d\": not found" May 16 00:55:12.499540 kubelet[1421]: I0516 00:55:12.499519 1421 scope.go:117] "RemoveContainer" containerID="4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406" May 16 00:55:12.499674 env[1216]: time="2025-05-16T00:55:12.499634245Z" level=error msg="ContainerStatus for \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\": not found" May 16 00:55:12.499776 kubelet[1421]: E0516 00:55:12.499746 1421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\": not found" containerID="4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406" May 16 00:55:12.499851 kubelet[1421]: I0516 00:55:12.499780 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406"} err="failed to get container status \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\": rpc error: code = NotFound desc = an error occurred when try to find 
container \"4a5b4934f3c40e2c1a93940096646fb30624f8dec74086c416dbf4fa7ac2f406\": not found" May 16 00:55:12.499851 kubelet[1421]: I0516 00:55:12.499793 1421 scope.go:117] "RemoveContainer" containerID="15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653" May 16 00:55:12.499984 env[1216]: time="2025-05-16T00:55:12.499937564Z" level=error msg="ContainerStatus for \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\": not found" May 16 00:55:12.500096 kubelet[1421]: E0516 00:55:12.500076 1421 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\": not found" containerID="15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653" May 16 00:55:12.500130 kubelet[1421]: I0516 00:55:12.500117 1421 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653"} err="failed to get container status \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\": rpc error: code = NotFound desc = an error occurred when try to find container \"15e764e6c0407ed7f81ad030ada538176a6e4a9ff394cf3feaeb94ca8fec5653\": not found" May 16 00:55:13.223864 kubelet[1421]: E0516 00:55:13.223825 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:13.379820 kubelet[1421]: I0516 00:55:13.379339 1421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3db3bbd-35cc-44f9-b6e4-37d771d1030c" path="/var/lib/kubelet/pods/d3db3bbd-35cc-44f9-b6e4-37d771d1030c/volumes" May 16 00:55:14.224979 kubelet[1421]: E0516 00:55:14.224950 1421 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:14.698040 systemd[1]: Created slice kubepods-besteffort-pod8db1baaa_b7c4_4aed_ae66_2409086bdd02.slice. May 16 00:55:14.701826 systemd[1]: Created slice kubepods-burstable-pod30f96f93_4845_45e8_8e4c_74ba3acee31c.slice. May 16 00:55:14.788235 kubelet[1421]: I0516 00:55:14.788191 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvqkt\" (UniqueName: \"kubernetes.io/projected/8db1baaa-b7c4-4aed-ae66-2409086bdd02-kube-api-access-mvqkt\") pod \"cilium-operator-6c4d7847fc-7v79b\" (UID: \"8db1baaa-b7c4-4aed-ae66-2409086bdd02\") " pod="kube-system/cilium-operator-6c4d7847fc-7v79b" May 16 00:55:14.788235 kubelet[1421]: I0516 00:55:14.788234 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-bpf-maps\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788363 kubelet[1421]: I0516 00:55:14.788254 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-cgroup\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788363 kubelet[1421]: I0516 00:55:14.788271 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-config-path\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788363 kubelet[1421]: I0516 00:55:14.788289 1421 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-ipsec-secrets\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788450 kubelet[1421]: I0516 00:55:14.788369 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cni-path\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788450 kubelet[1421]: I0516 00:55:14.788434 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-etc-cni-netd\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788492 kubelet[1421]: I0516 00:55:14.788454 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-xtables-lock\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788533 kubelet[1421]: I0516 00:55:14.788519 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8db1baaa-b7c4-4aed-ae66-2409086bdd02-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-7v79b\" (UID: \"8db1baaa-b7c4-4aed-ae66-2409086bdd02\") " pod="kube-system/cilium-operator-6c4d7847fc-7v79b" May 16 00:55:14.788573 kubelet[1421]: I0516 00:55:14.788541 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" 
(UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-hostproc\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788604 kubelet[1421]: I0516 00:55:14.788578 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-lib-modules\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788604 kubelet[1421]: I0516 00:55:14.788594 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30f96f93-4845-45e8-8e4c-74ba3acee31c-clustermesh-secrets\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788647 kubelet[1421]: I0516 00:55:14.788609 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-host-proc-sys-kernel\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788669 kubelet[1421]: I0516 00:55:14.788648 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30f96f93-4845-45e8-8e4c-74ba3acee31c-hubble-tls\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788695 kubelet[1421]: I0516 00:55:14.788668 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-run\") pod \"cilium-sqndp\" (UID: 
\"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788718 kubelet[1421]: I0516 00:55:14.788699 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-host-proc-sys-net\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.788740 kubelet[1421]: I0516 00:55:14.788719 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpzr2\" (UniqueName: \"kubernetes.io/projected/30f96f93-4845-45e8-8e4c-74ba3acee31c-kube-api-access-tpzr2\") pod \"cilium-sqndp\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " pod="kube-system/cilium-sqndp" May 16 00:55:14.863236 kubelet[1421]: E0516 00:55:14.863186 1421 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-tpzr2 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-sqndp" podUID="30f96f93-4845-45e8-8e4c-74ba3acee31c" May 16 00:55:15.001636 kubelet[1421]: E0516 00:55:15.000902 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:15.002105 env[1216]: time="2025-05-16T00:55:15.002046483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7v79b,Uid:8db1baaa-b7c4-4aed-ae66-2409086bdd02,Namespace:kube-system,Attempt:0,}" May 16 00:55:15.014222 env[1216]: time="2025-05-16T00:55:15.014159932Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:55:15.014367 env[1216]: time="2025-05-16T00:55:15.014210372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:55:15.014367 env[1216]: time="2025-05-16T00:55:15.014346532Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:55:15.014646 env[1216]: time="2025-05-16T00:55:15.014609211Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9b09405355cd19cbc279de483d9e65f57a2c1eeef0b2975028c9cce54d379d0f pid=2977 runtime=io.containerd.runc.v2 May 16 00:55:15.024527 systemd[1]: Started cri-containerd-9b09405355cd19cbc279de483d9e65f57a2c1eeef0b2975028c9cce54d379d0f.scope. May 16 00:55:15.076201 env[1216]: time="2025-05-16T00:55:15.076158415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-7v79b,Uid:8db1baaa-b7c4-4aed-ae66-2409086bdd02,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b09405355cd19cbc279de483d9e65f57a2c1eeef0b2975028c9cce54d379d0f\"" May 16 00:55:15.076866 kubelet[1421]: E0516 00:55:15.076835 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:15.078031 env[1216]: time="2025-05-16T00:55:15.077990050Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:55:15.225895 kubelet[1421]: E0516 00:55:15.225863 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:15.338220 kubelet[1421]: E0516 00:55:15.338170 1421 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:55:15.494836 kubelet[1421]: I0516 00:55:15.494805 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-cgroup\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.494911 kubelet[1421]: I0516 00:55:15.494838 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-host-proc-sys-kernel\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.494911 kubelet[1421]: I0516 00:55:15.494855 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-host-proc-sys-net\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.494911 kubelet[1421]: I0516 00:55:15.494876 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-config-path\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.494911 kubelet[1421]: I0516 00:55:15.494891 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cni-path\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.494911 kubelet[1421]: I0516 00:55:15.494906 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-etc-cni-netd\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495032 kubelet[1421]: I0516 00:55:15.494905 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.495032 kubelet[1421]: I0516 00:55:15.494920 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-hostproc\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495032 kubelet[1421]: I0516 00:55:15.494939 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.495032 kubelet[1421]: I0516 00:55:15.494942 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-ipsec-secrets\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495032 kubelet[1421]: I0516 00:55:15.494948 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.495136 kubelet[1421]: I0516 00:55:15.494974 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30f96f93-4845-45e8-8e4c-74ba3acee31c-clustermesh-secrets\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495136 kubelet[1421]: I0516 00:55:15.494993 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30f96f93-4845-45e8-8e4c-74ba3acee31c-hubble-tls\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495136 kubelet[1421]: I0516 00:55:15.494998 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cni-path" (OuterVolumeSpecName: "cni-path") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.495136 kubelet[1421]: I0516 00:55:15.495009 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-run\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495136 kubelet[1421]: I0516 00:55:15.495026 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-xtables-lock\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495136 kubelet[1421]: I0516 00:55:15.495041 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-lib-modules\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495275 kubelet[1421]: I0516 00:55:15.495057 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tpzr2\" (UniqueName: \"kubernetes.io/projected/30f96f93-4845-45e8-8e4c-74ba3acee31c-kube-api-access-tpzr2\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495275 kubelet[1421]: I0516 00:55:15.495071 1421 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-bpf-maps\") pod \"30f96f93-4845-45e8-8e4c-74ba3acee31c\" (UID: \"30f96f93-4845-45e8-8e4c-74ba3acee31c\") " May 16 00:55:15.495275 kubelet[1421]: I0516 00:55:15.495101 1421 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-cgroup\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.495275 kubelet[1421]: I0516 00:55:15.495110 1421 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-host-proc-sys-kernel\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.495275 kubelet[1421]: I0516 00:55:15.495119 1421 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-host-proc-sys-net\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.495275 kubelet[1421]: I0516 00:55:15.495129 1421 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cni-path\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.495275 kubelet[1421]: I0516 00:55:15.495144 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.495483 kubelet[1421]: I0516 00:55:15.495461 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.495587 kubelet[1421]: I0516 00:55:15.495572 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-hostproc" (OuterVolumeSpecName: "hostproc") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.495665 kubelet[1421]: I0516 00:55:15.495652 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.496816 kubelet[1421]: I0516 00:55:15.496775 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 16 00:55:15.496886 kubelet[1421]: I0516 00:55:15.496824 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.497724 kubelet[1421]: I0516 00:55:15.497681 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f96f93-4845-45e8-8e4c-74ba3acee31c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:55:15.497812 kubelet[1421]: I0516 00:55:15.497732 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 16 00:55:15.497812 kubelet[1421]: I0516 00:55:15.497783 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" May 16 00:55:15.498314 kubelet[1421]: I0516 00:55:15.498289 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f96f93-4845-45e8-8e4c-74ba3acee31c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:55:15.499042 kubelet[1421]: I0516 00:55:15.499005 1421 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/30f96f93-4845-45e8-8e4c-74ba3acee31c-kube-api-access-tpzr2" (OuterVolumeSpecName: "kube-api-access-tpzr2") pod "30f96f93-4845-45e8-8e4c-74ba3acee31c" (UID: "30f96f93-4845-45e8-8e4c-74ba3acee31c"). InnerVolumeSpecName "kube-api-access-tpzr2". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 16 00:55:15.596807 kubelet[1421]: I0516 00:55:15.595822 1421 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/30f96f93-4845-45e8-8e4c-74ba3acee31c-clustermesh-secrets\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.596807 kubelet[1421]: I0516 00:55:15.595849 1421 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/30f96f93-4845-45e8-8e4c-74ba3acee31c-hubble-tls\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.596807 kubelet[1421]: I0516 00:55:15.595859 1421 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-run\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.596807 kubelet[1421]: I0516 00:55:15.595867 1421 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-xtables-lock\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.596807 kubelet[1421]: I0516 00:55:15.595875 1421 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-lib-modules\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.596807 kubelet[1421]: I0516 00:55:15.595889 1421 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-tpzr2\" (UniqueName: \"kubernetes.io/projected/30f96f93-4845-45e8-8e4c-74ba3acee31c-kube-api-access-tpzr2\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.596807 kubelet[1421]: I0516 00:55:15.595897 1421 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-bpf-maps\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.596807 kubelet[1421]: I0516 00:55:15.595904 1421 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-config-path\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.597030 kubelet[1421]: I0516 00:55:15.595912 1421 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-etc-cni-netd\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.597030 kubelet[1421]: I0516 00:55:15.595919 1421 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/30f96f93-4845-45e8-8e4c-74ba3acee31c-hostproc\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.597030 kubelet[1421]: I0516 00:55:15.595926 1421 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/30f96f93-4845-45e8-8e4c-74ba3acee31c-cilium-ipsec-secrets\") on node \"10.0.0.138\" DevicePath \"\"" May 16 00:55:15.894064 systemd[1]: var-lib-kubelet-pods-30f96f93\x2d4845\x2d45e8\x2d8e4c\x2d74ba3acee31c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtpzr2.mount: Deactivated successfully. May 16 00:55:15.894154 systemd[1]: var-lib-kubelet-pods-30f96f93\x2d4845\x2d45e8\x2d8e4c\x2d74ba3acee31c-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
May 16 00:55:15.894203 systemd[1]: var-lib-kubelet-pods-30f96f93\x2d4845\x2d45e8\x2d8e4c\x2d74ba3acee31c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:55:15.894249 systemd[1]: var-lib-kubelet-pods-30f96f93\x2d4845\x2d45e8\x2d8e4c\x2d74ba3acee31c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:55:16.226997 kubelet[1421]: E0516 00:55:16.226917 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:16.478878 systemd[1]: Removed slice kubepods-burstable-pod30f96f93_4845_45e8_8e4c_74ba3acee31c.slice. May 16 00:55:16.479695 kubelet[1421]: I0516 00:55:16.479656 1421 setters.go:618] "Node became not ready" node="10.0.0.138" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T00:55:16Z","lastTransitionTime":"2025-05-16T00:55:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 16 00:55:16.512677 systemd[1]: Created slice kubepods-burstable-pod4c989a24_3c45_4f14_93f4_c07095e36994.slice. 
May 16 00:55:16.600806 kubelet[1421]: I0516 00:55:16.600741 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-bpf-maps\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601047 kubelet[1421]: I0516 00:55:16.601012 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-cilium-cgroup\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601161 kubelet[1421]: I0516 00:55:16.601146 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-cni-path\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601238 kubelet[1421]: I0516 00:55:16.601225 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-etc-cni-netd\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601307 kubelet[1421]: I0516 00:55:16.601295 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/4c989a24-3c45-4f14-93f4-c07095e36994-cilium-ipsec-secrets\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601387 kubelet[1421]: I0516 00:55:16.601375 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-host-proc-sys-net\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601466 kubelet[1421]: I0516 00:55:16.601452 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p8hv\" (UniqueName: \"kubernetes.io/projected/4c989a24-3c45-4f14-93f4-c07095e36994-kube-api-access-5p8hv\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601569 kubelet[1421]: I0516 00:55:16.601554 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-cilium-run\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601646 kubelet[1421]: I0516 00:55:16.601632 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-xtables-lock\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601766 kubelet[1421]: I0516 00:55:16.601712 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c989a24-3c45-4f14-93f4-c07095e36994-clustermesh-secrets\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601766 kubelet[1421]: I0516 00:55:16.601763 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/4c989a24-3c45-4f14-93f4-c07095e36994-cilium-config-path\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601842 kubelet[1421]: I0516 00:55:16.601782 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-host-proc-sys-kernel\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601842 kubelet[1421]: I0516 00:55:16.601798 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c989a24-3c45-4f14-93f4-c07095e36994-hubble-tls\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601842 kubelet[1421]: I0516 00:55:16.601815 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-hostproc\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.601842 kubelet[1421]: I0516 00:55:16.601827 1421 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c989a24-3c45-4f14-93f4-c07095e36994-lib-modules\") pod \"cilium-d5mkm\" (UID: \"4c989a24-3c45-4f14-93f4-c07095e36994\") " pod="kube-system/cilium-d5mkm" May 16 00:55:16.641046 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2838713456.mount: Deactivated successfully. 
May 16 00:55:16.824779 kubelet[1421]: E0516 00:55:16.824723 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:16.825407 env[1216]: time="2025-05-16T00:55:16.825363495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d5mkm,Uid:4c989a24-3c45-4f14-93f4-c07095e36994,Namespace:kube-system,Attempt:0,}" May 16 00:55:16.841072 env[1216]: time="2025-05-16T00:55:16.841004818Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:55:16.841072 env[1216]: time="2025-05-16T00:55:16.841046498Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:55:16.841072 env[1216]: time="2025-05-16T00:55:16.841065938Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:55:16.841248 env[1216]: time="2025-05-16T00:55:16.841217097Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193 pid=3026 runtime=io.containerd.runc.v2 May 16 00:55:16.850851 systemd[1]: Started cri-containerd-ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193.scope. 
May 16 00:55:16.882554 env[1216]: time="2025-05-16T00:55:16.882513559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-d5mkm,Uid:4c989a24-3c45-4f14-93f4-c07095e36994,Namespace:kube-system,Attempt:0,} returns sandbox id \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\"" May 16 00:55:16.883637 kubelet[1421]: E0516 00:55:16.883145 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:16.887976 env[1216]: time="2025-05-16T00:55:16.887928666Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:55:16.898983 env[1216]: time="2025-05-16T00:55:16.898933640Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"abfc8319ba6eb1602f9760d880b345267f85a0e4949f488654aab041dbf13960\"" May 16 00:55:16.899444 env[1216]: time="2025-05-16T00:55:16.899395118Z" level=info msg="StartContainer for \"abfc8319ba6eb1602f9760d880b345267f85a0e4949f488654aab041dbf13960\"" May 16 00:55:16.916159 systemd[1]: Started cri-containerd-abfc8319ba6eb1602f9760d880b345267f85a0e4949f488654aab041dbf13960.scope. May 16 00:55:16.953321 env[1216]: time="2025-05-16T00:55:16.953271830Z" level=info msg="StartContainer for \"abfc8319ba6eb1602f9760d880b345267f85a0e4949f488654aab041dbf13960\" returns successfully" May 16 00:55:16.959146 systemd[1]: cri-containerd-abfc8319ba6eb1602f9760d880b345267f85a0e4949f488654aab041dbf13960.scope: Deactivated successfully. May 16 00:55:16.973621 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-abfc8319ba6eb1602f9760d880b345267f85a0e4949f488654aab041dbf13960-rootfs.mount: Deactivated successfully. 
May 16 00:55:17.001172 env[1216]: time="2025-05-16T00:55:17.001125836Z" level=info msg="shim disconnected" id=abfc8319ba6eb1602f9760d880b345267f85a0e4949f488654aab041dbf13960 May 16 00:55:17.001172 env[1216]: time="2025-05-16T00:55:17.001172596Z" level=warning msg="cleaning up after shim disconnected" id=abfc8319ba6eb1602f9760d880b345267f85a0e4949f488654aab041dbf13960 namespace=k8s.io May 16 00:55:17.001172 env[1216]: time="2025-05-16T00:55:17.001181356Z" level=info msg="cleaning up dead shim" May 16 00:55:17.007476 env[1216]: time="2025-05-16T00:55:17.007443022Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3112 runtime=io.containerd.runc.v2\n" May 16 00:55:17.228093 kubelet[1421]: E0516 00:55:17.227982 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:17.266162 env[1216]: time="2025-05-16T00:55:17.266116083Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:55:17.267333 env[1216]: time="2025-05-16T00:55:17.267295641Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:55:17.268789 env[1216]: time="2025-05-16T00:55:17.268742038Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:55:17.269353 env[1216]: time="2025-05-16T00:55:17.269328756Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 16 00:55:17.272805 env[1216]: time="2025-05-16T00:55:17.272765829Z" level=info msg="CreateContainer within sandbox \"9b09405355cd19cbc279de483d9e65f57a2c1eeef0b2975028c9cce54d379d0f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:55:17.282094 env[1216]: time="2025-05-16T00:55:17.282034568Z" level=info msg="CreateContainer within sandbox \"9b09405355cd19cbc279de483d9e65f57a2c1eeef0b2975028c9cce54d379d0f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e591c95c2b4f2d51b6b6d491c7c2ca0c505697440234ea894990e796dd765a51\"" May 16 00:55:17.282803 env[1216]: time="2025-05-16T00:55:17.282736486Z" level=info msg="StartContainer for \"e591c95c2b4f2d51b6b6d491c7c2ca0c505697440234ea894990e796dd765a51\"" May 16 00:55:17.298897 systemd[1]: Started cri-containerd-e591c95c2b4f2d51b6b6d491c7c2ca0c505697440234ea894990e796dd765a51.scope. 
May 16 00:55:17.336403 env[1216]: time="2025-05-16T00:55:17.336360206Z" level=info msg="StartContainer for \"e591c95c2b4f2d51b6b6d491c7c2ca0c505697440234ea894990e796dd765a51\" returns successfully" May 16 00:55:17.380254 kubelet[1421]: I0516 00:55:17.380106 1421 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="30f96f93-4845-45e8-8e4c-74ba3acee31c" path="/var/lib/kubelet/pods/30f96f93-4845-45e8-8e4c-74ba3acee31c/volumes" May 16 00:55:17.475895 kubelet[1421]: E0516 00:55:17.475851 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:17.477064 kubelet[1421]: E0516 00:55:17.477028 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:17.480769 env[1216]: time="2025-05-16T00:55:17.480674684Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:55:17.484918 kubelet[1421]: I0516 00:55:17.484873 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-7v79b" podStartSLOduration=1.292349411 podStartE2EDuration="3.484859754s" podCreationTimestamp="2025-05-16 00:55:14 +0000 UTC" firstStartedPulling="2025-05-16 00:55:15.077609091 +0000 UTC m=+50.664604902" lastFinishedPulling="2025-05-16 00:55:17.270119434 +0000 UTC m=+52.857115245" observedRunningTime="2025-05-16 00:55:17.484460835 +0000 UTC m=+53.071456646" watchObservedRunningTime="2025-05-16 00:55:17.484859754 +0000 UTC m=+53.071855605" May 16 00:55:17.491073 env[1216]: time="2025-05-16T00:55:17.491026341Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"93f8329ec0311a37dfcc40421d97af8955b4745ff65aee277aef70506a01a7b7\"" May 16 00:55:17.491820 env[1216]: time="2025-05-16T00:55:17.491783219Z" level=info msg="StartContainer for \"93f8329ec0311a37dfcc40421d97af8955b4745ff65aee277aef70506a01a7b7\"" May 16 00:55:17.506437 systemd[1]: Started cri-containerd-93f8329ec0311a37dfcc40421d97af8955b4745ff65aee277aef70506a01a7b7.scope. May 16 00:55:17.571355 env[1216]: time="2025-05-16T00:55:17.571289281Z" level=info msg="StartContainer for \"93f8329ec0311a37dfcc40421d97af8955b4745ff65aee277aef70506a01a7b7\" returns successfully" May 16 00:55:17.577647 systemd[1]: cri-containerd-93f8329ec0311a37dfcc40421d97af8955b4745ff65aee277aef70506a01a7b7.scope: Deactivated successfully. May 16 00:55:17.593307 env[1216]: time="2025-05-16T00:55:17.593265272Z" level=info msg="shim disconnected" id=93f8329ec0311a37dfcc40421d97af8955b4745ff65aee277aef70506a01a7b7 May 16 00:55:17.593556 env[1216]: time="2025-05-16T00:55:17.593509551Z" level=warning msg="cleaning up after shim disconnected" id=93f8329ec0311a37dfcc40421d97af8955b4745ff65aee277aef70506a01a7b7 namespace=k8s.io May 16 00:55:17.593556 env[1216]: time="2025-05-16T00:55:17.593541591Z" level=info msg="cleaning up dead shim" May 16 00:55:17.599611 env[1216]: time="2025-05-16T00:55:17.599581418Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3214 runtime=io.containerd.runc.v2\n" May 16 00:55:18.228566 kubelet[1421]: E0516 00:55:18.228524 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:18.480502 kubelet[1421]: E0516 00:55:18.480424 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:18.480797 kubelet[1421]: 
E0516 00:55:18.480772 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:18.484008 env[1216]: time="2025-05-16T00:55:18.483968707Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:55:18.495491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3115690559.mount: Deactivated successfully. May 16 00:55:18.498059 env[1216]: time="2025-05-16T00:55:18.498006277Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f3f355874ebeb019c1d8e144c4f2b6835351306deab40c70f8b71b29c18975b0\"" May 16 00:55:18.498943 env[1216]: time="2025-05-16T00:55:18.498881836Z" level=info msg="StartContainer for \"f3f355874ebeb019c1d8e144c4f2b6835351306deab40c70f8b71b29c18975b0\"" May 16 00:55:18.519812 systemd[1]: Started cri-containerd-f3f355874ebeb019c1d8e144c4f2b6835351306deab40c70f8b71b29c18975b0.scope. May 16 00:55:18.555199 env[1216]: time="2025-05-16T00:55:18.555074558Z" level=info msg="StartContainer for \"f3f355874ebeb019c1d8e144c4f2b6835351306deab40c70f8b71b29c18975b0\" returns successfully" May 16 00:55:18.557382 systemd[1]: cri-containerd-f3f355874ebeb019c1d8e144c4f2b6835351306deab40c70f8b71b29c18975b0.scope: Deactivated successfully. 
May 16 00:55:18.574205 env[1216]: time="2025-05-16T00:55:18.574152838Z" level=info msg="shim disconnected" id=f3f355874ebeb019c1d8e144c4f2b6835351306deab40c70f8b71b29c18975b0 May 16 00:55:18.574205 env[1216]: time="2025-05-16T00:55:18.574195438Z" level=warning msg="cleaning up after shim disconnected" id=f3f355874ebeb019c1d8e144c4f2b6835351306deab40c70f8b71b29c18975b0 namespace=k8s.io May 16 00:55:18.574205 env[1216]: time="2025-05-16T00:55:18.574204878Z" level=info msg="cleaning up dead shim" May 16 00:55:18.580462 env[1216]: time="2025-05-16T00:55:18.580423465Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3272 runtime=io.containerd.runc.v2\n" May 16 00:55:18.896534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f3f355874ebeb019c1d8e144c4f2b6835351306deab40c70f8b71b29c18975b0-rootfs.mount: Deactivated successfully. May 16 00:55:19.228913 kubelet[1421]: E0516 00:55:19.228798 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:19.483857 kubelet[1421]: E0516 00:55:19.483783 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:19.487812 env[1216]: time="2025-05-16T00:55:19.487775267Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:55:19.499099 env[1216]: time="2025-05-16T00:55:19.499031004Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"321ce3e4a0ffd670cc6cf99b4b66eb65285de16dfee429f472fdb41492ea5337\"" May 16 00:55:19.499540 env[1216]: 
time="2025-05-16T00:55:19.499513724Z" level=info msg="StartContainer for \"321ce3e4a0ffd670cc6cf99b4b66eb65285de16dfee429f472fdb41492ea5337\"" May 16 00:55:19.517190 systemd[1]: Started cri-containerd-321ce3e4a0ffd670cc6cf99b4b66eb65285de16dfee429f472fdb41492ea5337.scope. May 16 00:55:19.548038 systemd[1]: cri-containerd-321ce3e4a0ffd670cc6cf99b4b66eb65285de16dfee429f472fdb41492ea5337.scope: Deactivated successfully. May 16 00:55:19.548160 env[1216]: time="2025-05-16T00:55:19.548117628Z" level=info msg="StartContainer for \"321ce3e4a0ffd670cc6cf99b4b66eb65285de16dfee429f472fdb41492ea5337\" returns successfully" May 16 00:55:19.565388 env[1216]: time="2025-05-16T00:55:19.565335674Z" level=info msg="shim disconnected" id=321ce3e4a0ffd670cc6cf99b4b66eb65285de16dfee429f472fdb41492ea5337 May 16 00:55:19.565388 env[1216]: time="2025-05-16T00:55:19.565379114Z" level=warning msg="cleaning up after shim disconnected" id=321ce3e4a0ffd670cc6cf99b4b66eb65285de16dfee429f472fdb41492ea5337 namespace=k8s.io May 16 00:55:19.565388 env[1216]: time="2025-05-16T00:55:19.565390234Z" level=info msg="cleaning up dead shim" May 16 00:55:19.571065 env[1216]: time="2025-05-16T00:55:19.571037903Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:55:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3327 runtime=io.containerd.runc.v2\n" May 16 00:55:19.896616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-321ce3e4a0ffd670cc6cf99b4b66eb65285de16dfee429f472fdb41492ea5337-rootfs.mount: Deactivated successfully. 
May 16 00:55:20.229731 kubelet[1421]: E0516 00:55:20.229649 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:20.339730 kubelet[1421]: E0516 00:55:20.339697 1421 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:55:20.487670 kubelet[1421]: E0516 00:55:20.487596 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:20.491051 env[1216]: time="2025-05-16T00:55:20.491000395Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:55:20.503288 env[1216]: time="2025-05-16T00:55:20.503240533Z" level=info msg="CreateContainer within sandbox \"ac11ab4a5057c76b8dc7511c0af670c8870f46021a8e02af21ebdf05f025e193\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2e593eb4a67848c9261aeab6b407327ef5145b7f62b6a64e3bd3c8bae4f8d8c0\"" May 16 00:55:20.503931 env[1216]: time="2025-05-16T00:55:20.503843732Z" level=info msg="StartContainer for \"2e593eb4a67848c9261aeab6b407327ef5145b7f62b6a64e3bd3c8bae4f8d8c0\"" May 16 00:55:20.520641 systemd[1]: Started cri-containerd-2e593eb4a67848c9261aeab6b407327ef5145b7f62b6a64e3bd3c8bae4f8d8c0.scope. 
May 16 00:55:20.555603 env[1216]: time="2025-05-16T00:55:20.555542436Z" level=info msg="StartContainer for \"2e593eb4a67848c9261aeab6b407327ef5145b7f62b6a64e3bd3c8bae4f8d8c0\" returns successfully" May 16 00:55:20.776772 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 16 00:55:21.229963 kubelet[1421]: E0516 00:55:21.229864 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:21.491057 kubelet[1421]: E0516 00:55:21.490930 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:22.230138 kubelet[1421]: E0516 00:55:22.230071 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:22.826377 kubelet[1421]: E0516 00:55:22.826345 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:23.230572 kubelet[1421]: E0516 00:55:23.230297 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:23.241141 systemd[1]: run-containerd-runc-k8s.io-2e593eb4a67848c9261aeab6b407327ef5145b7f62b6a64e3bd3c8bae4f8d8c0-runc.sjz2dP.mount: Deactivated successfully. 
May 16 00:55:23.542856 systemd-networkd[1043]: lxc_health: Link UP May 16 00:55:23.553199 systemd-networkd[1043]: lxc_health: Gained carrier May 16 00:55:23.553782 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:55:24.231621 kubelet[1421]: E0516 00:55:24.231564 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:24.827047 kubelet[1421]: E0516 00:55:24.826970 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:24.848566 kubelet[1421]: I0516 00:55:24.848306 1421 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-d5mkm" podStartSLOduration=8.848289825 podStartE2EDuration="8.848289825s" podCreationTimestamp="2025-05-16 00:55:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:55:21.506941182 +0000 UTC m=+57.093936953" watchObservedRunningTime="2025-05-16 00:55:24.848289825 +0000 UTC m=+60.435285596" May 16 00:55:25.122977 systemd-networkd[1043]: lxc_health: Gained IPv6LL May 16 00:55:25.187557 kubelet[1421]: E0516 00:55:25.187515 1421 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:25.218146 env[1216]: time="2025-05-16T00:55:25.218107758Z" level=info msg="StopPodSandbox for \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\"" May 16 00:55:25.218432 env[1216]: time="2025-05-16T00:55:25.218195478Z" level=info msg="TearDown network for sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" successfully" May 16 00:55:25.218432 env[1216]: time="2025-05-16T00:55:25.218228198Z" level=info msg="StopPodSandbox for 
\"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" returns successfully" May 16 00:55:25.218672 env[1216]: time="2025-05-16T00:55:25.218638558Z" level=info msg="RemovePodSandbox for \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\"" May 16 00:55:25.218801 env[1216]: time="2025-05-16T00:55:25.218743437Z" level=info msg="Forcibly stopping sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\"" May 16 00:55:25.218913 env[1216]: time="2025-05-16T00:55:25.218893477Z" level=info msg="TearDown network for sandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" successfully" May 16 00:55:25.222703 env[1216]: time="2025-05-16T00:55:25.222673712Z" level=info msg="RemovePodSandbox \"40dd6c5d0ca464c357572c24c6024b4a4ee73249d103b79cbd44251830d28834\" returns successfully" May 16 00:55:25.232530 kubelet[1421]: E0516 00:55:25.232502 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:25.377176 systemd[1]: run-containerd-runc-k8s.io-2e593eb4a67848c9261aeab6b407327ef5145b7f62b6a64e3bd3c8bae4f8d8c0-runc.Tz02Si.mount: Deactivated successfully. 
May 16 00:55:25.496911 kubelet[1421]: E0516 00:55:25.496883 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:26.233454 kubelet[1421]: E0516 00:55:26.233412 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:26.498941 kubelet[1421]: E0516 00:55:26.498640 1421 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:55:27.234141 kubelet[1421]: E0516 00:55:27.234094 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:28.234258 kubelet[1421]: E0516 00:55:28.234212 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:29.235703 kubelet[1421]: E0516 00:55:29.235667 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:29.628870 systemd[1]: run-containerd-runc-k8s.io-2e593eb4a67848c9261aeab6b407327ef5145b7f62b6a64e3bd3c8bae4f8d8c0-runc.1Dij7E.mount: Deactivated successfully. May 16 00:55:30.236183 kubelet[1421]: E0516 00:55:30.236146 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:55:31.236463 kubelet[1421]: E0516 00:55:31.236423 1421 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"