May 16 00:48:10.741040 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 00:48:10.741060 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 23:21:39 -00 2025
May 16 00:48:10.741068 kernel: efi: EFI v2.70 by EDK II
May 16 00:48:10.741074 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 16 00:48:10.741079 kernel: random: crng init done
May 16 00:48:10.741085 kernel: ACPI: Early table checksum verification disabled
May 16 00:48:10.741091 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 16 00:48:10.741098 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 00:48:10.741104 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:48:10.741121 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:48:10.741128 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:48:10.741133 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:48:10.741138 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:48:10.741144 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:48:10.741152 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:48:10.741158 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:48:10.741164 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:48:10.741170 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 00:48:10.741176 kernel: NUMA: Failed to initialise from firmware
May 16 00:48:10.741182 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:48:10.741188 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 16 00:48:10.741194 kernel: Zone ranges:
May 16 00:48:10.741200 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:48:10.741207 kernel: DMA32 empty
May 16 00:48:10.741212 kernel: Normal empty
May 16 00:48:10.741218 kernel: Movable zone start for each node
May 16 00:48:10.741224 kernel: Early memory node ranges
May 16 00:48:10.741230 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 16 00:48:10.741235 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 16 00:48:10.741241 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 16 00:48:10.741247 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 16 00:48:10.741253 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 16 00:48:10.741258 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 16 00:48:10.741264 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 16 00:48:10.741270 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:48:10.741278 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 00:48:10.741283 kernel: psci: probing for conduit method from ACPI.
May 16 00:48:10.741289 kernel: psci: PSCIv1.1 detected in firmware.
May 16 00:48:10.741295 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 00:48:10.741301 kernel: psci: Trusted OS migration not required
May 16 00:48:10.741310 kernel: psci: SMC Calling Convention v1.1
May 16 00:48:10.741316 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 00:48:10.741324 kernel: ACPI: SRAT not present
May 16 00:48:10.741330 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 16 00:48:10.741337 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 16 00:48:10.741343 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 00:48:10.741349 kernel: Detected PIPT I-cache on CPU0
May 16 00:48:10.741355 kernel: CPU features: detected: GIC system register CPU interface
May 16 00:48:10.741361 kernel: CPU features: detected: Hardware dirty bit management
May 16 00:48:10.741367 kernel: CPU features: detected: Spectre-v4
May 16 00:48:10.741373 kernel: CPU features: detected: Spectre-BHB
May 16 00:48:10.741381 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 00:48:10.741387 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 00:48:10.741393 kernel: CPU features: detected: ARM erratum 1418040
May 16 00:48:10.741399 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 00:48:10.741406 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 16 00:48:10.741413 kernel: Policy zone: DMA
May 16 00:48:10.741421 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8
May 16 00:48:10.741430 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:48:10.741436 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:48:10.741442 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:48:10.741449 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:48:10.741456 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved)
May 16 00:48:10.741463 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:48:10.741469 kernel: trace event string verifier disabled
May 16 00:48:10.741475 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:48:10.741481 kernel: rcu: RCU event tracing is enabled.
May 16 00:48:10.741488 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:48:10.741494 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:48:10.741501 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:48:10.741507 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:48:10.741513 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:48:10.741519 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 00:48:10.741527 kernel: GICv3: 256 SPIs implemented
May 16 00:48:10.741533 kernel: GICv3: 0 Extended SPIs implemented
May 16 00:48:10.741539 kernel: GICv3: Distributor has no Range Selector support
May 16 00:48:10.741545 kernel: Root IRQ handler: gic_handle_irq
May 16 00:48:10.741551 kernel: GICv3: 16 PPIs implemented
May 16 00:48:10.741557 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 00:48:10.741563 kernel: ACPI: SRAT not present
May 16 00:48:10.741569 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 00:48:10.741575 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 16 00:48:10.741581 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 16 00:48:10.741588 kernel: GICv3: using LPI property table @0x00000000400d0000
May 16 00:48:10.741594 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 16 00:48:10.741601 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:48:10.741607 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 00:48:10.741614 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 00:48:10.741620 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 00:48:10.741626 kernel: arm-pv: using stolen time PV
May 16 00:48:10.741644 kernel: Console: colour dummy device 80x25
May 16 00:48:10.741650 kernel: ACPI: Core revision 20210730
May 16 00:48:10.741657 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 00:48:10.741663 kernel: pid_max: default: 32768 minimum: 301
May 16 00:48:10.741670 kernel: LSM: Security Framework initializing
May 16 00:48:10.741678 kernel: SELinux: Initializing.
May 16 00:48:10.741684 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:48:10.741691 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:48:10.741697 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 16 00:48:10.741704 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:48:10.741710 kernel: Platform MSI: ITS@0x8080000 domain created
May 16 00:48:10.741717 kernel: PCI/MSI: ITS@0x8080000 domain created
May 16 00:48:10.741723 kernel: Remapping and enabling EFI services.
May 16 00:48:10.741729 kernel: smp: Bringing up secondary CPUs ...
May 16 00:48:10.741737 kernel: Detected PIPT I-cache on CPU1
May 16 00:48:10.741743 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 00:48:10.741749 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 16 00:48:10.741756 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:48:10.741762 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 00:48:10.741769 kernel: Detected PIPT I-cache on CPU2
May 16 00:48:10.741775 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 00:48:10.741782 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 16 00:48:10.741788 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:48:10.741794 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 00:48:10.741802 kernel: Detected PIPT I-cache on CPU3
May 16 00:48:10.741808 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 00:48:10.741814 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 16 00:48:10.741821 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:48:10.741832 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 00:48:10.741839 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:48:10.741846 kernel: SMP: Total of 4 processors activated.
May 16 00:48:10.741853 kernel: CPU features: detected: 32-bit EL0 Support
May 16 00:48:10.741860 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 00:48:10.741866 kernel: CPU features: detected: Common not Private translations
May 16 00:48:10.741873 kernel: CPU features: detected: CRC32 instructions
May 16 00:48:10.741880 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 00:48:10.741888 kernel: CPU features: detected: LSE atomic instructions
May 16 00:48:10.741895 kernel: CPU features: detected: Privileged Access Never
May 16 00:48:10.741902 kernel: CPU features: detected: RAS Extension Support
May 16 00:48:10.741909 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 00:48:10.741915 kernel: CPU: All CPU(s) started at EL1
May 16 00:48:10.741923 kernel: alternatives: patching kernel code
May 16 00:48:10.741930 kernel: devtmpfs: initialized
May 16 00:48:10.741936 kernel: KASLR enabled
May 16 00:48:10.741943 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:48:10.741950 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:48:10.741958 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:48:10.741964 kernel: SMBIOS 3.0.0 present.
May 16 00:48:10.741971 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 16 00:48:10.741977 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:48:10.741985 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 00:48:10.741992 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 00:48:10.741999 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 00:48:10.742006 kernel: audit: initializing netlink subsys (disabled)
May 16 00:48:10.742013 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
May 16 00:48:10.742019 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:48:10.742026 kernel: cpuidle: using governor menu
May 16 00:48:10.742033 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 00:48:10.742039 kernel: ASID allocator initialised with 32768 entries
May 16 00:48:10.742047 kernel: ACPI: bus type PCI registered
May 16 00:48:10.742054 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:48:10.742061 kernel: Serial: AMBA PL011 UART driver
May 16 00:48:10.742067 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:48:10.742074 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 16 00:48:10.742081 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:48:10.742087 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 16 00:48:10.742094 kernel: cryptd: max_cpu_qlen set to 1000
May 16 00:48:10.742101 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 00:48:10.742115 kernel: ACPI: Added _OSI(Module Device)
May 16 00:48:10.742122 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:48:10.742129 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:48:10.742136 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:48:10.742142 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 16 00:48:10.742149 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 16 00:48:10.742155 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 16 00:48:10.742162 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:48:10.742169 kernel: ACPI: Interpreter enabled
May 16 00:48:10.742177 kernel: ACPI: Using GIC for interrupt routing
May 16 00:48:10.742184 kernel: ACPI: MCFG table detected, 1 entries
May 16 00:48:10.742191 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 00:48:10.742197 kernel: printk: console [ttyAMA0] enabled
May 16 00:48:10.742204 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:48:10.742330 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:48:10.742397 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 00:48:10.742461 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 00:48:10.742521 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 00:48:10.742581 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 00:48:10.742590 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 00:48:10.742597 kernel: PCI host bridge to bus 0000:00
May 16 00:48:10.742672 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 00:48:10.742730 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 00:48:10.742784 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 00:48:10.742841 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:48:10.742915 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 16 00:48:10.742991 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:48:10.743055 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 16 00:48:10.743127 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 16 00:48:10.743192 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:48:10.743275 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:48:10.743340 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 16 00:48:10.743401 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 16 00:48:10.743463 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 00:48:10.743522 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 00:48:10.743578 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 00:48:10.743587 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 00:48:10.743594 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 00:48:10.743603 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 00:48:10.743609 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 00:48:10.743616 kernel: iommu: Default domain type: Translated
May 16 00:48:10.743623 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 00:48:10.743637 kernel: vgaarb: loaded
May 16 00:48:10.743645 kernel: pps_core: LinuxPPS API ver. 1 registered
May 16 00:48:10.743652 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 16 00:48:10.743659 kernel: PTP clock support registered
May 16 00:48:10.743666 kernel: Registered efivars operations
May 16 00:48:10.743675 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 00:48:10.743681 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:48:10.743688 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:48:10.743695 kernel: pnp: PnP ACPI init
May 16 00:48:10.743768 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 00:48:10.743778 kernel: pnp: PnP ACPI: found 1 devices
May 16 00:48:10.743785 kernel: NET: Registered PF_INET protocol family
May 16 00:48:10.743792 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:48:10.743801 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:48:10.743808 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:48:10.743815 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:48:10.743821 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 16 00:48:10.743828 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:48:10.743835 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:48:10.743841 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:48:10.743848 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:48:10.743855 kernel: PCI: CLS 0 bytes, default 64
May 16 00:48:10.743863 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 16 00:48:10.743870 kernel: kvm [1]: HYP mode not available
May 16 00:48:10.743876 kernel: Initialise system trusted keyrings
May 16 00:48:10.743883 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:48:10.743890 kernel: Key type asymmetric registered
May 16 00:48:10.743896 kernel: Asymmetric key parser 'x509' registered
May 16 00:48:10.743903 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 00:48:10.743910 kernel: io scheduler mq-deadline registered
May 16 00:48:10.743916 kernel: io scheduler kyber registered
May 16 00:48:10.743924 kernel: io scheduler bfq registered
May 16 00:48:10.743931 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 00:48:10.743938 kernel: ACPI: button: Power Button [PWRB]
May 16 00:48:10.743945 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 00:48:10.744008 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 00:48:10.744017 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:48:10.744024 kernel: thunder_xcv, ver 1.0
May 16 00:48:10.744031 kernel: thunder_bgx, ver 1.0
May 16 00:48:10.744038 kernel: nicpf, ver 1.0
May 16 00:48:10.744046 kernel: nicvf, ver 1.0
May 16 00:48:10.744130 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 00:48:10.744193 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:48:10 UTC (1747356490)
May 16 00:48:10.744202 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 00:48:10.744209 kernel: NET: Registered PF_INET6 protocol family
May 16 00:48:10.744215 kernel: Segment Routing with IPv6
May 16 00:48:10.744222 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:48:10.744229 kernel: NET: Registered PF_PACKET protocol family
May 16 00:48:10.744238 kernel: Key type dns_resolver registered
May 16 00:48:10.744244 kernel: registered taskstats version 1
May 16 00:48:10.744251 kernel: Loading compiled-in X.509 certificates
May 16 00:48:10.744258 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 2793d535c1de6f1789b22ef06bd5666144f4eeb2'
May 16 00:48:10.744265 kernel: Key type .fscrypt registered
May 16 00:48:10.744271 kernel: Key type fscrypt-provisioning registered
May 16 00:48:10.744278 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 00:48:10.744285 kernel: ima: Allocated hash algorithm: sha1
May 16 00:48:10.744291 kernel: ima: No architecture policies found
May 16 00:48:10.744299 kernel: clk: Disabling unused clocks
May 16 00:48:10.744306 kernel: Freeing unused kernel memory: 36480K
May 16 00:48:10.744312 kernel: Run /init as init process
May 16 00:48:10.744319 kernel: with arguments:
May 16 00:48:10.744325 kernel: /init
May 16 00:48:10.744332 kernel: with environment:
May 16 00:48:10.744341 kernel: HOME=/
May 16 00:48:10.744348 kernel: TERM=linux
May 16 00:48:10.744355 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 00:48:10.744365 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 16 00:48:10.744374 systemd[1]: Detected virtualization kvm.
May 16 00:48:10.744382 systemd[1]: Detected architecture arm64.
May 16 00:48:10.744389 systemd[1]: Running in initrd.
May 16 00:48:10.744396 systemd[1]: No hostname configured, using default hostname.
May 16 00:48:10.744403 systemd[1]: Hostname set to .
May 16 00:48:10.744410 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:48:10.744419 systemd[1]: Queued start job for default target initrd.target.
May 16 00:48:10.744426 systemd[1]: Started systemd-ask-password-console.path.
May 16 00:48:10.744433 systemd[1]: Reached target cryptsetup.target.
May 16 00:48:10.744440 systemd[1]: Reached target paths.target.
May 16 00:48:10.744447 systemd[1]: Reached target slices.target.
May 16 00:48:10.744454 systemd[1]: Reached target swap.target.
May 16 00:48:10.744461 systemd[1]: Reached target timers.target.
May 16 00:48:10.744469 systemd[1]: Listening on iscsid.socket.
May 16 00:48:10.744477 systemd[1]: Listening on iscsiuio.socket.
May 16 00:48:10.744484 systemd[1]: Listening on systemd-journald-audit.socket.
May 16 00:48:10.744492 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 16 00:48:10.744499 systemd[1]: Listening on systemd-journald.socket.
May 16 00:48:10.744506 systemd[1]: Listening on systemd-networkd.socket.
May 16 00:48:10.744513 systemd[1]: Listening on systemd-udevd-control.socket.
May 16 00:48:10.744521 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 16 00:48:10.744528 systemd[1]: Reached target sockets.target.
May 16 00:48:10.744536 systemd[1]: Starting kmod-static-nodes.service...
May 16 00:48:10.744543 systemd[1]: Finished network-cleanup.service.
May 16 00:48:10.744551 systemd[1]: Starting systemd-fsck-usr.service...
May 16 00:48:10.744558 systemd[1]: Starting systemd-journald.service...
May 16 00:48:10.744565 systemd[1]: Starting systemd-modules-load.service...
May 16 00:48:10.744572 systemd[1]: Starting systemd-resolved.service...
May 16 00:48:10.744580 systemd[1]: Starting systemd-vconsole-setup.service...
May 16 00:48:10.744587 systemd[1]: Finished kmod-static-nodes.service.
May 16 00:48:10.744594 systemd[1]: Finished systemd-fsck-usr.service.
May 16 00:48:10.744602 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 16 00:48:10.744609 systemd[1]: Finished systemd-vconsole-setup.service.
May 16 00:48:10.744616 systemd[1]: Starting dracut-cmdline-ask.service...
May 16 00:48:10.744623 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 16 00:48:10.744640 systemd-journald[290]: Journal started
May 16 00:48:10.744684 systemd-journald[290]: Runtime Journal (/run/log/journal/142e800691b940458afd01801b6953c5) is 6.0M, max 48.7M, 42.6M free.
May 16 00:48:10.737160 systemd-modules-load[291]: Inserted module 'overlay'
May 16 00:48:10.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.751158 kernel: audit: type=1130 audit(1747356490.748:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.751176 systemd[1]: Started systemd-journald.service.
May 16 00:48:10.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.755141 kernel: audit: type=1130 audit(1747356490.752:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.763792 systemd[1]: Finished dracut-cmdline-ask.service.
May 16 00:48:10.769098 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 00:48:10.769133 kernel: audit: type=1130 audit(1747356490.764:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.765334 systemd[1]: Starting dracut-cmdline.service...
May 16 00:48:10.766321 systemd-resolved[292]: Positive Trust Anchors:
May 16 00:48:10.766328 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:48:10.766356 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 16 00:48:10.778183 kernel: Bridge firewalling registered
May 16 00:48:10.772910 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 16 00:48:10.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.772929 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 16 00:48:10.782640 kernel: audit: type=1130 audit(1747356490.778:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.774377 systemd[1]: Started systemd-resolved.service.
May 16 00:48:10.778908 systemd[1]: Reached target nss-lookup.target.
May 16 00:48:10.783850 dracut-cmdline[307]: dracut-dracut-053
May 16 00:48:10.786615 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8
May 16 00:48:10.792146 kernel: SCSI subsystem initialized
May 16 00:48:10.800407 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 00:48:10.800443 kernel: device-mapper: uevent: version 1.0.3
May 16 00:48:10.801460 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 16 00:48:10.803791 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 16 00:48:10.804641 systemd[1]: Finished systemd-modules-load.service.
May 16 00:48:10.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.806564 systemd[1]: Starting systemd-sysctl.service...
May 16 00:48:10.809148 kernel: audit: type=1130 audit(1747356490.805:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.816098 systemd[1]: Finished systemd-sysctl.service.
May 16 00:48:10.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.820143 kernel: audit: type=1130 audit(1747356490.816:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.855142 kernel: Loading iSCSI transport class v2.0-870.
May 16 00:48:10.871138 kernel: iscsi: registered transport (tcp)
May 16 00:48:10.890138 kernel: iscsi: registered transport (qla4xxx)
May 16 00:48:10.890160 kernel: QLogic iSCSI HBA Driver
May 16 00:48:10.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.924137 systemd[1]: Finished dracut-cmdline.service.
May 16 00:48:10.925743 systemd[1]: Starting dracut-pre-udev.service...
May 16 00:48:10.928891 kernel: audit: type=1130 audit(1747356490.924:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:10.970141 kernel: raid6: neonx8 gen() 13747 MB/s
May 16 00:48:10.987137 kernel: raid6: neonx8 xor() 10776 MB/s
May 16 00:48:11.004133 kernel: raid6: neonx4 gen() 13552 MB/s
May 16 00:48:11.021131 kernel: raid6: neonx4 xor() 11102 MB/s
May 16 00:48:11.038130 kernel: raid6: neonx2 gen() 12990 MB/s
May 16 00:48:11.055134 kernel: raid6: neonx2 xor() 10240 MB/s
May 16 00:48:11.072135 kernel: raid6: neonx1 gen() 10590 MB/s
May 16 00:48:11.089145 kernel: raid6: neonx1 xor() 8761 MB/s
May 16 00:48:11.106129 kernel: raid6: int64x8 gen() 6191 MB/s
May 16 00:48:11.123128 kernel: raid6: int64x8 xor() 3491 MB/s
May 16 00:48:11.140136 kernel: raid6: int64x4 gen() 7141 MB/s
May 16 00:48:11.157131 kernel: raid6: int64x4 xor() 3796 MB/s
May 16 00:48:11.174129 kernel: raid6: int64x2 gen() 6071 MB/s
May 16 00:48:11.191129 kernel: raid6: int64x2 xor() 3268 MB/s
May 16 00:48:11.208144 kernel: raid6: int64x1 gen() 4979 MB/s
May 16 00:48:11.225396 kernel: raid6: int64x1 xor() 2614 MB/s
May 16 00:48:11.225413 kernel: raid6: using algorithm neonx8 gen() 13747 MB/s
May 16 00:48:11.225422 kernel: raid6: .... xor() 10776 MB/s, rmw enabled
May 16 00:48:11.226579 kernel: raid6: using neon recovery algorithm
May 16 00:48:11.237556 kernel: xor: measuring software checksum speed
May 16 00:48:11.237579 kernel: 8regs : 17177 MB/sec
May 16 00:48:11.238262 kernel: 32regs : 19773 MB/sec
May 16 00:48:11.240237 kernel: arm64_neon : 27710 MB/sec
May 16 00:48:11.240249 kernel: xor: using function: arm64_neon (27710 MB/sec)
May 16 00:48:11.297145 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 16 00:48:11.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:11.308837 systemd[1]: Finished dracut-pre-udev.service.
May 16 00:48:11.313341 kernel: audit: type=1130 audit(1747356491.309:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:11.313361 kernel: audit: type=1334 audit(1747356491.311:10): prog-id=7 op=LOAD
May 16 00:48:11.311000 audit: BPF prog-id=7 op=LOAD
May 16 00:48:11.312000 audit: BPF prog-id=8 op=LOAD
May 16 00:48:11.313329 systemd[1]: Starting systemd-udevd.service...
May 16 00:48:11.325293 systemd-udevd[491]: Using default interface naming scheme 'v252'.
May 16 00:48:11.328644 systemd[1]: Started systemd-udevd.service.
May 16 00:48:11.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:11.330069 systemd[1]: Starting dracut-pre-trigger.service...
May 16 00:48:11.341765 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
May 16 00:48:11.371643 systemd[1]: Finished dracut-pre-trigger.service.
May 16 00:48:11.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:11.373035 systemd[1]: Starting systemd-udev-trigger.service...
May 16 00:48:11.407767 systemd[1]: Finished systemd-udev-trigger.service.
May 16 00:48:11.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:11.442141 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 00:48:11.447543 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 00:48:11.447572 kernel: GPT:9289727 != 19775487
May 16 00:48:11.447582 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 00:48:11.447591 kernel: GPT:9289727 != 19775487
May 16 00:48:11.447600 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 00:48:11.447608 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:48:11.459167 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (555)
May 16 00:48:11.459244 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 16 00:48:11.464420 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 16 00:48:11.472749 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 16 00:48:11.475356 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 16 00:48:11.476270 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 16 00:48:11.478410 systemd[1]: Starting disk-uuid.service...
May 16 00:48:11.484307 disk-uuid[563]: Primary Header is updated.
May 16 00:48:11.484307 disk-uuid[563]: Secondary Entries is updated.
May 16 00:48:11.484307 disk-uuid[563]: Secondary Header is updated.
May 16 00:48:11.488128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:48:12.499960 disk-uuid[564]: The operation has completed successfully.
May 16 00:48:12.500917 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:48:12.528061 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 00:48:12.529279 systemd[1]: Finished disk-uuid.service.
May 16 00:48:12.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.533358 systemd[1]: Starting verity-setup.service...
May 16 00:48:12.551147 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 16 00:48:12.587671 systemd[1]: Found device dev-mapper-usr.device.
May 16 00:48:12.590698 systemd[1]: Mounting sysusr-usr.mount...
May 16 00:48:12.591430 systemd[1]: Finished verity-setup.service.
May 16 00:48:12.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.646315 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 16 00:48:12.646568 systemd[1]: Mounted sysusr-usr.mount.
May 16 00:48:12.647248 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 16 00:48:12.647986 systemd[1]: Starting ignition-setup.service...
May 16 00:48:12.649921 systemd[1]: Starting parse-ip-for-networkd.service...
May 16 00:48:12.659799 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 00:48:12.659855 kernel: BTRFS info (device vda6): using free space tree
May 16 00:48:12.659866 kernel: BTRFS info (device vda6): has skinny extents
May 16 00:48:12.668972 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 16 00:48:12.674944 systemd[1]: Finished ignition-setup.service.
May 16 00:48:12.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.676441 systemd[1]: Starting ignition-fetch-offline.service...
May 16 00:48:12.745172 systemd[1]: Finished parse-ip-for-networkd.service.
May 16 00:48:12.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.746000 audit: BPF prog-id=9 op=LOAD
May 16 00:48:12.747548 systemd[1]: Starting systemd-networkd.service...
May 16 00:48:12.774394 systemd-networkd[740]: lo: Link UP
May 16 00:48:12.774407 systemd-networkd[740]: lo: Gained carrier
May 16 00:48:12.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.774790 systemd-networkd[740]: Enumeration completed
May 16 00:48:12.774977 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:48:12.775083 systemd[1]: Started systemd-networkd.service.
May 16 00:48:12.776091 systemd-networkd[740]: eth0: Link UP
May 16 00:48:12.776095 systemd-networkd[740]: eth0: Gained carrier
May 16 00:48:12.776565 systemd[1]: Reached target network.target.
May 16 00:48:12.778184 systemd[1]: Starting iscsiuio.service...
May 16 00:48:12.792373 systemd[1]: Started iscsiuio.service.
May 16 00:48:12.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.794186 systemd[1]: Starting iscsid.service...
May 16 00:48:12.798149 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 16 00:48:12.798149 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 16 00:48:12.798149 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 16 00:48:12.798149 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
May 16 00:48:12.798149 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 16 00:48:12.798149 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 16 00:48:12.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.805887 ignition[652]: Ignition 2.14.0
May 16 00:48:12.799236 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:48:12.805925 ignition[652]: Stage: fetch-offline
May 16 00:48:12.801499 systemd[1]: Started iscsid.service.
May 16 00:48:12.805968 ignition[652]: no configs at "/usr/lib/ignition/base.d"
May 16 00:48:12.804802 systemd[1]: Starting dracut-initqueue.service...
May 16 00:48:12.805977 ignition[652]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:48:12.806101 ignition[652]: parsed url from cmdline: ""
May 16 00:48:12.806105 ignition[652]: no config URL provided
May 16 00:48:12.806158 ignition[652]: reading system config file "/usr/lib/ignition/user.ign"
May 16 00:48:12.806167 ignition[652]: no config at "/usr/lib/ignition/user.ign"
May 16 00:48:12.806186 ignition[652]: op(1): [started] loading QEMU firmware config module
May 16 00:48:12.806190 ignition[652]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 00:48:12.817321 systemd[1]: Finished dracut-initqueue.service.
May 16 00:48:12.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.818596 systemd[1]: Reached target remote-fs-pre.target.
May 16 00:48:12.819697 systemd[1]: Reached target remote-cryptsetup.target.
May 16 00:48:12.819928 ignition[652]: op(1): [finished] loading QEMU firmware config module
May 16 00:48:12.820851 systemd[1]: Reached target remote-fs.target.
May 16 00:48:12.822739 systemd[1]: Starting dracut-pre-mount.service...
May 16 00:48:12.829683 ignition[652]: parsing config with SHA512: 539b42259159389430bba8859c56663c3384733d08bc1059d0375b4409ca2e4fef5144418109263180458ebb6da3ebffe1a763e915816b27286be18eee382131
May 16 00:48:12.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.831220 systemd[1]: Finished dracut-pre-mount.service.
May 16 00:48:12.836636 unknown[652]: fetched base config from "system"
May 16 00:48:12.836652 unknown[652]: fetched user config from "qemu"
May 16 00:48:12.836981 ignition[652]: fetch-offline: fetch-offline passed
May 16 00:48:12.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.838167 systemd[1]: Finished ignition-fetch-offline.service.
May 16 00:48:12.837047 ignition[652]: Ignition finished successfully
May 16 00:48:12.839500 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 00:48:12.840215 systemd[1]: Starting ignition-kargs.service...
May 16 00:48:12.849128 ignition[762]: Ignition 2.14.0
May 16 00:48:12.849136 ignition[762]: Stage: kargs
May 16 00:48:12.849227 ignition[762]: no configs at "/usr/lib/ignition/base.d"
May 16 00:48:12.849236 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:48:12.851446 systemd[1]: Finished ignition-kargs.service.
May 16 00:48:12.852000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.849955 ignition[762]: kargs: kargs passed
May 16 00:48:12.849998 ignition[762]: Ignition finished successfully
May 16 00:48:12.853508 systemd[1]: Starting ignition-disks.service...
May 16 00:48:12.859574 ignition[768]: Ignition 2.14.0
May 16 00:48:12.859582 ignition[768]: Stage: disks
May 16 00:48:12.859711 ignition[768]: no configs at "/usr/lib/ignition/base.d"
May 16 00:48:12.861390 systemd[1]: Finished ignition-disks.service.
May 16 00:48:12.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.859720 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:48:12.862755 systemd[1]: Reached target initrd-root-device.target.
May 16 00:48:12.860526 ignition[768]: disks: disks passed
May 16 00:48:12.863872 systemd[1]: Reached target local-fs-pre.target.
May 16 00:48:12.860567 ignition[768]: Ignition finished successfully
May 16 00:48:12.865323 systemd[1]: Reached target local-fs.target.
May 16 00:48:12.866532 systemd[1]: Reached target sysinit.target.
May 16 00:48:12.867545 systemd[1]: Reached target basic.target.
May 16 00:48:12.869467 systemd[1]: Starting systemd-fsck-root.service...
May 16 00:48:12.882256 systemd-fsck[776]: ROOT: clean, 619/553520 files, 56022/553472 blocks
May 16 00:48:12.886709 systemd[1]: Finished systemd-fsck-root.service.
May 16 00:48:12.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.888526 systemd[1]: Mounting sysroot.mount...
May 16 00:48:12.898741 systemd[1]: Mounted sysroot.mount.
May 16 00:48:12.899811 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 16 00:48:12.899429 systemd[1]: Reached target initrd-root-fs.target.
May 16 00:48:12.903131 systemd[1]: Mounting sysroot-usr.mount...
May 16 00:48:12.903871 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 16 00:48:12.903910 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 00:48:12.903933 systemd[1]: Reached target ignition-diskful.target.
May 16 00:48:12.905871 systemd[1]: Mounted sysroot-usr.mount.
May 16 00:48:12.907200 systemd[1]: Starting initrd-setup-root.service...
May 16 00:48:12.911593 initrd-setup-root[786]: cut: /sysroot/etc/passwd: No such file or directory
May 16 00:48:12.915923 initrd-setup-root[794]: cut: /sysroot/etc/group: No such file or directory
May 16 00:48:12.919981 initrd-setup-root[802]: cut: /sysroot/etc/shadow: No such file or directory
May 16 00:48:12.924242 initrd-setup-root[810]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 00:48:12.957906 systemd[1]: Finished initrd-setup-root.service.
May 16 00:48:12.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.959356 systemd[1]: Starting ignition-mount.service...
May 16 00:48:12.960549 systemd[1]: Starting sysroot-boot.service...
May 16 00:48:12.965082 bash[827]: umount: /sysroot/usr/share/oem: not mounted.
May 16 00:48:12.972912 ignition[828]: INFO : Ignition 2.14.0
May 16 00:48:12.972912 ignition[828]: INFO : Stage: mount
May 16 00:48:12.974316 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:48:12.974316 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:48:12.974316 ignition[828]: INFO : mount: mount passed
May 16 00:48:12.974316 ignition[828]: INFO : Ignition finished successfully
May 16 00:48:12.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:12.974800 systemd[1]: Finished ignition-mount.service.
May 16 00:48:12.984453 systemd[1]: Finished sysroot-boot.service.
May 16 00:48:12.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:13.602020 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 16 00:48:13.610145 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837)
May 16 00:48:13.612711 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 00:48:13.612746 kernel: BTRFS info (device vda6): using free space tree
May 16 00:48:13.612774 kernel: BTRFS info (device vda6): has skinny extents
May 16 00:48:13.615576 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 16 00:48:13.616993 systemd[1]: Starting ignition-files.service...
May 16 00:48:13.632169 ignition[857]: INFO : Ignition 2.14.0
May 16 00:48:13.632169 ignition[857]: INFO : Stage: files
May 16 00:48:13.633512 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:48:13.633512 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:48:13.633512 ignition[857]: DEBUG : files: compiled without relabeling support, skipping
May 16 00:48:13.641074 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 00:48:13.641074 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 00:48:13.646502 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 00:48:13.647604 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 00:48:13.648535 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 00:48:13.647929 unknown[857]: wrote ssh authorized keys file for user: core
May 16 00:48:13.650392 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 16 00:48:13.650392 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 16 00:48:13.650392 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 16 00:48:13.650392 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 16 00:48:13.650392 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:48:13.657267 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:48:13.657267 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 16 00:48:13.657267 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 16 00:48:13.657267 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 16 00:48:13.657267 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 16 00:48:13.941385 systemd-resolved[292]: Detected conflict on linux IN A 10.0.0.110
May 16 00:48:13.941399 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux' to 'linux9'.
May 16 00:48:14.166288 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
May 16 00:48:14.186301 systemd-networkd[740]: eth0: Gained IPv6LL
May 16 00:48:14.562746 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 16 00:48:14.564435 ignition[857]: INFO : files: op(8): [started] processing unit "containerd.service"
May 16 00:48:14.566971 ignition[857]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 16 00:48:14.568846 ignition[857]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 16 00:48:14.570546 ignition[857]: INFO : files: op(8): [finished] processing unit "containerd.service"
May 16 00:48:14.570546 ignition[857]: INFO : files: op(a): [started] processing unit "coreos-metadata.service"
May 16 00:48:14.570546 ignition[857]: INFO : files: op(a): op(b): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 00:48:14.570546 ignition[857]: INFO : files: op(a): op(b): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 00:48:14.570546 ignition[857]: INFO : files: op(a): [finished] processing unit "coreos-metadata.service"
May 16 00:48:14.570546 ignition[857]: INFO : files: op(c): [started] setting preset to disabled for "coreos-metadata.service"
May 16 00:48:14.570546 ignition[857]: INFO : files: op(c): op(d): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 00:48:14.621814 ignition[857]: INFO : files: op(c): op(d): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 00:48:14.624084 ignition[857]: INFO : files: op(c): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 00:48:14.624084 ignition[857]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:48:14.624084 ignition[857]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:48:14.624084 ignition[857]: INFO : files: files passed
May 16 00:48:14.624084 ignition[857]: INFO : Ignition finished successfully
May 16 00:48:14.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:14.624101 systemd[1]: Finished ignition-files.service.
May 16 00:48:14.626849 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 16 00:48:14.627960 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 16 00:48:14.628610 systemd[1]: Starting ignition-quench.service...
May 16 00:48:14.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:14.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:14.635904 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 16 00:48:14.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:14.633481 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 00:48:14.638954 initrd-setup-root-after-ignition[885]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:48:14.633571 systemd[1]: Finished ignition-quench.service.
May 16 00:48:14.635663 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 16 00:48:14.636689 systemd[1]: Reached target ignition-complete.target.
May 16 00:48:14.638967 systemd[1]: Starting initrd-parse-etc.service...
May 16 00:48:14.651549 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 00:48:14.651646 systemd[1]: Finished initrd-parse-etc.service.
May 16 00:48:14.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:14.652000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:14.653046 systemd[1]: Reached target initrd-fs.target.
May 16 00:48:14.653970 systemd[1]: Reached target initrd.target.
May 16 00:48:14.655190 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 16 00:48:14.655917 systemd[1]: Starting dracut-pre-pivot.service...
May 16 00:48:14.666254 systemd[1]: Finished dracut-pre-pivot.service.
May 16 00:48:14.666000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:48:14.667614 systemd[1]: Starting initrd-cleanup.service...
May 16 00:48:14.675772 systemd[1]: Stopped target nss-lookup.target.
May 16 00:48:14.676491 systemd[1]: Stopped target remote-cryptsetup.target.
May 16 00:48:14.677608 systemd[1]: Stopped target timers.target.
May 16 00:48:14.678637 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 00:48:14.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.678742 systemd[1]: Stopped dracut-pre-pivot.service. May 16 00:48:14.679738 systemd[1]: Stopped target initrd.target. May 16 00:48:14.680756 systemd[1]: Stopped target basic.target. May 16 00:48:14.681803 systemd[1]: Stopped target ignition-complete.target. May 16 00:48:14.682902 systemd[1]: Stopped target ignition-diskful.target. May 16 00:48:14.683910 systemd[1]: Stopped target initrd-root-device.target. May 16 00:48:14.685126 systemd[1]: Stopped target remote-fs.target. May 16 00:48:14.686308 systemd[1]: Stopped target remote-fs-pre.target. May 16 00:48:14.687527 systemd[1]: Stopped target sysinit.target. May 16 00:48:14.688653 systemd[1]: Stopped target local-fs.target. May 16 00:48:14.689849 systemd[1]: Stopped target local-fs-pre.target. May 16 00:48:14.690938 systemd[1]: Stopped target swap.target. May 16 00:48:14.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.691931 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:48:14.692061 systemd[1]: Stopped dracut-pre-mount.service. May 16 00:48:14.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.693252 systemd[1]: Stopped target cryptsetup.target. May 16 00:48:14.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.694325 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:48:14.694425 systemd[1]: Stopped dracut-initqueue.service. May 16 00:48:14.695713 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:48:14.695812 systemd[1]: Stopped ignition-fetch-offline.service. May 16 00:48:14.696894 systemd[1]: Stopped target paths.target. May 16 00:48:14.697785 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:48:14.702145 systemd[1]: Stopped systemd-ask-password-console.path. May 16 00:48:14.702912 systemd[1]: Stopped target slices.target. May 16 00:48:14.704061 systemd[1]: Stopped target sockets.target. May 16 00:48:14.705083 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:48:14.705185 systemd[1]: Closed iscsid.socket. May 16 00:48:14.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.706017 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:48:14.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.706127 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 16 00:48:14.707206 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:48:14.707297 systemd[1]: Stopped ignition-files.service. 
May 16 00:48:14.708972 systemd[1]: Stopping ignition-mount.service... May 16 00:48:14.710227 systemd[1]: Stopping iscsiuio.service... May 16 00:48:14.713800 systemd[1]: Stopping sysroot-boot.service... May 16 00:48:14.714475 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:48:14.714608 systemd[1]: Stopped systemd-udev-trigger.service. May 16 00:48:14.715728 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:48:14.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.715818 systemd[1]: Stopped dracut-pre-trigger.service. May 16 00:48:14.718575 systemd[1]: iscsiuio.service: Deactivated successfully. May 16 00:48:14.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.718692 systemd[1]: Stopped iscsiuio.service. May 16 00:48:14.719842 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:48:14.719905 systemd[1]: Closed iscsiuio.socket. May 16 00:48:14.722550 ignition[898]: INFO : Ignition 2.14.0 May 16 00:48:14.722550 ignition[898]: INFO : Stage: umount May 16 00:48:14.722550 ignition[898]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:48:14.722550 ignition[898]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:48:14.722550 ignition[898]: INFO : umount: umount passed May 16 00:48:14.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.723000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.722599 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:48:14.730000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.732218 ignition[898]: INFO : Ignition finished successfully May 16 00:48:14.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.722700 systemd[1]: Finished initrd-cleanup.service. May 16 00:48:14.724917 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:48:14.725355 systemd[1]: ignition-mount.service: Deactivated successfully. 
May 16 00:48:14.725441 systemd[1]: Stopped ignition-mount.service. May 16 00:48:14.726341 systemd[1]: Stopped target network.target. May 16 00:48:14.728300 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:48:14.728362 systemd[1]: Stopped ignition-disks.service. May 16 00:48:14.729777 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:48:14.729816 systemd[1]: Stopped ignition-kargs.service. May 16 00:48:14.730936 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:48:14.730973 systemd[1]: Stopped ignition-setup.service. May 16 00:48:14.733089 systemd[1]: Stopping systemd-networkd.service... May 16 00:48:14.734480 systemd[1]: Stopping systemd-resolved.service... May 16 00:48:14.746191 systemd-networkd[740]: eth0: DHCPv6 lease lost May 16 00:48:14.747965 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:48:14.748078 systemd[1]: Stopped systemd-resolved.service. May 16 00:48:14.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.750042 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:48:14.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.750158 systemd[1]: Stopped systemd-networkd.service. May 16 00:48:14.751149 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:48:14.751178 systemd[1]: Closed systemd-networkd.socket. May 16 00:48:14.754000 audit: BPF prog-id=6 op=UNLOAD May 16 00:48:14.754000 audit: BPF prog-id=9 op=UNLOAD May 16 00:48:14.752949 systemd[1]: Stopping network-cleanup.service... May 16 00:48:14.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.754405 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:48:14.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.754455 systemd[1]: Stopped parse-ip-for-networkd.service. May 16 00:48:14.755693 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:48:14.758000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.755728 systemd[1]: Stopped systemd-sysctl.service. May 16 00:48:14.757596 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:48:14.757643 systemd[1]: Stopped systemd-modules-load.service. May 16 00:48:14.760785 systemd[1]: Stopping systemd-udevd.service... May 16 00:48:14.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.762235 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:48:14.762743 systemd[1]: sysroot-boot.service: Deactivated successfully. 
May 16 00:48:14.765000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.762818 systemd[1]: Stopped sysroot-boot.service. May 16 00:48:14.764617 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:48:14.764678 systemd[1]: Stopped initrd-setup-root.service. May 16 00:48:14.767883 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:48:14.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.767985 systemd[1]: Stopped network-cleanup.service. May 16 00:48:14.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.769440 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:48:14.769557 systemd[1]: Stopped systemd-udevd.service. May 16 00:48:14.770699 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:48:14.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.770736 systemd[1]: Closed systemd-udevd-control.socket. May 16 00:48:14.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.771657 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:48:14.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.771691 systemd[1]: Closed systemd-udevd-kernel.socket. May 16 00:48:14.772881 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:48:14.772921 systemd[1]: Stopped dracut-pre-udev.service. May 16 00:48:14.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.773987 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:48:14.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.774023 systemd[1]: Stopped dracut-cmdline.service. May 16 00:48:14.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.775249 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:48:14.775282 systemd[1]: Stopped dracut-cmdline-ask.service. May 16 00:48:14.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:48:14.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:14.777045 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 16 00:48:14.778267 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:48:14.778319 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 16 00:48:14.780183 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:48:14.780224 systemd[1]: Stopped kmod-static-nodes.service. May 16 00:48:14.780938 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:48:14.780972 systemd[1]: Stopped systemd-vconsole-setup.service. May 16 00:48:14.783019 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 00:48:14.783444 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:48:14.792000 audit: BPF prog-id=5 op=UNLOAD May 16 00:48:14.792000 audit: BPF prog-id=4 op=UNLOAD May 16 00:48:14.792000 audit: BPF prog-id=3 op=UNLOAD May 16 00:48:14.783539 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 16 00:48:14.793000 audit: BPF prog-id=8 op=UNLOAD May 16 00:48:14.793000 audit: BPF prog-id=7 op=UNLOAD May 16 00:48:14.784520 systemd[1]: Reached target initrd-switch-root.target. May 16 00:48:14.786363 systemd[1]: Starting initrd-switch-root.service... May 16 00:48:14.792574 systemd[1]: Switching root. May 16 00:48:14.812671 iscsid[745]: iscsid shutting down. May 16 00:48:14.813362 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 16 00:48:14.813424 systemd-journald[290]: Journal stopped May 16 00:48:16.911240 kernel: SELinux: Class mctp_socket not defined in policy. May 16 00:48:16.911299 kernel: SELinux: Class anon_inode not defined in policy. May 16 00:48:16.911323 kernel: SELinux: the above unknown classes and permissions will be allowed May 16 00:48:16.911334 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:48:16.911344 kernel: SELinux: policy capability open_perms=1 May 16 00:48:16.911353 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:48:16.911364 kernel: SELinux: policy capability always_check_network=0 May 16 00:48:16.911374 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:48:16.911384 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:48:16.911393 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:48:16.911404 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:48:16.911415 systemd[1]: Successfully loaded SELinux policy in 36.246ms. May 16 00:48:16.911433 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.330ms. May 16 00:48:16.911444 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 16 00:48:16.911456 systemd[1]: Detected virtualization kvm. May 16 00:48:16.911467 systemd[1]: Detected architecture arm64. May 16 00:48:16.911477 systemd[1]: Detected first boot. May 16 00:48:16.911490 systemd[1]: Initializing machine ID from VM UUID. 
May 16 00:48:16.911500 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 16 00:48:16.911511 kernel: kauditd_printk_skb: 70 callbacks suppressed May 16 00:48:16.911523 kernel: audit: type=1400 audit(1747356495.096:81): avc: denied { associate } for pid=950 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 16 00:48:16.911535 kernel: audit: type=1300 audit(1747356495.096:81): arch=c00000b7 syscall=5 success=yes exit=0 a0=40001cf66c a1=4000152ae0 a2=4000158a00 a3=32 items=0 ppid=933 pid=950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:48:16.911546 kernel: audit: type=1327 audit(1747356495.096:81): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:48:16.911556 kernel: audit: type=1400 audit(1747356495.098:82): avc: denied { associate } for pid=950 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 16 00:48:16.911568 kernel: audit: type=1300 audit(1747356495.098:82): arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001cf745 a2=1ed a3=0 items=2 ppid=933 pid=950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:48:16.911586 kernel: audit: type=1307 audit(1747356495.098:82): cwd="/" May 16 00:48:16.911597 kernel: audit: type=1302 audit(1747356495.098:82): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:48:16.911607 kernel: audit: type=1302 audit(1747356495.098:82): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:48:16.911622 kernel: audit: type=1327 audit(1747356495.098:82): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:48:16.911644 systemd[1]: Populated /etc with preset unit settings. May 16 00:48:16.911658 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:48:16.911672 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
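The locksmithd.service warnings above are systemd deprecation notices: CPUShares= and MemoryLimit= are legacy cgroup v1 directives, and systemd suggests CPUWeight= and MemoryMax= instead. Because the shipped unit lives under the read-only /usr tree, the usual way to apply the rename is a drop-in; the path and values in this sketch are illustrative only, not derived from the actual unit:

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf  (hypothetical drop-in)
    [Service]
    CPUWeight=100
    MemoryMax=128M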
May 16 00:48:16.911684 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:48:16.911696 systemd[1]: Queued start job for default target multi-user.target. May 16 00:48:16.911706 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 16 00:48:16.911717 systemd[1]: Created slice system-addon\x2dconfig.slice. May 16 00:48:16.911727 systemd[1]: Created slice system-addon\x2drun.slice. May 16 00:48:16.911737 systemd[1]: Created slice system-getty.slice. May 16 00:48:16.911748 systemd[1]: Created slice system-modprobe.slice. May 16 00:48:16.911759 systemd[1]: Created slice system-serial\x2dgetty.slice. May 16 00:48:16.911770 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 16 00:48:16.911782 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 16 00:48:16.911793 systemd[1]: Created slice user.slice. May 16 00:48:16.911804 systemd[1]: Started systemd-ask-password-console.path. May 16 00:48:16.911815 systemd[1]: Started systemd-ask-password-wall.path. May 16 00:48:16.911825 systemd[1]: Set up automount boot.automount. May 16 00:48:16.911837 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 16 00:48:16.911848 systemd[1]: Reached target integritysetup.target. May 16 00:48:16.911859 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:48:16.911869 systemd[1]: Reached target remote-fs.target. May 16 00:48:16.911880 systemd[1]: Reached target slices.target. May 16 00:48:16.911890 systemd[1]: Reached target swap.target. May 16 00:48:16.911901 systemd[1]: Reached target torcx.target. May 16 00:48:16.911912 systemd[1]: Reached target veritysetup.target. May 16 00:48:16.911922 systemd[1]: Listening on systemd-coredump.socket. May 16 00:48:16.911935 systemd[1]: Listening on systemd-initctl.socket. May 16 00:48:16.911950 kernel: audit: type=1400 audit(1747356496.816:83): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:48:16.911962 systemd[1]: Listening on systemd-journald-audit.socket. May 16 00:48:16.912529 systemd[1]: Listening on systemd-journald-dev-log.socket. May 16 00:48:16.912554 systemd[1]: Listening on systemd-journald.socket. May 16 00:48:16.912567 systemd[1]: Listening on systemd-networkd.socket. May 16 00:48:16.912578 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:48:16.912589 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:48:16.912604 systemd[1]: Listening on systemd-userdbd.socket. May 16 00:48:16.912615 systemd[1]: Mounting dev-hugepages.mount... May 16 00:48:16.912626 systemd[1]: Mounting dev-mqueue.mount... May 16 00:48:16.912650 systemd[1]: Mounting media.mount... May 16 00:48:16.912662 systemd[1]: Mounting sys-kernel-debug.mount... May 16 00:48:16.912673 systemd[1]: Mounting sys-kernel-tracing.mount... May 16 00:48:16.912684 systemd[1]: Mounting tmp.mount... May 16 00:48:16.912694 systemd[1]: Starting flatcar-tmpfiles.service... May 16 00:48:16.912706 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:48:16.912718 systemd[1]: Starting kmod-static-nodes.service... May 16 00:48:16.912730 systemd[1]: Starting modprobe@configfs.service... May 16 00:48:16.912741 systemd[1]: Starting modprobe@dm_mod.service... 
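The docker.socket message near the start of this stretch notes that the unit still references the legacy /var/run/docker.sock path and that systemd rewrites it to /run/docker.sock at load time. A minimal sketch of the suggested fix, showing only the relevant [Socket] line with the rest of the unit omitted:

    # /run/systemd/system/docker.socket  (relevant directive only)
    [Socket]
    ListenStream=/run/docker.sock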
May 16 00:48:16.912751 systemd[1]: Starting modprobe@drm.service... May 16 00:48:16.912762 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:48:16.912773 systemd[1]: Starting modprobe@fuse.service... May 16 00:48:16.912783 systemd[1]: Starting modprobe@loop.service... May 16 00:48:16.912804 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:48:16.912817 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 16 00:48:16.912833 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) May 16 00:48:16.912844 systemd[1]: Starting systemd-journald.service... May 16 00:48:16.912855 kernel: fuse: init (API version 7.34) May 16 00:48:16.912866 systemd[1]: Starting systemd-modules-load.service... May 16 00:48:16.912877 systemd[1]: Starting systemd-network-generator.service... May 16 00:48:16.912888 systemd[1]: Starting systemd-remount-fs.service... May 16 00:48:16.912898 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:48:16.912910 kernel: loop: module loaded May 16 00:48:16.912920 systemd[1]: Mounted dev-hugepages.mount. May 16 00:48:16.912933 systemd[1]: Mounted dev-mqueue.mount. May 16 00:48:16.912944 systemd[1]: Mounted media.mount. May 16 00:48:16.912954 systemd[1]: Mounted sys-kernel-debug.mount. May 16 00:48:16.912965 systemd[1]: Mounted sys-kernel-tracing.mount. May 16 00:48:16.912976 systemd[1]: Mounted tmp.mount. May 16 00:48:16.912986 systemd[1]: Finished kmod-static-nodes.service. May 16 00:48:16.912997 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:48:16.913011 systemd-journald[1030]: Journal started May 16 00:48:16.913064 systemd-journald[1030]: Runtime Journal (/run/log/journal/142e800691b940458afd01801b6953c5) is 6.0M, max 48.7M, 42.6M free. May 16 00:48:16.816000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:48:16.816000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 May 16 00:48:16.909000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 16 00:48:16.909000 audit[1030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffc41e03d0 a2=4000 a3=1 items=0 ppid=1 pid=1030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:48:16.909000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 16 00:48:16.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.914878 systemd[1]: Finished modprobe@configfs.service. May 16 00:48:16.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:48:16.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.917489 systemd[1]: Started systemd-journald.service. May 16 00:48:16.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.918160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:48:16.918363 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:48:16.919587 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:48:16.919879 systemd[1]: Finished modprobe@drm.service. May 16 00:48:16.920000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.920000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.920871 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:48:16.921097 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:48:16.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.921000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.922205 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:48:16.922415 systemd[1]: Finished modprobe@fuse.service. May 16 00:48:16.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.923541 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:48:16.924000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:48:16.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.923793 systemd[1]: Finished modprobe@loop.service. May 16 00:48:16.924900 systemd[1]: Finished systemd-modules-load.service. May 16 00:48:16.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.926187 systemd[1]: Finished systemd-network-generator.service. May 16 00:48:16.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.927339 systemd[1]: Finished systemd-remount-fs.service. May 16 00:48:16.927000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.928545 systemd[1]: Reached target network-pre.target. May 16 00:48:16.930522 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 16 00:48:16.932582 systemd[1]: Mounting sys-kernel-config.mount... May 16 00:48:16.933499 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:48:16.935754 systemd[1]: Starting systemd-hwdb-update.service... May 16 00:48:16.937945 systemd[1]: Starting systemd-journal-flush.service... May 16 00:48:16.938738 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:48:16.940073 systemd[1]: Starting systemd-random-seed.service... May 16 00:48:16.941015 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:48:16.942565 systemd[1]: Starting systemd-sysctl.service... May 16 00:48:16.945269 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 16 00:48:16.946209 systemd[1]: Mounted sys-kernel-config.mount. May 16 00:48:16.951302 systemd[1]: Finished systemd-random-seed.service. May 16 00:48:16.951000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.952238 systemd[1]: Reached target first-boot-complete.target. May 16 00:48:16.956102 systemd-journald[1030]: Time spent on flushing to /var/log/journal/142e800691b940458afd01801b6953c5 is 12.199ms for 921 entries. May 16 00:48:16.956102 systemd-journald[1030]: System Journal (/var/log/journal/142e800691b940458afd01801b6953c5) is 8.0M, max 195.6M, 187.6M free. May 16 00:48:16.983333 systemd-journald[1030]: Received client request to flush runtime journal. May 16 00:48:16.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:48:16.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.957345 systemd[1]: Finished flatcar-tmpfiles.service. May 16 00:48:16.959770 systemd[1]: Starting systemd-sysusers.service... May 16 00:48:16.979642 systemd[1]: Finished systemd-udev-trigger.service. May 16 00:48:16.981895 systemd[1]: Starting systemd-udev-settle.service... May 16 00:48:16.982938 systemd[1]: Finished systemd-sysctl.service. May 16 00:48:16.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.990099 systemd[1]: Finished systemd-journal-flush.service. May 16 00:48:16.990000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.992474 udevadm[1082]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 16 00:48:16.994928 systemd[1]: Finished systemd-sysusers.service. May 16 00:48:16.995000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:16.997370 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 16 00:48:17.017283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 16 00:48:17.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.368774 systemd[1]: Finished systemd-hwdb-update.service. May 16 00:48:17.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.370985 systemd[1]: Starting systemd-udevd.service... May 16 00:48:17.400932 systemd-udevd[1090]: Using default interface naming scheme 'v252'. May 16 00:48:17.427058 systemd[1]: Started systemd-udevd.service. May 16 00:48:17.427000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.429443 systemd[1]: Starting systemd-networkd.service... May 16 00:48:17.439340 systemd[1]: Starting systemd-userdbd.service... May 16 00:48:17.463192 systemd[1]: Found device dev-ttyAMA0.device. May 16 00:48:17.481887 systemd[1]: Started systemd-userdbd.service. May 16 00:48:17.482000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.522348 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:48:17.546661 systemd[1]: Finished systemd-udev-settle.service. 
May 16 00:48:17.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.549032 systemd[1]: Starting lvm2-activation-early.service... May 16 00:48:17.559669 systemd-networkd[1097]: lo: Link UP May 16 00:48:17.559681 systemd-networkd[1097]: lo: Gained carrier May 16 00:48:17.560062 systemd-networkd[1097]: Enumeration completed May 16 00:48:17.560258 systemd[1]: Started systemd-networkd.service. May 16 00:48:17.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.561652 systemd-networkd[1097]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:48:17.562851 systemd-networkd[1097]: eth0: Link UP May 16 00:48:17.562862 systemd-networkd[1097]: eth0: Gained carrier May 16 00:48:17.563577 lvm[1124]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:48:17.586279 systemd-networkd[1097]: eth0: DHCPv4 address 10.0.0.110/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:48:17.593169 systemd[1]: Finished lvm2-activation-early.service. May 16 00:48:17.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.594065 systemd[1]: Reached target cryptsetup.target. May 16 00:48:17.596153 systemd[1]: Starting lvm2-activation.service... May 16 00:48:17.600112 lvm[1126]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:48:17.624229 systemd[1]: Finished lvm2-activation.service. May 16 00:48:17.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.625137 systemd[1]: Reached target local-fs-pre.target. May 16 00:48:17.625888 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:48:17.625928 systemd[1]: Reached target local-fs.target. May 16 00:48:17.626657 systemd[1]: Reached target machines.target. May 16 00:48:17.628832 systemd[1]: Starting ldconfig.service... May 16 00:48:17.629921 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:48:17.629993 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:48:17.631458 systemd[1]: Starting systemd-boot-update.service... May 16 00:48:17.633663 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 16 00:48:17.636202 systemd[1]: Starting systemd-machine-id-commit.service... May 16 00:48:17.638953 systemd[1]: Starting systemd-sysext.service... May 16 00:48:17.640232 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1129 (bootctl) May 16 00:48:17.641828 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 16 00:48:17.645432 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
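For context on the eth0 setup logged above (systemd-networkd matching /usr/lib/systemd/network/zz-default.network and then acquiring 10.0.0.110/16 from 10.0.0.1 via DHCPv4): a minimal .network file that produces this behavior looks like the sketch below. The file name and match pattern are assumptions for illustration; Flatcar's shipped zz-default.network may differ in detail:

    # /etc/systemd/network/50-dhcp.network  (hypothetical example, not the shipped file)
    [Match]
    Name=eth0

    [Network]
    DHCP=yes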
May 16 00:48:17.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.651027 systemd[1]: Unmounting usr-share-oem.mount... May 16 00:48:17.655902 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 16 00:48:17.656225 systemd[1]: Unmounted usr-share-oem.mount. May 16 00:48:17.675138 kernel: loop0: detected capacity change from 0 to 203944 May 16 00:48:17.716885 systemd[1]: Finished systemd-machine-id-commit.service. May 16 00:48:17.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.726721 systemd-fsck[1141]: fsck.fat 4.2 (2021-01-31) May 16 00:48:17.726721 systemd-fsck[1141]: /dev/vda1: 236 files, 117310/258078 clusters May 16 00:48:17.728148 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:48:17.729037 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 16 00:48:17.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.749141 kernel: loop1: detected capacity change from 0 to 203944 May 16 00:48:17.756243 (sd-sysext)[1147]: Using extensions 'kubernetes'. May 16 00:48:17.757262 (sd-sysext)[1147]: Merged extensions into '/usr'. May 16 00:48:17.775790 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:48:17.777343 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:48:17.779320 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:48:17.781520 systemd[1]: Starting modprobe@loop.service... May 16 00:48:17.782479 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:48:17.782662 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:48:17.783474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:48:17.783702 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:48:17.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.785073 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:48:17.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:48:17.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.785269 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:48:17.786653 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:48:17.789059 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:48:17.789263 systemd[1]: Finished modprobe@loop.service. May 16 00:48:17.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.790448 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:48:17.833254 ldconfig[1128]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 00:48:17.839624 systemd[1]: Finished ldconfig.service. May 16 00:48:17.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.901043 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 00:48:17.903238 systemd[1]: Mounting boot.mount... May 16 00:48:17.905545 systemd[1]: Mounting usr-share-oem.mount... May 16 00:48:17.913221 systemd[1]: Mounted boot.mount. May 16 00:48:17.914170 systemd[1]: Mounted usr-share-oem.mount. May 16 00:48:17.916377 systemd[1]: Finished systemd-sysext.service. May 16 00:48:17.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.918869 systemd[1]: Starting ensure-sysext.service... May 16 00:48:17.921277 systemd[1]: Starting systemd-tmpfiles-setup.service... May 16 00:48:17.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:17.922836 systemd[1]: Finished systemd-boot-update.service. May 16 00:48:17.927677 systemd[1]: Reloading. May 16 00:48:17.932668 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 16 00:48:17.933438 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:48:17.934883 systemd-tmpfiles[1165]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
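The sysext activity above (loop0/loop1 detection, "(sd-sysext): Using extensions 'kubernetes'", "Merged extensions into '/usr'") comes from the kubernetes-v1.31.8-arm64.raw image that Ignition downloaded and linked under /etc/extensions earlier in the log. For reference, systemd-sysext only merges an image whose embedded extension-release file matches the host OS; a minimal sketch of such a file inside the image, with values assumed to match this Flatcar host, is:

    # /usr/lib/extension-release.d/extension-release.kubernetes  (inside the sysext image; values illustrative)
    ID=flatcar
    SYSEXT_LEVEL=1.0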
May 16 00:48:17.970317 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-05-16T00:48:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:48:17.970781 /usr/lib/systemd/system-generators/torcx-generator[1187]: time="2025-05-16T00:48:17Z" level=info msg="torcx already run" May 16 00:48:18.042504 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:48:18.042527 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:48:18.059946 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:48:18.101480 systemd[1]: Finished systemd-tmpfiles-setup.service. May 16 00:48:18.102000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.106133 systemd[1]: Starting audit-rules.service... May 16 00:48:18.108247 systemd[1]: Starting clean-ca-certificates.service... May 16 00:48:18.110539 systemd[1]: Starting systemd-journal-catalog-update.service... May 16 00:48:18.113277 systemd[1]: Starting systemd-resolved.service... May 16 00:48:18.115792 systemd[1]: Starting systemd-timesyncd.service... May 16 00:48:18.118010 systemd[1]: Starting systemd-update-utmp.service... May 16 00:48:18.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.119786 systemd[1]: Finished clean-ca-certificates.service. May 16 00:48:18.122667 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:48:18.126987 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:48:18.128733 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:48:18.130960 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:48:18.133165 systemd[1]: Starting modprobe@loop.service... May 16 00:48:18.133840 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:48:18.133997 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:48:18.134132 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:48:18.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:48:18.137000 audit[1240]: SYSTEM_BOOT pid=1240 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 16 00:48:18.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.135162 systemd[1]: Finished systemd-journal-catalog-update.service. May 16 00:48:18.136451 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:48:18.136622 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:48:18.138055 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:48:18.140051 systemd[1]: Finished modprobe@loop.service. May 16 00:48:18.144613 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:48:18.146384 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:48:18.148483 systemd[1]: Starting modprobe@loop.service... May 16 00:48:18.149141 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:48:18.149323 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:48:18.151197 systemd[1]: Starting systemd-update-done.service... May 16 00:48:18.151860 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:48:18.153218 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:48:18.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.153430 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:48:18.154729 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:48:18.154909 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:48:18.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:48:18.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.156209 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:48:18.156399 systemd[1]: Finished modprobe@loop.service. May 16 00:48:18.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.159000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.157664 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:48:18.157805 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:48:18.159100 systemd[1]: Finished systemd-update-utmp.service. May 16 00:48:18.164442 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:48:18.165930 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:48:18.168179 systemd[1]: Starting modprobe@drm.service... May 16 00:48:18.170315 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:48:18.172598 systemd[1]: Starting modprobe@loop.service... May 16 00:48:18.173370 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:48:18.173511 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:48:18.175443 systemd[1]: Starting systemd-networkd-wait-online.service... May 16 00:48:18.176414 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:48:18.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.178107 systemd[1]: Finished systemd-update-done.service. May 16 00:48:18.179373 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:48:18.179543 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:48:18.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.182075 systemd[1]: modprobe@drm.service: Deactivated successfully. 
May 16 00:48:18.182570 systemd[1]: Finished modprobe@drm.service. May 16 00:48:18.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.184000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.185325 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:48:18.185490 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:48:18.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.186000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.186977 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:48:18.187356 systemd[1]: Finished modprobe@loop.service. May 16 00:48:18.188000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.188775 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:48:18.188873 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:48:18.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:48:18.190869 systemd[1]: Finished ensure-sysext.service. May 16 00:48:18.216000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 16 00:48:18.216000 audit[1279]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe920c540 a2=420 a3=0 items=0 ppid=1231 pid=1279 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:48:18.216000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 16 00:48:18.216998 augenrules[1279]: No rules May 16 00:48:18.217959 systemd[1]: Finished audit-rules.service. May 16 00:48:18.220383 systemd-resolved[1235]: Positive Trust Anchors: May 16 00:48:18.220396 systemd-resolved[1235]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:48:18.220423 systemd-resolved[1235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 16 00:48:18.221341 systemd[1]: Started systemd-timesyncd.service. May 16 00:48:18.222284 systemd-timesyncd[1237]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 00:48:18.222658 systemd-timesyncd[1237]: Initial clock synchronization to Fri 2025-05-16 00:48:18.037083 UTC. May 16 00:48:18.222781 systemd[1]: Reached target time-set.target. May 16 00:48:18.233466 systemd-resolved[1235]: Defaulting to hostname 'linux'. May 16 00:48:18.235014 systemd[1]: Started systemd-resolved.service. May 16 00:48:18.235802 systemd[1]: Reached target network.target. May 16 00:48:18.236426 systemd[1]: Reached target nss-lookup.target. May 16 00:48:18.237028 systemd[1]: Reached target sysinit.target. May 16 00:48:18.237718 systemd[1]: Started motdgen.path. May 16 00:48:18.238356 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 16 00:48:18.239350 systemd[1]: Started logrotate.timer. May 16 00:48:18.240017 systemd[1]: Started mdadm.timer. May 16 00:48:18.240602 systemd[1]: Started systemd-tmpfiles-clean.timer. May 16 00:48:18.241232 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:48:18.241266 systemd[1]: Reached target paths.target. May 16 00:48:18.241823 systemd[1]: Reached target timers.target. May 16 00:48:18.242841 systemd[1]: Listening on dbus.socket. May 16 00:48:18.244810 systemd[1]: Starting docker.socket... May 16 00:48:18.246649 systemd[1]: Listening on sshd.socket. May 16 00:48:18.247459 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:48:18.247869 systemd[1]: Listening on docker.socket. May 16 00:48:18.248560 systemd[1]: Reached target sockets.target. May 16 00:48:18.249178 systemd[1]: Reached target basic.target. May 16 00:48:18.249892 systemd[1]: System is tainted: cgroupsv1 May 16 00:48:18.249955 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:48:18.249979 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:48:18.251332 systemd[1]: Starting containerd.service... May 16 00:48:18.253504 systemd[1]: Starting dbus.service... May 16 00:48:18.255516 systemd[1]: Starting enable-oem-cloudinit.service... May 16 00:48:18.257679 systemd[1]: Starting extend-filesystems.service... May 16 00:48:18.258462 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 16 00:48:18.260351 systemd[1]: Starting motdgen.service... May 16 00:48:18.262274 systemd[1]: Starting ssh-key-proc-cmdline.service... May 16 00:48:18.264472 systemd[1]: Starting sshd-keygen.service... 
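systemd-resolved's positive trust anchor above is the DNSSEC DS record for the root zone (". IN DS 20326 8 2 ..."), and its negative trust anchors cover the RFC 1918 reverse zones plus a handful of local-use names such as home.arpa and local. A small hedged sketch of how a private IPv4 address maps onto one of those in-addr.arpa zones; under_negative_anchor is an illustrative helper, not resolved's matching logic.

```python
# Hedged sketch: derive the in-addr.arpa zone suffixes for an IPv4 address and
# test them against the private-range negative trust anchors listed above.
# Illustration only; systemd-resolved performs its own matching internally.
NEGATIVE_ANCHORS = {
    "10.in-addr.arpa", "168.192.in-addr.arpa",
    *{f"{n}.172.in-addr.arpa" for n in range(16, 32)},
}

def reverse_zones(ip: str):
    octets = ip.split(".")
    # e.g. 10.0.0.1 -> 10.in-addr.arpa, 0.10.in-addr.arpa,
    #                  0.0.10.in-addr.arpa, 1.0.0.10.in-addr.arpa
    for i in range(1, len(octets) + 1):
        yield ".".join(reversed(octets[:i])) + ".in-addr.arpa"

def under_negative_anchor(ip: str) -> bool:
    return any(zone in NEGATIVE_ANCHORS for zone in reverse_zones(ip))

print(under_negative_anchor("10.0.0.1"))      # True: covered by 10.in-addr.arpa
print(under_negative_anchor("192.168.1.10"))  # True: 168.192.in-addr.arpa
print(under_negative_anchor("8.8.8.8"))       # False: public address space
```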
May 16 00:48:18.272543 jq[1291]: false May 16 00:48:18.267820 systemd[1]: Starting systemd-logind.service... May 16 00:48:18.270254 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:48:18.270342 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:48:18.271876 systemd[1]: Starting update-engine.service... May 16 00:48:18.273933 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 16 00:48:18.277240 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:48:18.278189 jq[1308]: true May 16 00:48:18.277552 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 16 00:48:18.278063 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:48:18.278306 systemd[1]: Finished ssh-key-proc-cmdline.service. May 16 00:48:18.287959 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:48:18.288354 jq[1311]: true May 16 00:48:18.296903 systemd[1]: Finished motdgen.service. May 16 00:48:18.301891 extend-filesystems[1292]: Found loop1 May 16 00:48:18.301891 extend-filesystems[1292]: Found vda May 16 00:48:18.310223 extend-filesystems[1292]: Found vda1 May 16 00:48:18.310223 extend-filesystems[1292]: Found vda2 May 16 00:48:18.310223 extend-filesystems[1292]: Found vda3 May 16 00:48:18.310223 extend-filesystems[1292]: Found usr May 16 00:48:18.310223 extend-filesystems[1292]: Found vda4 May 16 00:48:18.310223 extend-filesystems[1292]: Found vda6 May 16 00:48:18.310223 extend-filesystems[1292]: Found vda7 May 16 00:48:18.310223 extend-filesystems[1292]: Found vda9 May 16 00:48:18.310223 extend-filesystems[1292]: Checking size of /dev/vda9 May 16 00:48:18.321748 dbus-daemon[1290]: [system] SELinux support is enabled May 16 00:48:18.321987 systemd[1]: Started dbus.service. May 16 00:48:18.324600 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:48:18.324650 systemd[1]: Reached target system-config.target. May 16 00:48:18.325476 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:48:18.325501 systemd[1]: Reached target user-config.target. May 16 00:48:18.335350 extend-filesystems[1292]: Resized partition /dev/vda9 May 16 00:48:18.355522 extend-filesystems[1340]: resize2fs 1.46.5 (30-Dec-2021) May 16 00:48:18.368169 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 00:48:18.387300 update_engine[1307]: I0516 00:48:18.387033 1307 main.cc:92] Flatcar Update Engine starting May 16 00:48:18.391847 systemd[1]: Started update-engine.service. May 16 00:48:18.392052 update_engine[1307]: I0516 00:48:18.392025 1307 update_check_scheduler.cc:74] Next update check in 11m13s May 16 00:48:18.394768 systemd[1]: Started locksmithd.service. May 16 00:48:18.400494 systemd-logind[1300]: Watching system buttons on /dev/input/event0 (Power Button) May 16 00:48:18.400954 systemd-logind[1300]: New seat seat0. May 16 00:48:18.415120 systemd[1]: Started systemd-logind.service. 
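The online resize started here grows /dev/vda9 from 553472 to 1864699 blocks of 4 KiB each (per the extend-filesystems output that follows). A quick sanity check of those numbers in plain arithmetic:

```python
# Size of /dev/vda9 before and after the online ext4 resize logged here,
# using the 4 KiB block size reported by resize2fs.
BLOCK = 4096
before, after = 553_472, 1_864_699
print(f"before: {before * BLOCK / 2**30:.2f} GiB")  # ~2.11 GiB
print(f"after:  {after * BLOCK / 2**30:.2f} GiB")   # ~7.11 GiB
```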
May 16 00:48:18.417143 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 00:48:18.432983 extend-filesystems[1340]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 00:48:18.432983 extend-filesystems[1340]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:48:18.432983 extend-filesystems[1340]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 00:48:18.436539 extend-filesystems[1292]: Resized filesystem in /dev/vda9 May 16 00:48:18.437294 bash[1337]: Updated "/home/core/.ssh/authorized_keys" May 16 00:48:18.436874 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:48:18.437165 systemd[1]: Finished extend-filesystems.service. May 16 00:48:18.439414 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 16 00:48:18.450833 env[1315]: time="2025-05-16T00:48:18.449255160Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 16 00:48:18.469509 env[1315]: time="2025-05-16T00:48:18.469350680Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:48:18.469660 env[1315]: time="2025-05-16T00:48:18.469547360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:48:18.471089 env[1315]: time="2025-05-16T00:48:18.471042080Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:48:18.471089 env[1315]: time="2025-05-16T00:48:18.471085280Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:48:18.471612 env[1315]: time="2025-05-16T00:48:18.471584880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:48:18.471612 env[1315]: time="2025-05-16T00:48:18.471612920Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 00:48:18.471719 env[1315]: time="2025-05-16T00:48:18.471636120Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 16 00:48:18.471719 env[1315]: time="2025-05-16T00:48:18.471649240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:48:18.471915 env[1315]: time="2025-05-16T00:48:18.471878840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 16 00:48:18.472345 env[1315]: time="2025-05-16T00:48:18.472319560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:48:18.472557 env[1315]: time="2025-05-16T00:48:18.472531640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:48:18.472848 env[1315]: time="2025-05-16T00:48:18.472743400Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:48:18.472848 env[1315]: time="2025-05-16T00:48:18.472829040Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 16 00:48:18.472848 env[1315]: time="2025-05-16T00:48:18.472843360Z" level=info msg="metadata content store policy set" policy=shared May 16 00:48:18.476383 env[1315]: time="2025-05-16T00:48:18.476060720Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:48:18.476383 env[1315]: time="2025-05-16T00:48:18.476106360Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:48:18.476383 env[1315]: time="2025-05-16T00:48:18.476149120Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:48:18.476383 env[1315]: time="2025-05-16T00:48:18.476187680Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:48:18.476383 env[1315]: time="2025-05-16T00:48:18.476205840Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:48:18.476383 env[1315]: time="2025-05-16T00:48:18.476220400Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:48:18.476383 env[1315]: time="2025-05-16T00:48:18.476233680Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 00:48:18.476695 env[1315]: time="2025-05-16T00:48:18.476644480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:48:18.476695 env[1315]: time="2025-05-16T00:48:18.476673280Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 16 00:48:18.476695 env[1315]: time="2025-05-16T00:48:18.476689720Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:48:18.476772 env[1315]: time="2025-05-16T00:48:18.476702840Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:48:18.476772 env[1315]: time="2025-05-16T00:48:18.476716600Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:48:18.476931 env[1315]: time="2025-05-16T00:48:18.476852880Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:48:18.476961 env[1315]: time="2025-05-16T00:48:18.476936640Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:48:18.477711 env[1315]: time="2025-05-16T00:48:18.477471840Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:48:18.477711 env[1315]: time="2025-05-16T00:48:18.477611440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 May 16 00:48:18.477711 env[1315]: time="2025-05-16T00:48:18.477637080Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:48:18.477953 env[1315]: time="2025-05-16T00:48:18.477826520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:48:18.477953 env[1315]: time="2025-05-16T00:48:18.477844920Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:48:18.477953 env[1315]: time="2025-05-16T00:48:18.477857600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:48:18.477953 env[1315]: time="2025-05-16T00:48:18.477869560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:48:18.477953 env[1315]: time="2025-05-16T00:48:18.477882840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:48:18.477953 env[1315]: time="2025-05-16T00:48:18.477897680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:48:18.477953 env[1315]: time="2025-05-16T00:48:18.477908960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:48:18.477953 env[1315]: time="2025-05-16T00:48:18.477921480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:48:18.477953 env[1315]: time="2025-05-16T00:48:18.477934640Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:48:18.478166 env[1315]: time="2025-05-16T00:48:18.478070280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:48:18.478166 env[1315]: time="2025-05-16T00:48:18.478094360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:48:18.478166 env[1315]: time="2025-05-16T00:48:18.478142000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:48:18.478166 env[1315]: time="2025-05-16T00:48:18.478156440Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:48:18.478241 env[1315]: time="2025-05-16T00:48:18.478173640Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 16 00:48:18.478241 env[1315]: time="2025-05-16T00:48:18.478186240Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:48:18.478241 env[1315]: time="2025-05-16T00:48:18.478205040Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 16 00:48:18.478301 env[1315]: time="2025-05-16T00:48:18.478247000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 16 00:48:18.478586 env[1315]: time="2025-05-16T00:48:18.478443480Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:48:18.478586 env[1315]: time="2025-05-16T00:48:18.478513680Z" level=info msg="Connect containerd service" May 16 00:48:18.478586 env[1315]: time="2025-05-16T00:48:18.478546120Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:48:18.479652 env[1315]: time="2025-05-16T00:48:18.479615560Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:48:18.480001 env[1315]: time="2025-05-16T00:48:18.479961520Z" level=info msg="Start subscribing containerd event" May 16 00:48:18.480039 env[1315]: time="2025-05-16T00:48:18.480022560Z" level=info msg="Start recovering state" May 16 00:48:18.480105 env[1315]: time="2025-05-16T00:48:18.480091040Z" level=info msg="Start event monitor" May 16 00:48:18.480105 env[1315]: time="2025-05-16T00:48:18.480132520Z" level=info msg="Start snapshots syncer" May 16 00:48:18.480186 env[1315]: time="2025-05-16T00:48:18.480146360Z" level=info msg="Start cni network conf syncer for default" May 16 00:48:18.480186 env[1315]: time="2025-05-16T00:48:18.480153880Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc May 16 00:48:18.480237 env[1315]: time="2025-05-16T00:48:18.480203000Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:48:18.480237 env[1315]: time="2025-05-16T00:48:18.480154440Z" level=info msg="Start streaming server" May 16 00:48:18.480279 env[1315]: time="2025-05-16T00:48:18.480270840Z" level=info msg="containerd successfully booted in 0.032026s" May 16 00:48:18.480396 systemd[1]: Started containerd.service. May 16 00:48:18.489282 locksmithd[1347]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:48:18.666300 systemd-networkd[1097]: eth0: Gained IPv6LL May 16 00:48:18.668068 systemd[1]: Finished systemd-networkd-wait-online.service. May 16 00:48:18.669104 systemd[1]: Reached target network-online.target. May 16 00:48:18.671580 systemd[1]: Starting kubelet.service... May 16 00:48:19.248675 systemd[1]: Started kubelet.service. May 16 00:48:19.392122 sshd_keygen[1319]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:48:19.414027 systemd[1]: Finished sshd-keygen.service. May 16 00:48:19.416618 systemd[1]: Starting issuegen.service... May 16 00:48:19.421724 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:48:19.421988 systemd[1]: Finished issuegen.service. May 16 00:48:19.424603 systemd[1]: Starting systemd-user-sessions.service... May 16 00:48:19.431670 systemd[1]: Finished systemd-user-sessions.service. May 16 00:48:19.434159 systemd[1]: Started getty@tty1.service. May 16 00:48:19.436541 systemd[1]: Started serial-getty@ttyAMA0.service. May 16 00:48:19.437588 systemd[1]: Reached target getty.target. May 16 00:48:19.438523 systemd[1]: Reached target multi-user.target. May 16 00:48:19.441071 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 16 00:48:19.449999 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 16 00:48:19.450285 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 16 00:48:19.451227 systemd[1]: Startup finished in 4.868s (kernel) + 4.587s (userspace) = 9.456s. May 16 00:48:19.711253 kubelet[1370]: E0516 00:48:19.711208 1370 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:48:19.713309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:48:19.713453 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:48:22.849671 systemd[1]: Created slice system-sshd.slice. May 16 00:48:22.851027 systemd[1]: Started sshd@0-10.0.0.110:22-10.0.0.1:48516.service. May 16 00:48:22.901916 sshd[1396]: Accepted publickey for core from 10.0.0.1 port 48516 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:48:22.903989 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:48:22.911888 systemd[1]: Created slice user-500.slice. May 16 00:48:22.912978 systemd[1]: Starting user-runtime-dir@500.service... May 16 00:48:22.915942 systemd-logind[1300]: New session 1 of user core. May 16 00:48:22.922816 systemd[1]: Finished user-runtime-dir@500.service. May 16 00:48:22.924232 systemd[1]: Starting user@500.service... 
May 16 00:48:22.927283 (systemd)[1401]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:48:22.987641 systemd[1401]: Queued start job for default target default.target. May 16 00:48:22.987906 systemd[1401]: Reached target paths.target. May 16 00:48:22.987921 systemd[1401]: Reached target sockets.target. May 16 00:48:22.987932 systemd[1401]: Reached target timers.target. May 16 00:48:22.987941 systemd[1401]: Reached target basic.target. May 16 00:48:22.987986 systemd[1401]: Reached target default.target. May 16 00:48:22.988009 systemd[1401]: Startup finished in 54ms. May 16 00:48:22.988253 systemd[1]: Started user@500.service. May 16 00:48:22.989284 systemd[1]: Started session-1.scope. May 16 00:48:23.038752 systemd[1]: Started sshd@1-10.0.0.110:22-10.0.0.1:48528.service. May 16 00:48:23.079664 sshd[1410]: Accepted publickey for core from 10.0.0.1 port 48528 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:48:23.081416 sshd[1410]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:48:23.085929 systemd[1]: Started session-2.scope. May 16 00:48:23.086964 systemd-logind[1300]: New session 2 of user core. May 16 00:48:23.140778 sshd[1410]: pam_unix(sshd:session): session closed for user core May 16 00:48:23.143221 systemd[1]: Started sshd@2-10.0.0.110:22-10.0.0.1:48544.service. May 16 00:48:23.143750 systemd[1]: sshd@1-10.0.0.110:22-10.0.0.1:48528.service: Deactivated successfully. May 16 00:48:23.144689 systemd-logind[1300]: Session 2 logged out. Waiting for processes to exit. May 16 00:48:23.144709 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:48:23.145802 systemd-logind[1300]: Removed session 2. May 16 00:48:23.183883 sshd[1415]: Accepted publickey for core from 10.0.0.1 port 48544 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:48:23.185163 sshd[1415]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:48:23.188842 systemd-logind[1300]: New session 3 of user core. May 16 00:48:23.189333 systemd[1]: Started session-3.scope. May 16 00:48:23.238223 sshd[1415]: pam_unix(sshd:session): session closed for user core May 16 00:48:23.240573 systemd[1]: Started sshd@3-10.0.0.110:22-10.0.0.1:48556.service. May 16 00:48:23.241330 systemd-logind[1300]: Session 3 logged out. Waiting for processes to exit. May 16 00:48:23.241753 systemd[1]: sshd@2-10.0.0.110:22-10.0.0.1:48544.service: Deactivated successfully. May 16 00:48:23.242776 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:48:23.243556 systemd-logind[1300]: Removed session 3. May 16 00:48:23.283126 sshd[1422]: Accepted publickey for core from 10.0.0.1 port 48556 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:48:23.284504 sshd[1422]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:48:23.287958 systemd-logind[1300]: New session 4 of user core. May 16 00:48:23.288837 systemd[1]: Started session-4.scope. May 16 00:48:23.341879 sshd[1422]: pam_unix(sshd:session): session closed for user core May 16 00:48:23.344397 systemd[1]: Started sshd@4-10.0.0.110:22-10.0.0.1:48560.service. May 16 00:48:23.345003 systemd[1]: sshd@3-10.0.0.110:22-10.0.0.1:48556.service: Deactivated successfully. May 16 00:48:23.345952 systemd-logind[1300]: Session 4 logged out. Waiting for processes to exit. May 16 00:48:23.346017 systemd[1]: session-4.scope: Deactivated successfully. 
May 16 00:48:23.346964 systemd-logind[1300]: Removed session 4. May 16 00:48:23.385166 sshd[1430]: Accepted publickey for core from 10.0.0.1 port 48560 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:48:23.386514 sshd[1430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:48:23.389776 systemd-logind[1300]: New session 5 of user core. May 16 00:48:23.390656 systemd[1]: Started session-5.scope. May 16 00:48:23.449239 sudo[1436]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:48:23.450176 sudo[1436]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:48:23.462310 systemd[1]: Starting coreos-metadata.service... May 16 00:48:23.469686 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 00:48:23.469932 systemd[1]: Finished coreos-metadata.service. May 16 00:48:23.926833 systemd[1]: Stopped kubelet.service. May 16 00:48:23.928992 systemd[1]: Starting kubelet.service... May 16 00:48:23.955351 systemd[1]: Reloading. May 16 00:48:23.999899 /usr/lib/systemd/system-generators/torcx-generator[1498]: time="2025-05-16T00:48:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:48:23.999929 /usr/lib/systemd/system-generators/torcx-generator[1498]: time="2025-05-16T00:48:23Z" level=info msg="torcx already run" May 16 00:48:24.105548 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:48:24.105569 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:48:24.122611 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:48:24.177056 systemd[1]: Started kubelet.service. May 16 00:48:24.181414 systemd[1]: Stopping kubelet.service... May 16 00:48:24.181992 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:48:24.182326 systemd[1]: Stopped kubelet.service. May 16 00:48:24.184199 systemd[1]: Starting kubelet.service... May 16 00:48:24.276387 systemd[1]: Started kubelet.service. May 16 00:48:24.310954 kubelet[1561]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:48:24.310954 kubelet[1561]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 00:48:24.310954 kubelet[1561]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
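During the reload above, systemd flags two deprecated directives in locksmithd.service (CPUShares= and MemoryLimit=) and names their cgroup v2 replacements. Below is a hedged helper along those lines, purely illustrative and not part of Flatcar's tooling, that scans unit files for the same two directives and prints the substitutions suggested in the warnings.

```python
# Illustrative scan for the deprecated unit directives that systemd warns
# about above; the replacement names come from those same warnings.
from pathlib import Path

REPLACEMENTS = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

def scan_unit(path: Path) -> None:
    for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        for old, new in REPLACEMENTS.items():
            if line.lstrip().startswith(old):
                print(f"{path}:{lineno}: uses {old}; please use {new} instead")

if __name__ == "__main__":
    for unit in Path("/usr/lib/systemd/system").glob("*.service"):
        scan_unit(unit)
```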
May 16 00:48:24.311349 kubelet[1561]: I0516 00:48:24.311007 1561 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:48:25.112665 kubelet[1561]: I0516 00:48:25.112615 1561 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 00:48:25.112665 kubelet[1561]: I0516 00:48:25.112653 1561 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:48:25.112996 kubelet[1561]: I0516 00:48:25.112969 1561 server.go:934] "Client rotation is on, will bootstrap in background" May 16 00:48:25.190677 kubelet[1561]: I0516 00:48:25.190629 1561 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:48:25.216864 kubelet[1561]: E0516 00:48:25.216810 1561 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:48:25.216864 kubelet[1561]: I0516 00:48:25.216860 1561 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:48:25.221249 kubelet[1561]: I0516 00:48:25.221220 1561 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:48:25.222755 kubelet[1561]: I0516 00:48:25.222715 1561 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 00:48:25.222935 kubelet[1561]: I0516 00:48:25.222896 1561 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:48:25.223102 kubelet[1561]: I0516 00:48:25.222928 1561 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.110","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} May 16 00:48:25.223255 kubelet[1561]: I0516 00:48:25.223233 1561 topology_manager.go:138] "Creating 
topology manager with none policy" May 16 00:48:25.223255 kubelet[1561]: I0516 00:48:25.223248 1561 container_manager_linux.go:300] "Creating device plugin manager" May 16 00:48:25.223644 kubelet[1561]: I0516 00:48:25.223624 1561 state_mem.go:36] "Initialized new in-memory state store" May 16 00:48:25.234474 kubelet[1561]: I0516 00:48:25.234437 1561 kubelet.go:408] "Attempting to sync node with API server" May 16 00:48:25.234474 kubelet[1561]: I0516 00:48:25.234477 1561 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:48:25.234625 kubelet[1561]: I0516 00:48:25.234505 1561 kubelet.go:314] "Adding apiserver pod source" May 16 00:48:25.234625 kubelet[1561]: I0516 00:48:25.234590 1561 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:48:25.234798 kubelet[1561]: E0516 00:48:25.234690 1561 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:25.234798 kubelet[1561]: E0516 00:48:25.234705 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:25.238936 kubelet[1561]: I0516 00:48:25.238896 1561 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:48:25.240218 kubelet[1561]: I0516 00:48:25.240196 1561 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:48:25.240440 kubelet[1561]: W0516 00:48:25.240425 1561 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:48:25.241910 kubelet[1561]: I0516 00:48:25.241862 1561 server.go:1274] "Started kubelet" May 16 00:48:25.262180 kubelet[1561]: I0516 00:48:25.262134 1561 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:48:25.263611 kubelet[1561]: I0516 00:48:25.263578 1561 server.go:449] "Adding debug handlers to kubelet server" May 16 00:48:25.264686 kubelet[1561]: I0516 00:48:25.264633 1561 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:48:25.264887 kubelet[1561]: I0516 00:48:25.264870 1561 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:48:25.266313 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 16 00:48:25.266480 kubelet[1561]: I0516 00:48:25.266457 1561 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:48:25.274820 kubelet[1561]: E0516 00:48:25.274795 1561 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:48:25.275173 kubelet[1561]: I0516 00:48:25.275088 1561 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:48:25.276248 kubelet[1561]: I0516 00:48:25.276221 1561 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:48:25.277782 kubelet[1561]: I0516 00:48:25.277752 1561 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:48:25.277995 kubelet[1561]: I0516 00:48:25.277058 1561 factory.go:221] Registration of the systemd container factory successfully May 16 00:48:25.278377 kubelet[1561]: I0516 00:48:25.278263 1561 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:48:25.279453 kubelet[1561]: I0516 00:48:25.279428 1561 reconciler.go:26] "Reconciler: start to sync state" May 16 00:48:25.281609 kubelet[1561]: E0516 00:48:25.281525 1561 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.110\" not found" May 16 00:48:25.284896 kubelet[1561]: I0516 00:48:25.284777 1561 factory.go:221] Registration of the containerd container factory successfully May 16 00:48:25.285672 kubelet[1561]: E0516 00:48:25.285597 1561 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.110\" not found" node="10.0.0.110" May 16 00:48:25.306602 kubelet[1561]: I0516 00:48:25.306384 1561 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 00:48:25.306602 kubelet[1561]: I0516 00:48:25.306404 1561 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:48:25.306602 kubelet[1561]: I0516 00:48:25.306425 1561 state_mem.go:36] "Initialized new in-memory state store" May 16 00:48:25.381408 kubelet[1561]: I0516 00:48:25.381297 1561 policy_none.go:49] "None policy: Start" May 16 00:48:25.381752 kubelet[1561]: E0516 00:48:25.381628 1561 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.110\" not found" May 16 00:48:25.382997 kubelet[1561]: I0516 00:48:25.382979 1561 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:48:25.383153 kubelet[1561]: I0516 00:48:25.383141 1561 state_mem.go:35] "Initializing new in-memory state store" May 16 00:48:25.387794 kubelet[1561]: I0516 00:48:25.387764 1561 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:48:25.388068 kubelet[1561]: I0516 00:48:25.388050 1561 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:48:25.388194 kubelet[1561]: I0516 00:48:25.388152 1561 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:48:25.388468 kubelet[1561]: I0516 00:48:25.388451 1561 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:48:25.389733 kubelet[1561]: E0516 00:48:25.389706 1561 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.110\" not found" May 16 00:48:25.423891 kubelet[1561]: I0516 00:48:25.423847 1561 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" May 16 00:48:25.425432 kubelet[1561]: I0516 00:48:25.425407 1561 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:48:25.425571 kubelet[1561]: I0516 00:48:25.425560 1561 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:48:25.425775 kubelet[1561]: I0516 00:48:25.425762 1561 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:48:25.425928 kubelet[1561]: E0516 00:48:25.425914 1561 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 16 00:48:25.489401 kubelet[1561]: I0516 00:48:25.489368 1561 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.110" May 16 00:48:25.495414 kubelet[1561]: I0516 00:48:25.495377 1561 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.110" May 16 00:48:25.495414 kubelet[1561]: E0516 00:48:25.495412 1561 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.110\": node \"10.0.0.110\" not found" May 16 00:48:25.512892 kubelet[1561]: I0516 00:48:25.512859 1561 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 16 00:48:25.513277 env[1315]: time="2025-05-16T00:48:25.513218510Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:48:25.513569 kubelet[1561]: I0516 00:48:25.513431 1561 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 16 00:48:25.797953 sudo[1436]: pam_unix(sudo:session): session closed for user root May 16 00:48:25.799903 sshd[1430]: pam_unix(sshd:session): session closed for user core May 16 00:48:25.803506 systemd[1]: sshd@4-10.0.0.110:22-10.0.0.1:48560.service: Deactivated successfully. May 16 00:48:25.804756 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:48:25.804777 systemd-logind[1300]: Session 5 logged out. Waiting for processes to exit. May 16 00:48:25.805834 systemd-logind[1300]: Removed session 5. 
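The container-manager dump above lists kubelet's hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A sketch, assuming simple available/capacity inputs, of how such signal thresholds can be evaluated; this is an illustration, not kubelet's eviction manager.

```python
# Evaluate the hard eviction thresholds printed in the kubelet NodeConfig
# above. Purely illustrative; kubelet's eviction manager is the real thing.
MIB = 1024 * 1024

HARD_EVICTION = {
    "memory.available":  ("quantity", 100 * MIB),
    "nodefs.available":  ("percentage", 0.10),
    "nodefs.inodesFree": ("percentage", 0.05),
    "imagefs.available": ("percentage", 0.15),
    "imagefs.inodesFree": ("percentage", 0.05),
}

def breached(signal: str, available: float, capacity: float) -> bool:
    kind, threshold = HARD_EVICTION[signal]
    limit = threshold if kind == "quantity" else threshold * capacity
    return available < limit

# Hypothetical 8 GiB node with 80 MiB of memory left -> eviction territory.
print(breached("memory.available", 80 * MIB, 8 * 1024 * MIB))  # True
print(breached("nodefs.available", 12 * 2**30, 100 * 2**30))   # False (12% free)
```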
May 16 00:48:26.114939 kubelet[1561]: I0516 00:48:26.114842 1561 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 16 00:48:26.115375 kubelet[1561]: W0516 00:48:26.115280 1561 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 00:48:26.115556 kubelet[1561]: W0516 00:48:26.115537 1561 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 00:48:26.115654 kubelet[1561]: W0516 00:48:26.115643 1561 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 00:48:26.235394 kubelet[1561]: E0516 00:48:26.235350 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:26.235530 kubelet[1561]: I0516 00:48:26.235390 1561 apiserver.go:52] "Watching apiserver" May 16 00:48:26.279055 kubelet[1561]: I0516 00:48:26.279016 1561 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:48:26.285817 kubelet[1561]: I0516 00:48:26.285779 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d153b887-8bf8-4659-a6ae-0c3752c3e20e-xtables-lock\") pod \"kube-proxy-8wjls\" (UID: \"d153b887-8bf8-4659-a6ae-0c3752c3e20e\") " pod="kube-system/kube-proxy-8wjls" May 16 00:48:26.285817 kubelet[1561]: I0516 00:48:26.285819 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-lib-modules\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.285977 kubelet[1561]: I0516 00:48:26.285841 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-config-path\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.285977 kubelet[1561]: I0516 00:48:26.285859 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmt6h\" (UniqueName: \"kubernetes.io/projected/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-kube-api-access-dmt6h\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.285977 kubelet[1561]: I0516 00:48:26.285877 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d153b887-8bf8-4659-a6ae-0c3752c3e20e-kube-proxy\") pod \"kube-proxy-8wjls\" (UID: \"d153b887-8bf8-4659-a6ae-0c3752c3e20e\") " pod="kube-system/kube-proxy-8wjls" May 16 00:48:26.285977 kubelet[1561]: I0516 00:48:26.285893 1561 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-cgroup\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.285977 kubelet[1561]: I0516 00:48:26.285907 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cni-path\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.285977 kubelet[1561]: I0516 00:48:26.285922 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-clustermesh-secrets\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.286104 kubelet[1561]: I0516 00:48:26.285939 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-host-proc-sys-net\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.286104 kubelet[1561]: I0516 00:48:26.285955 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-host-proc-sys-kernel\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.286104 kubelet[1561]: I0516 00:48:26.285969 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-run\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.286104 kubelet[1561]: I0516 00:48:26.285984 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-bpf-maps\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.286104 kubelet[1561]: I0516 00:48:26.285998 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-hostproc\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.286104 kubelet[1561]: I0516 00:48:26.286012 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-hubble-tls\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.286250 kubelet[1561]: I0516 00:48:26.286026 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d153b887-8bf8-4659-a6ae-0c3752c3e20e-lib-modules\") pod \"kube-proxy-8wjls\" (UID: 
\"d153b887-8bf8-4659-a6ae-0c3752c3e20e\") " pod="kube-system/kube-proxy-8wjls" May 16 00:48:26.286250 kubelet[1561]: I0516 00:48:26.286040 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2msqv\" (UniqueName: \"kubernetes.io/projected/d153b887-8bf8-4659-a6ae-0c3752c3e20e-kube-api-access-2msqv\") pod \"kube-proxy-8wjls\" (UID: \"d153b887-8bf8-4659-a6ae-0c3752c3e20e\") " pod="kube-system/kube-proxy-8wjls" May 16 00:48:26.286250 kubelet[1561]: I0516 00:48:26.286055 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-etc-cni-netd\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.286250 kubelet[1561]: I0516 00:48:26.286076 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-xtables-lock\") pod \"cilium-gtpzk\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " pod="kube-system/cilium-gtpzk" May 16 00:48:26.386901 kubelet[1561]: I0516 00:48:26.386758 1561 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 16 00:48:26.550589 kubelet[1561]: E0516 00:48:26.550543 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:26.551391 env[1315]: time="2025-05-16T00:48:26.551332773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gtpzk,Uid:4e0e21be-1d6f-4e21-a43f-a8c39b92009e,Namespace:kube-system,Attempt:0,}" May 16 00:48:26.552864 kubelet[1561]: E0516 00:48:26.552824 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:26.553247 env[1315]: time="2025-05-16T00:48:26.553205362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wjls,Uid:d153b887-8bf8-4659-a6ae-0c3752c3e20e,Namespace:kube-system,Attempt:0,}" May 16 00:48:27.184278 env[1315]: time="2025-05-16T00:48:27.184224002Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:27.185087 env[1315]: time="2025-05-16T00:48:27.185050538Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:27.186729 env[1315]: time="2025-05-16T00:48:27.186698213Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:27.188178 env[1315]: time="2025-05-16T00:48:27.188143312Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:27.190568 env[1315]: time="2025-05-16T00:48:27.190539350Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:27.193009 env[1315]: time="2025-05-16T00:48:27.192973165Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:27.196678 env[1315]: time="2025-05-16T00:48:27.196639106Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:27.197525 env[1315]: time="2025-05-16T00:48:27.197501832Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:27.223102 env[1315]: time="2025-05-16T00:48:27.223034134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:48:27.223102 env[1315]: time="2025-05-16T00:48:27.223087030Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:48:27.223246 env[1315]: time="2025-05-16T00:48:27.223098260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:48:27.223404 env[1315]: time="2025-05-16T00:48:27.223373216Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fc03990516c793dd9666f6aac897d503096f61cc37be604d8604fbfe517f930b pid=1625 runtime=io.containerd.runc.v2 May 16 00:48:27.224343 env[1315]: time="2025-05-16T00:48:27.224281656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:48:27.224343 env[1315]: time="2025-05-16T00:48:27.224323798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:48:27.224343 env[1315]: time="2025-05-16T00:48:27.224335187Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:48:27.224547 env[1315]: time="2025-05-16T00:48:27.224502526Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae pid=1629 runtime=io.containerd.runc.v2 May 16 00:48:27.236159 kubelet[1561]: E0516 00:48:27.236094 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:27.296015 env[1315]: time="2025-05-16T00:48:27.295967241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gtpzk,Uid:4e0e21be-1d6f-4e21-a43f-a8c39b92009e,Namespace:kube-system,Attempt:0,} returns sandbox id \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\"" May 16 00:48:27.296940 kubelet[1561]: E0516 00:48:27.296911 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:27.298258 env[1315]: time="2025-05-16T00:48:27.298224233Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:48:27.299032 env[1315]: time="2025-05-16T00:48:27.298995849Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8wjls,Uid:d153b887-8bf8-4659-a6ae-0c3752c3e20e,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc03990516c793dd9666f6aac897d503096f61cc37be604d8604fbfe517f930b\"" May 16 00:48:27.299707 kubelet[1561]: E0516 00:48:27.299685 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:27.394199 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3196576113.mount: Deactivated successfully. May 16 00:48:28.237320 kubelet[1561]: E0516 00:48:28.237141 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:29.238560 kubelet[1561]: E0516 00:48:29.238500 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:30.238792 kubelet[1561]: E0516 00:48:30.238705 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:30.743228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount325572481.mount: Deactivated successfully. 
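The recurring kubelet warning from dns.go:153 above reflects the node's resolv.conf exceeding the classic three-nameserver limit: only 1.1.1.1, 1.0.0.1 and 8.8.8.8 are applied and any further entries are dropped. A minimal sketch of that kind of check, assuming a standard /etc/resolv.conf; this is an illustration only, not the kubelet's actual dns.go code:

```go
// nameservercheck.go: count "nameserver" entries in resolv.conf and keep only
// the first three, mirroring the limit behind the kubelet's dns.go:153
// "Nameserver limits exceeded" warning. Illustration only.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // classic resolv.conf limit honoured by the kubelet

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("Nameserver limits exceeded: keeping %v, omitting %v\n",
			servers[:maxNameservers], servers[maxNameservers:])
	} else {
		fmt.Printf("nameservers within limit: %v\n", servers)
	}
}
```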
May 16 00:48:31.239460 kubelet[1561]: E0516 00:48:31.239422 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:32.239655 kubelet[1561]: E0516 00:48:32.239585 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:32.934554 env[1315]: time="2025-05-16T00:48:32.934505535Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:32.935631 env[1315]: time="2025-05-16T00:48:32.935606706Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:32.937129 env[1315]: time="2025-05-16T00:48:32.937086009Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:32.937828 env[1315]: time="2025-05-16T00:48:32.937794907Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 16 00:48:32.939068 env[1315]: time="2025-05-16T00:48:32.939039769Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 16 00:48:32.940564 env[1315]: time="2025-05-16T00:48:32.940532936Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:48:32.951267 env[1315]: time="2025-05-16T00:48:32.951225594Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\"" May 16 00:48:32.951854 env[1315]: time="2025-05-16T00:48:32.951822630Z" level=info msg="StartContainer for \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\"" May 16 00:48:33.010358 env[1315]: time="2025-05-16T00:48:33.010310557Z" level=info msg="StartContainer for \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\" returns successfully" May 16 00:48:33.206042 env[1315]: time="2025-05-16T00:48:33.205710017Z" level=info msg="shim disconnected" id=d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96 May 16 00:48:33.206255 env[1315]: time="2025-05-16T00:48:33.206227404Z" level=warning msg="cleaning up after shim disconnected" id=d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96 namespace=k8s.io May 16 00:48:33.206316 env[1315]: time="2025-05-16T00:48:33.206303013Z" level=info msg="cleaning up dead shim" May 16 00:48:33.213412 env[1315]: time="2025-05-16T00:48:33.213377002Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:48:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1742 runtime=io.containerd.runc.v2\n" May 16 00:48:33.240496 kubelet[1561]: E0516 00:48:33.240457 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 16 00:48:33.443024 kubelet[1561]: E0516 00:48:33.442997 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:33.444599 env[1315]: time="2025-05-16T00:48:33.444561800Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:48:33.456715 env[1315]: time="2025-05-16T00:48:33.456622970Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\"" May 16 00:48:33.457658 env[1315]: time="2025-05-16T00:48:33.457618725Z" level=info msg="StartContainer for \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\"" May 16 00:48:33.507405 env[1315]: time="2025-05-16T00:48:33.507363595Z" level=info msg="StartContainer for \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\" returns successfully" May 16 00:48:33.521828 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:48:33.522072 systemd[1]: Stopped systemd-sysctl.service. May 16 00:48:33.522240 systemd[1]: Stopping systemd-sysctl.service... May 16 00:48:33.523651 systemd[1]: Starting systemd-sysctl.service... May 16 00:48:33.532306 systemd[1]: Finished systemd-sysctl.service. May 16 00:48:33.544334 env[1315]: time="2025-05-16T00:48:33.544292039Z" level=info msg="shim disconnected" id=bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc May 16 00:48:33.544465 env[1315]: time="2025-05-16T00:48:33.544336400Z" level=warning msg="cleaning up after shim disconnected" id=bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc namespace=k8s.io May 16 00:48:33.544465 env[1315]: time="2025-05-16T00:48:33.544347161Z" level=info msg="cleaning up dead shim" May 16 00:48:33.550473 env[1315]: time="2025-05-16T00:48:33.550427987Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:48:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1808 runtime=io.containerd.runc.v2\n" May 16 00:48:33.947852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96-rootfs.mount: Deactivated successfully. May 16 00:48:34.167426 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1267634134.mount: Deactivated successfully. 
May 16 00:48:34.241541 kubelet[1561]: E0516 00:48:34.241429 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:34.446500 kubelet[1561]: E0516 00:48:34.446475 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:34.448111 env[1315]: time="2025-05-16T00:48:34.448060886Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:48:34.461772 env[1315]: time="2025-05-16T00:48:34.461720306Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\"" May 16 00:48:34.462390 env[1315]: time="2025-05-16T00:48:34.462359624Z" level=info msg="StartContainer for \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\"" May 16 00:48:34.526600 env[1315]: time="2025-05-16T00:48:34.526552661Z" level=info msg="StartContainer for \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\" returns successfully" May 16 00:48:34.689952 env[1315]: time="2025-05-16T00:48:34.689904027Z" level=info msg="shim disconnected" id=c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6 May 16 00:48:34.690204 env[1315]: time="2025-05-16T00:48:34.690181917Z" level=warning msg="cleaning up after shim disconnected" id=c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6 namespace=k8s.io May 16 00:48:34.690280 env[1315]: time="2025-05-16T00:48:34.690266253Z" level=info msg="cleaning up dead shim" May 16 00:48:34.697437 env[1315]: time="2025-05-16T00:48:34.697400190Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:48:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1865 runtime=io.containerd.runc.v2\n" May 16 00:48:34.699696 env[1315]: time="2025-05-16T00:48:34.699663263Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:34.701176 env[1315]: time="2025-05-16T00:48:34.701142031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:34.702483 env[1315]: time="2025-05-16T00:48:34.702449457Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:34.704016 env[1315]: time="2025-05-16T00:48:34.703988437Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:34.704485 env[1315]: time="2025-05-16T00:48:34.704455813Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 16 00:48:34.706707 env[1315]: time="2025-05-16T00:48:34.706673787Z" level=info msg="CreateContainer within sandbox 
\"fc03990516c793dd9666f6aac897d503096f61cc37be604d8604fbfe517f930b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:48:34.720024 env[1315]: time="2025-05-16T00:48:34.719967712Z" level=info msg="CreateContainer within sandbox \"fc03990516c793dd9666f6aac897d503096f61cc37be604d8604fbfe517f930b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"7d4d312f322eb75140fb6069ce0c744a3e514679ac3f893e164519aa32c6d34b\"" May 16 00:48:34.720474 env[1315]: time="2025-05-16T00:48:34.720450959Z" level=info msg="StartContainer for \"7d4d312f322eb75140fb6069ce0c744a3e514679ac3f893e164519aa32c6d34b\"" May 16 00:48:34.793663 env[1315]: time="2025-05-16T00:48:34.793568565Z" level=info msg="StartContainer for \"7d4d312f322eb75140fb6069ce0c744a3e514679ac3f893e164519aa32c6d34b\" returns successfully" May 16 00:48:34.947846 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6-rootfs.mount: Deactivated successfully. May 16 00:48:35.241951 kubelet[1561]: E0516 00:48:35.241857 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:35.451437 kubelet[1561]: E0516 00:48:35.451407 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:35.453838 kubelet[1561]: E0516 00:48:35.453788 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:35.456961 env[1315]: time="2025-05-16T00:48:35.456876250Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:48:35.476992 env[1315]: time="2025-05-16T00:48:35.476915477Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\"" May 16 00:48:35.477755 env[1315]: time="2025-05-16T00:48:35.477694822Z" level=info msg="StartContainer for \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\"" May 16 00:48:35.487471 kubelet[1561]: I0516 00:48:35.487389 1561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8wjls" podStartSLOduration=3.082269784 podStartE2EDuration="10.487371993s" podCreationTimestamp="2025-05-16 00:48:25 +0000 UTC" firstStartedPulling="2025-05-16 00:48:27.300086628 +0000 UTC m=+3.019433581" lastFinishedPulling="2025-05-16 00:48:34.705188837 +0000 UTC m=+10.424535790" observedRunningTime="2025-05-16 00:48:35.463086997 +0000 UTC m=+11.182433910" watchObservedRunningTime="2025-05-16 00:48:35.487371993 +0000 UTC m=+11.206718946" May 16 00:48:35.532348 env[1315]: time="2025-05-16T00:48:35.532299762Z" level=info msg="StartContainer for \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\" returns successfully" May 16 00:48:35.562233 env[1315]: time="2025-05-16T00:48:35.562093508Z" level=info msg="shim disconnected" id=14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508 May 16 00:48:35.562233 env[1315]: time="2025-05-16T00:48:35.562234602Z" level=warning msg="cleaning up after shim 
disconnected" id=14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508 namespace=k8s.io May 16 00:48:35.562441 env[1315]: time="2025-05-16T00:48:35.562245292Z" level=info msg="cleaning up dead shim" May 16 00:48:35.568755 env[1315]: time="2025-05-16T00:48:35.568708188Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:48:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2090 runtime=io.containerd.runc.v2\n" May 16 00:48:35.947358 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508-rootfs.mount: Deactivated successfully. May 16 00:48:36.242822 kubelet[1561]: E0516 00:48:36.242596 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:36.457970 kubelet[1561]: E0516 00:48:36.457646 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:36.458151 kubelet[1561]: E0516 00:48:36.458016 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:36.459850 env[1315]: time="2025-05-16T00:48:36.459811682Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:48:36.482589 env[1315]: time="2025-05-16T00:48:36.482533519Z" level=info msg="CreateContainer within sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\"" May 16 00:48:36.483034 env[1315]: time="2025-05-16T00:48:36.483003752Z" level=info msg="StartContainer for \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\"" May 16 00:48:36.536179 env[1315]: time="2025-05-16T00:48:36.536138118Z" level=info msg="StartContainer for \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\" returns successfully" May 16 00:48:36.617439 kubelet[1561]: I0516 00:48:36.617408 1561 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 00:48:36.885137 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 16 00:48:37.135139 kernel: Initializing XFRM netlink socket May 16 00:48:37.138184 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 16 00:48:37.243192 kubelet[1561]: E0516 00:48:37.243155 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:37.461837 kubelet[1561]: E0516 00:48:37.461738 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:38.244681 kubelet[1561]: E0516 00:48:38.244632 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:38.463310 kubelet[1561]: E0516 00:48:38.463274 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:38.741161 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 16 00:48:38.741265 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 16 00:48:38.740400 systemd-networkd[1097]: cilium_host: Link UP May 16 00:48:38.740526 systemd-networkd[1097]: cilium_net: Link UP May 16 00:48:38.741303 systemd-networkd[1097]: cilium_net: Gained carrier May 16 00:48:38.742181 systemd-networkd[1097]: cilium_host: Gained carrier May 16 00:48:38.818525 systemd-networkd[1097]: cilium_vxlan: Link UP May 16 00:48:38.818532 systemd-networkd[1097]: cilium_vxlan: Gained carrier May 16 00:48:39.131613 kernel: NET: Registered PF_ALG protocol family May 16 00:48:39.245695 kubelet[1561]: E0516 00:48:39.245646 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:39.402213 systemd-networkd[1097]: cilium_net: Gained IPv6LL May 16 00:48:39.465013 kubelet[1561]: E0516 00:48:39.464969 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:39.530213 systemd-networkd[1097]: cilium_host: Gained IPv6LL May 16 00:48:39.688181 systemd-networkd[1097]: lxc_health: Link UP May 16 00:48:39.695426 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:48:39.694773 systemd-networkd[1097]: lxc_health: Gained carrier May 16 00:48:39.951811 kubelet[1561]: I0516 00:48:39.951689 1561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gtpzk" podStartSLOduration=9.310582823 podStartE2EDuration="14.951671579s" podCreationTimestamp="2025-05-16 00:48:25 +0000 UTC" firstStartedPulling="2025-05-16 00:48:27.297769517 +0000 UTC m=+3.017116430" lastFinishedPulling="2025-05-16 00:48:32.938858233 +0000 UTC m=+8.658205186" observedRunningTime="2025-05-16 00:48:37.476463424 +0000 UTC m=+13.195810377" watchObservedRunningTime="2025-05-16 00:48:39.951671579 +0000 UTC m=+15.671018532" May 16 00:48:40.246813 kubelet[1561]: E0516 00:48:40.246705 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:40.466305 kubelet[1561]: E0516 00:48:40.466272 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:40.554210 systemd-networkd[1097]: cilium_vxlan: Gained IPv6LL May 16 00:48:41.247524 kubelet[1561]: E0516 00:48:41.247482 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" May 16 00:48:41.706234 systemd-networkd[1097]: lxc_health: Gained IPv6LL May 16 00:48:41.867586 kubelet[1561]: I0516 00:48:41.867539 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w488z\" (UniqueName: \"kubernetes.io/projected/67e3b978-2163-4dec-a94b-b04ce66fdd85-kube-api-access-w488z\") pod \"nginx-deployment-8587fbcb89-596l9\" (UID: \"67e3b978-2163-4dec-a94b-b04ce66fdd85\") " pod="default/nginx-deployment-8587fbcb89-596l9" May 16 00:48:42.049128 env[1315]: time="2025-05-16T00:48:42.049070294Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-596l9,Uid:67e3b978-2163-4dec-a94b-b04ce66fdd85,Namespace:default,Attempt:0,}" May 16 00:48:42.093188 systemd-networkd[1097]: lxc66e8a81f0712: Link UP May 16 00:48:42.103135 kernel: eth0: renamed from tmpc396f May 16 00:48:42.117167 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:48:42.117217 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc66e8a81f0712: link becomes ready May 16 00:48:42.117243 systemd-networkd[1097]: lxc66e8a81f0712: Gained carrier May 16 00:48:42.248205 kubelet[1561]: E0516 00:48:42.248160 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:43.249185 kubelet[1561]: E0516 00:48:43.249116 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:43.946508 systemd-networkd[1097]: lxc66e8a81f0712: Gained IPv6LL May 16 00:48:44.141105 env[1315]: time="2025-05-16T00:48:44.141035862Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:48:44.141435 env[1315]: time="2025-05-16T00:48:44.141127307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:48:44.141435 env[1315]: time="2025-05-16T00:48:44.141156723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:48:44.141435 env[1315]: time="2025-05-16T00:48:44.141328741Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c396f819ee26611e8b043f5ff2b04f4fca8f0f4ae5b03468a60c05da2b692e03 pid=2628 runtime=io.containerd.runc.v2 May 16 00:48:44.202823 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:48:44.220524 env[1315]: time="2025-05-16T00:48:44.220479718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-596l9,Uid:67e3b978-2163-4dec-a94b-b04ce66fdd85,Namespace:default,Attempt:0,} returns sandbox id \"c396f819ee26611e8b043f5ff2b04f4fca8f0f4ae5b03468a60c05da2b692e03\"" May 16 00:48:44.221884 env[1315]: time="2025-05-16T00:48:44.221818297Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 00:48:44.249737 kubelet[1561]: E0516 00:48:44.249686 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:45.234859 kubelet[1561]: E0516 00:48:45.234804 1561 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:45.250400 kubelet[1561]: E0516 00:48:45.250355 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:46.124459 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1955591812.mount: Deactivated successfully. May 16 00:48:46.250700 kubelet[1561]: E0516 00:48:46.250654 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:47.251227 kubelet[1561]: E0516 00:48:47.251177 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:47.335141 env[1315]: time="2025-05-16T00:48:47.335085496Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:47.336752 env[1315]: time="2025-05-16T00:48:47.336715319Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:47.338492 env[1315]: time="2025-05-16T00:48:47.338462437Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:47.340404 env[1315]: time="2025-05-16T00:48:47.340377982Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:47.341105 env[1315]: time="2025-05-16T00:48:47.341075958Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 16 00:48:47.343623 env[1315]: time="2025-05-16T00:48:47.343592693Z" level=info msg="CreateContainer within sandbox \"c396f819ee26611e8b043f5ff2b04f4fca8f0f4ae5b03468a60c05da2b692e03\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 16 00:48:47.356853 env[1315]: 
time="2025-05-16T00:48:47.356799502Z" level=info msg="CreateContainer within sandbox \"c396f819ee26611e8b043f5ff2b04f4fca8f0f4ae5b03468a60c05da2b692e03\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"fd375e33bc5945daebd6b00855c745d902d940f4b2db702d13b2b18e194cd8c5\"" May 16 00:48:47.357639 env[1315]: time="2025-05-16T00:48:47.357486204Z" level=info msg="StartContainer for \"fd375e33bc5945daebd6b00855c745d902d940f4b2db702d13b2b18e194cd8c5\"" May 16 00:48:47.415214 env[1315]: time="2025-05-16T00:48:47.415159412Z" level=info msg="StartContainer for \"fd375e33bc5945daebd6b00855c745d902d940f4b2db702d13b2b18e194cd8c5\" returns successfully" May 16 00:48:47.486172 kubelet[1561]: I0516 00:48:47.486072 1561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-596l9" podStartSLOduration=3.365226571 podStartE2EDuration="6.486055501s" podCreationTimestamp="2025-05-16 00:48:41 +0000 UTC" firstStartedPulling="2025-05-16 00:48:44.221578095 +0000 UTC m=+19.940925048" lastFinishedPulling="2025-05-16 00:48:47.342407025 +0000 UTC m=+23.061753978" observedRunningTime="2025-05-16 00:48:47.485879598 +0000 UTC m=+23.205226551" watchObservedRunningTime="2025-05-16 00:48:47.486055501 +0000 UTC m=+23.205402454" May 16 00:48:48.252323 kubelet[1561]: E0516 00:48:48.252240 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:49.253087 kubelet[1561]: E0516 00:48:49.253020 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:49.355036 kubelet[1561]: I0516 00:48:49.355005 1561 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:48:49.355934 kubelet[1561]: E0516 00:48:49.355909 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:49.481882 kubelet[1561]: E0516 00:48:49.481856 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:48:50.253964 kubelet[1561]: E0516 00:48:50.253923 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:51.254889 kubelet[1561]: E0516 00:48:51.254853 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:52.256070 kubelet[1561]: E0516 00:48:52.256029 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:53.256551 kubelet[1561]: E0516 00:48:53.256507 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:54.030859 kubelet[1561]: I0516 00:48:54.030813 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jj859\" (UniqueName: \"kubernetes.io/projected/10b3deff-d870-42e7-9f29-d931ebc322c1-kube-api-access-jj859\") pod \"nfs-server-provisioner-0\" (UID: \"10b3deff-d870-42e7-9f29-d931ebc322c1\") " pod="default/nfs-server-provisioner-0" May 16 00:48:54.030859 kubelet[1561]: I0516 00:48:54.030859 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"data\" (UniqueName: \"kubernetes.io/empty-dir/10b3deff-d870-42e7-9f29-d931ebc322c1-data\") pod \"nfs-server-provisioner-0\" (UID: \"10b3deff-d870-42e7-9f29-d931ebc322c1\") " pod="default/nfs-server-provisioner-0" May 16 00:48:54.193458 env[1315]: time="2025-05-16T00:48:54.193404166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:10b3deff-d870-42e7-9f29-d931ebc322c1,Namespace:default,Attempt:0,}" May 16 00:48:54.220144 systemd-networkd[1097]: lxc89502f5ba510: Link UP May 16 00:48:54.231157 kernel: eth0: renamed from tmp5abb6 May 16 00:48:54.239294 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:48:54.239430 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc89502f5ba510: link becomes ready May 16 00:48:54.239711 systemd-networkd[1097]: lxc89502f5ba510: Gained carrier May 16 00:48:54.256870 kubelet[1561]: E0516 00:48:54.256827 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:54.383038 env[1315]: time="2025-05-16T00:48:54.382896562Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:48:54.383038 env[1315]: time="2025-05-16T00:48:54.382936081Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:48:54.383038 env[1315]: time="2025-05-16T00:48:54.382946961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:48:54.383765 env[1315]: time="2025-05-16T00:48:54.383690659Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5abb608922b74f164755d09421300c753d08e590a20ecc369250125576154001 pid=2757 runtime=io.containerd.runc.v2 May 16 00:48:54.417342 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:48:54.434064 env[1315]: time="2025-05-16T00:48:54.434015109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:10b3deff-d870-42e7-9f29-d931ebc322c1,Namespace:default,Attempt:0,} returns sandbox id \"5abb608922b74f164755d09421300c753d08e590a20ecc369250125576154001\"" May 16 00:48:54.435793 env[1315]: time="2025-05-16T00:48:54.435699739Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 16 00:48:55.257884 kubelet[1561]: E0516 00:48:55.257837 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:56.234245 systemd-networkd[1097]: lxc89502f5ba510: Gained IPv6LL May 16 00:48:56.258952 kubelet[1561]: E0516 00:48:56.258908 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:56.819319 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2985169628.mount: Deactivated successfully. 
May 16 00:48:57.259338 kubelet[1561]: E0516 00:48:57.259227 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:58.260089 kubelet[1561]: E0516 00:48:58.260041 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:58.588218 env[1315]: time="2025-05-16T00:48:58.588094820Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:58.592620 env[1315]: time="2025-05-16T00:48:58.592585832Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:58.594635 env[1315]: time="2025-05-16T00:48:58.594606543Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:58.596498 env[1315]: time="2025-05-16T00:48:58.596453659Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:48:58.597298 env[1315]: time="2025-05-16T00:48:58.597263119Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 16 00:48:58.600010 env[1315]: time="2025-05-16T00:48:58.599977974Z" level=info msg="CreateContainer within sandbox \"5abb608922b74f164755d09421300c753d08e590a20ecc369250125576154001\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 16 00:48:58.610876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1632678143.mount: Deactivated successfully. May 16 00:48:58.614995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1180876029.mount: Deactivated successfully. 
May 16 00:48:58.620723 env[1315]: time="2025-05-16T00:48:58.620676716Z" level=info msg="CreateContainer within sandbox \"5abb608922b74f164755d09421300c753d08e590a20ecc369250125576154001\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d17fca231a4c84675f2d5e5397e8e47c7c2deb6bf1ae0f71bc2417b97cb55369\"" May 16 00:48:58.621345 env[1315]: time="2025-05-16T00:48:58.621317340Z" level=info msg="StartContainer for \"d17fca231a4c84675f2d5e5397e8e47c7c2deb6bf1ae0f71bc2417b97cb55369\"" May 16 00:48:58.670142 env[1315]: time="2025-05-16T00:48:58.668592922Z" level=info msg="StartContainer for \"d17fca231a4c84675f2d5e5397e8e47c7c2deb6bf1ae0f71bc2417b97cb55369\" returns successfully" May 16 00:48:59.260802 kubelet[1561]: E0516 00:48:59.260732 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:48:59.509453 kubelet[1561]: I0516 00:48:59.509390 1561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.346190747 podStartE2EDuration="6.509375046s" podCreationTimestamp="2025-05-16 00:48:53 +0000 UTC" firstStartedPulling="2025-05-16 00:48:54.435213473 +0000 UTC m=+30.154560426" lastFinishedPulling="2025-05-16 00:48:58.598397772 +0000 UTC m=+34.317744725" observedRunningTime="2025-05-16 00:48:59.50917085 +0000 UTC m=+35.228517803" watchObservedRunningTime="2025-05-16 00:48:59.509375046 +0000 UTC m=+35.228721959" May 16 00:49:00.261248 kubelet[1561]: E0516 00:49:00.261200 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:01.261703 kubelet[1561]: E0516 00:49:01.261654 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:02.262682 kubelet[1561]: E0516 00:49:02.262639 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:03.262815 kubelet[1561]: E0516 00:49:03.262766 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:04.033936 update_engine[1307]: I0516 00:49:04.033495 1307 update_attempter.cc:509] Updating boot flags... 
May 16 00:49:04.263865 kubelet[1561]: E0516 00:49:04.263817 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:05.235080 kubelet[1561]: E0516 00:49:05.235039 1561 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:05.264588 kubelet[1561]: E0516 00:49:05.264566 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:06.265573 kubelet[1561]: E0516 00:49:06.265510 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:07.265968 kubelet[1561]: E0516 00:49:07.265890 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:08.266830 kubelet[1561]: E0516 00:49:08.266791 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:08.505760 kubelet[1561]: I0516 00:49:08.505695 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4f96\" (UniqueName: \"kubernetes.io/projected/6250d7f4-7981-42f2-9067-9bb4ba269b1e-kube-api-access-n4f96\") pod \"test-pod-1\" (UID: \"6250d7f4-7981-42f2-9067-9bb4ba269b1e\") " pod="default/test-pod-1" May 16 00:49:08.505760 kubelet[1561]: I0516 00:49:08.505753 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-377744fe-82cd-4596-86cb-56dea8c81e27\" (UniqueName: \"kubernetes.io/nfs/6250d7f4-7981-42f2-9067-9bb4ba269b1e-pvc-377744fe-82cd-4596-86cb-56dea8c81e27\") pod \"test-pod-1\" (UID: \"6250d7f4-7981-42f2-9067-9bb4ba269b1e\") " pod="default/test-pod-1" May 16 00:49:08.631134 kernel: FS-Cache: Loaded May 16 00:49:08.661341 kernel: RPC: Registered named UNIX socket transport module. May 16 00:49:08.661439 kernel: RPC: Registered udp transport module. May 16 00:49:08.661464 kernel: RPC: Registered tcp transport module. May 16 00:49:08.662599 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 16 00:49:08.707126 kernel: FS-Cache: Netfs 'nfs' registered for caching May 16 00:49:08.836325 kernel: NFS: Registering the id_resolver key type May 16 00:49:08.836426 kernel: Key type id_resolver registered May 16 00:49:08.836453 kernel: Key type id_legacy registered May 16 00:49:08.862912 nfsidmap[2890]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 16 00:49:08.866283 nfsidmap[2893]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 16 00:49:08.930334 env[1315]: time="2025-05-16T00:49:08.930222571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6250d7f4-7981-42f2-9067-9bb4ba269b1e,Namespace:default,Attempt:0,}" May 16 00:49:08.953888 systemd-networkd[1097]: lxcbc530588a15b: Link UP May 16 00:49:08.960154 kernel: eth0: renamed from tmpadaf3 May 16 00:49:08.971700 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:49:08.971788 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbc530588a15b: link becomes ready May 16 00:49:08.971899 systemd-networkd[1097]: lxcbc530588a15b: Gained carrier May 16 00:49:09.147262 env[1315]: time="2025-05-16T00:49:09.147202081Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:49:09.147262 env[1315]: time="2025-05-16T00:49:09.147239841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:49:09.147262 env[1315]: time="2025-05-16T00:49:09.147250200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:49:09.147605 env[1315]: time="2025-05-16T00:49:09.147574956Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/adaf33cbb8bc035c8994ebdf0328c509ee5a314e5caa3d770ef9d96e543f54c7 pid=2928 runtime=io.containerd.runc.v2 May 16 00:49:09.185672 systemd-resolved[1235]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:49:09.201262 env[1315]: time="2025-05-16T00:49:09.201226733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:6250d7f4-7981-42f2-9067-9bb4ba269b1e,Namespace:default,Attempt:0,} returns sandbox id \"adaf33cbb8bc035c8994ebdf0328c509ee5a314e5caa3d770ef9d96e543f54c7\"" May 16 00:49:09.203579 env[1315]: time="2025-05-16T00:49:09.203541021Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 00:49:09.267511 kubelet[1561]: E0516 00:49:09.267468 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:09.423836 env[1315]: time="2025-05-16T00:49:09.423780653Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:49:09.430212 env[1315]: time="2025-05-16T00:49:09.430175964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:49:09.432678 env[1315]: time="2025-05-16T00:49:09.432622410Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:49:09.433814 env[1315]: time="2025-05-16T00:49:09.433782954Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:49:09.434632 env[1315]: time="2025-05-16T00:49:09.434583983Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 16 00:49:09.439078 env[1315]: time="2025-05-16T00:49:09.438979042Z" level=info msg="CreateContainer within sandbox \"adaf33cbb8bc035c8994ebdf0328c509ee5a314e5caa3d770ef9d96e543f54c7\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 16 00:49:09.452805 env[1315]: time="2025-05-16T00:49:09.452741532Z" level=info msg="CreateContainer within sandbox \"adaf33cbb8bc035c8994ebdf0328c509ee5a314e5caa3d770ef9d96e543f54c7\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"cf405f7f5d461e59bd3d84c36f85f93a77052b8a86f426b618de72f0586e6b88\"" May 16 00:49:09.453483 env[1315]: time="2025-05-16T00:49:09.453449802Z" level=info msg="StartContainer for \"cf405f7f5d461e59bd3d84c36f85f93a77052b8a86f426b618de72f0586e6b88\"" May 16 00:49:09.535180 env[1315]: time="2025-05-16T00:49:09.535137911Z" level=info msg="StartContainer for \"cf405f7f5d461e59bd3d84c36f85f93a77052b8a86f426b618de72f0586e6b88\" returns successfully" May 16 00:49:10.267796 kubelet[1561]: E0516 00:49:10.267735 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:10.442284 systemd-networkd[1097]: lxcbc530588a15b: Gained IPv6LL May 16 00:49:10.527596 kubelet[1561]: I0516 00:49:10.527357 1561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.292966507 podStartE2EDuration="16.527342863s" podCreationTimestamp="2025-05-16 00:48:54 +0000 UTC" firstStartedPulling="2025-05-16 00:49:09.203177586 +0000 UTC m=+44.922524539" lastFinishedPulling="2025-05-16 00:49:09.437553942 +0000 UTC m=+45.156900895" observedRunningTime="2025-05-16 00:49:10.52678875 +0000 UTC m=+46.246135703" watchObservedRunningTime="2025-05-16 00:49:10.527342863 +0000 UTC m=+46.246689816" May 16 00:49:11.268004 kubelet[1561]: E0516 00:49:11.267963 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:12.268700 kubelet[1561]: E0516 00:49:12.268656 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:13.269787 kubelet[1561]: E0516 00:49:13.269741 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:14.270457 kubelet[1561]: E0516 00:49:14.270406 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:15.270776 kubelet[1561]: E0516 00:49:15.270703 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:16.271445 kubelet[1561]: E0516 00:49:16.271406 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 
00:49:16.678853 systemd[1]: run-containerd-runc-k8s.io-ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37-runc.Srt5AR.mount: Deactivated successfully. May 16 00:49:16.704178 env[1315]: time="2025-05-16T00:49:16.704060334Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:49:16.712982 env[1315]: time="2025-05-16T00:49:16.712943883Z" level=info msg="StopContainer for \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\" with timeout 2 (s)" May 16 00:49:16.713652 env[1315]: time="2025-05-16T00:49:16.713603196Z" level=info msg="Stop container \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\" with signal terminated" May 16 00:49:16.720475 systemd-networkd[1097]: lxc_health: Link DOWN May 16 00:49:16.720480 systemd-networkd[1097]: lxc_health: Lost carrier May 16 00:49:16.771029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37-rootfs.mount: Deactivated successfully. May 16 00:49:16.779237 env[1315]: time="2025-05-16T00:49:16.779180524Z" level=info msg="shim disconnected" id=ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37 May 16 00:49:16.779468 env[1315]: time="2025-05-16T00:49:16.779448682Z" level=warning msg="cleaning up after shim disconnected" id=ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37 namespace=k8s.io May 16 00:49:16.779528 env[1315]: time="2025-05-16T00:49:16.779516281Z" level=info msg="cleaning up dead shim" May 16 00:49:16.786421 env[1315]: time="2025-05-16T00:49:16.786382771Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:49:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3056 runtime=io.containerd.runc.v2\n" May 16 00:49:16.788606 env[1315]: time="2025-05-16T00:49:16.788570628Z" level=info msg="StopContainer for \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\" returns successfully" May 16 00:49:16.789610 env[1315]: time="2025-05-16T00:49:16.789383740Z" level=info msg="StopPodSandbox for \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\"" May 16 00:49:16.789610 env[1315]: time="2025-05-16T00:49:16.789437379Z" level=info msg="Container to stop \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:49:16.789610 env[1315]: time="2025-05-16T00:49:16.789451779Z" level=info msg="Container to stop \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:49:16.789610 env[1315]: time="2025-05-16T00:49:16.789463339Z" level=info msg="Container to stop \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:49:16.789610 env[1315]: time="2025-05-16T00:49:16.789477019Z" level=info msg="Container to stop \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:49:16.789610 env[1315]: time="2025-05-16T00:49:16.789488499Z" level=info msg="Container to stop \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" May 16 00:49:16.791270 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae-shm.mount: Deactivated successfully. May 16 00:49:16.822923 env[1315]: time="2025-05-16T00:49:16.822668239Z" level=info msg="shim disconnected" id=c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae May 16 00:49:16.822923 env[1315]: time="2025-05-16T00:49:16.822714158Z" level=warning msg="cleaning up after shim disconnected" id=c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae namespace=k8s.io May 16 00:49:16.822923 env[1315]: time="2025-05-16T00:49:16.822725718Z" level=info msg="cleaning up dead shim" May 16 00:49:16.829743 env[1315]: time="2025-05-16T00:49:16.829680367Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:49:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3088 runtime=io.containerd.runc.v2\n" May 16 00:49:16.830007 env[1315]: time="2025-05-16T00:49:16.829980044Z" level=info msg="TearDown network for sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" successfully" May 16 00:49:16.830040 env[1315]: time="2025-05-16T00:49:16.830007084Z" level=info msg="StopPodSandbox for \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" returns successfully" May 16 00:49:16.951241 kubelet[1561]: I0516 00:49:16.950354 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-clustermesh-secrets\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951241 kubelet[1561]: I0516 00:49:16.950406 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-bpf-maps\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951241 kubelet[1561]: I0516 00:49:16.950425 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-xtables-lock\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951241 kubelet[1561]: I0516 00:49:16.950441 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-host-proc-sys-net\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951241 kubelet[1561]: I0516 00:49:16.950456 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-host-proc-sys-kernel\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951241 kubelet[1561]: I0516 00:49:16.950471 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-etc-cni-netd\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951482 kubelet[1561]: I0516 00:49:16.950489 1561 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-dmt6h\" (UniqueName: \"kubernetes.io/projected/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-kube-api-access-dmt6h\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951482 kubelet[1561]: I0516 00:49:16.950505 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-cgroup\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951482 kubelet[1561]: I0516 00:49:16.950519 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cni-path\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951482 kubelet[1561]: I0516 00:49:16.950533 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-hostproc\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951482 kubelet[1561]: I0516 00:49:16.950547 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-lib-modules\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951482 kubelet[1561]: I0516 00:49:16.950564 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-config-path\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951615 kubelet[1561]: I0516 00:49:16.950581 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-run\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951615 kubelet[1561]: I0516 00:49:16.950598 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-hubble-tls\") pod \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\" (UID: \"4e0e21be-1d6f-4e21-a43f-a8c39b92009e\") " May 16 00:49:16.951615 kubelet[1561]: I0516 00:49:16.951271 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.951615 kubelet[1561]: I0516 00:49:16.951364 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.951615 kubelet[1561]: I0516 00:49:16.951400 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.951732 kubelet[1561]: I0516 00:49:16.951435 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-hostproc" (OuterVolumeSpecName: "hostproc") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.951732 kubelet[1561]: I0516 00:49:16.951451 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.951732 kubelet[1561]: I0516 00:49:16.951456 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.951802 kubelet[1561]: I0516 00:49:16.951774 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.951827 kubelet[1561]: I0516 00:49:16.951803 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.951827 kubelet[1561]: I0516 00:49:16.951805 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.951873 kubelet[1561]: I0516 00:49:16.951846 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cni-path" (OuterVolumeSpecName: "cni-path") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:16.953413 kubelet[1561]: I0516 00:49:16.953367 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:49:16.955278 kubelet[1561]: I0516 00:49:16.954203 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:49:16.955278 kubelet[1561]: I0516 00:49:16.954491 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-kube-api-access-dmt6h" (OuterVolumeSpecName: "kube-api-access-dmt6h") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "kube-api-access-dmt6h". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:49:16.958157 kubelet[1561]: I0516 00:49:16.957480 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4e0e21be-1d6f-4e21-a43f-a8c39b92009e" (UID: "4e0e21be-1d6f-4e21-a43f-a8c39b92009e"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:49:17.050896 kubelet[1561]: I0516 00:49:17.050831 1561 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-xtables-lock\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.050896 kubelet[1561]: I0516 00:49:17.050871 1561 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-host-proc-sys-net\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.050896 kubelet[1561]: I0516 00:49:17.050885 1561 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-host-proc-sys-kernel\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.050896 kubelet[1561]: I0516 00:49:17.050894 1561 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-etc-cni-netd\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.050896 kubelet[1561]: I0516 00:49:17.050902 1561 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-hostproc\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.050896 kubelet[1561]: I0516 00:49:17.050911 1561 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmt6h\" (UniqueName: \"kubernetes.io/projected/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-kube-api-access-dmt6h\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.051232 kubelet[1561]: I0516 00:49:17.050920 1561 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-cgroup\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.051232 kubelet[1561]: I0516 00:49:17.050928 1561 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cni-path\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.051232 kubelet[1561]: I0516 00:49:17.050936 1561 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-lib-modules\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.051232 kubelet[1561]: I0516 00:49:17.050944 1561 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-config-path\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.051232 kubelet[1561]: I0516 00:49:17.050952 1561 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-cilium-run\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.051232 kubelet[1561]: I0516 00:49:17.050961 1561 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-hubble-tls\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.051232 kubelet[1561]: I0516 00:49:17.050968 1561 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-clustermesh-secrets\") on node 
\"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.051232 kubelet[1561]: I0516 00:49:17.050976 1561 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4e0e21be-1d6f-4e21-a43f-a8c39b92009e-bpf-maps\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:17.272606 kubelet[1561]: E0516 00:49:17.272557 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:17.536840 kubelet[1561]: I0516 00:49:17.536690 1561 scope.go:117] "RemoveContainer" containerID="ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37" May 16 00:49:17.541117 env[1315]: time="2025-05-16T00:49:17.541060968Z" level=info msg="RemoveContainer for \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\"" May 16 00:49:17.544538 env[1315]: time="2025-05-16T00:49:17.544500535Z" level=info msg="RemoveContainer for \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\" returns successfully" May 16 00:49:17.544764 kubelet[1561]: I0516 00:49:17.544719 1561 scope.go:117] "RemoveContainer" containerID="14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508" May 16 00:49:17.545728 env[1315]: time="2025-05-16T00:49:17.545679963Z" level=info msg="RemoveContainer for \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\"" May 16 00:49:17.548535 env[1315]: time="2025-05-16T00:49:17.548498135Z" level=info msg="RemoveContainer for \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\" returns successfully" May 16 00:49:17.548680 kubelet[1561]: I0516 00:49:17.548655 1561 scope.go:117] "RemoveContainer" containerID="c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6" May 16 00:49:17.550025 env[1315]: time="2025-05-16T00:49:17.549709003Z" level=info msg="RemoveContainer for \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\"" May 16 00:49:17.553283 env[1315]: time="2025-05-16T00:49:17.552846172Z" level=info msg="RemoveContainer for \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\" returns successfully" May 16 00:49:17.553652 kubelet[1561]: I0516 00:49:17.553232 1561 scope.go:117] "RemoveContainer" containerID="bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc" May 16 00:49:17.555728 env[1315]: time="2025-05-16T00:49:17.555470786Z" level=info msg="RemoveContainer for \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\"" May 16 00:49:17.558084 env[1315]: time="2025-05-16T00:49:17.557932282Z" level=info msg="RemoveContainer for \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\" returns successfully" May 16 00:49:17.558339 kubelet[1561]: I0516 00:49:17.558296 1561 scope.go:117] "RemoveContainer" containerID="d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96" May 16 00:49:17.560085 env[1315]: time="2025-05-16T00:49:17.560038101Z" level=info msg="RemoveContainer for \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\"" May 16 00:49:17.562096 env[1315]: time="2025-05-16T00:49:17.562023962Z" level=info msg="RemoveContainer for \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\" returns successfully" May 16 00:49:17.562854 kubelet[1561]: I0516 00:49:17.562308 1561 scope.go:117] "RemoveContainer" containerID="ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37" May 16 00:49:17.562939 env[1315]: time="2025-05-16T00:49:17.562705995Z" level=error msg="ContainerStatus for 
\"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\": not found" May 16 00:49:17.563090 kubelet[1561]: E0516 00:49:17.563065 1561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\": not found" containerID="ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37" May 16 00:49:17.563257 kubelet[1561]: I0516 00:49:17.563171 1561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37"} err="failed to get container status \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba0d50adbac6c4785c2914770db45c92e1649f24aab116fb9be679b98d03ac37\": not found" May 16 00:49:17.563342 kubelet[1561]: I0516 00:49:17.563329 1561 scope.go:117] "RemoveContainer" containerID="14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508" May 16 00:49:17.563615 env[1315]: time="2025-05-16T00:49:17.563557987Z" level=error msg="ContainerStatus for \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\": not found" May 16 00:49:17.563726 kubelet[1561]: E0516 00:49:17.563701 1561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\": not found" containerID="14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508" May 16 00:49:17.563769 kubelet[1561]: I0516 00:49:17.563730 1561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508"} err="failed to get container status \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\": rpc error: code = NotFound desc = an error occurred when try to find container \"14543cfaf9599b93e009cb756ee58af5162919572e18bbbf5400aa4cae5a6508\": not found" May 16 00:49:17.563769 kubelet[1561]: I0516 00:49:17.563748 1561 scope.go:117] "RemoveContainer" containerID="c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6" May 16 00:49:17.564045 env[1315]: time="2025-05-16T00:49:17.563960783Z" level=error msg="ContainerStatus for \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\": not found" May 16 00:49:17.564139 kubelet[1561]: E0516 00:49:17.564089 1561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\": not found" containerID="c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6" May 16 00:49:17.564139 kubelet[1561]: I0516 00:49:17.564106 1561 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6"} err="failed to get container status \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"c7b6298e3e55e04689d75ed663a6387ed244b0bc346243e7927020e8891a45d6\": not found" May 16 00:49:17.564139 kubelet[1561]: I0516 00:49:17.564131 1561 scope.go:117] "RemoveContainer" containerID="bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc" May 16 00:49:17.564491 env[1315]: time="2025-05-16T00:49:17.564408858Z" level=error msg="ContainerStatus for \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\": not found" May 16 00:49:17.564550 kubelet[1561]: E0516 00:49:17.564527 1561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\": not found" containerID="bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc" May 16 00:49:17.564579 kubelet[1561]: I0516 00:49:17.564548 1561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc"} err="failed to get container status \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\": rpc error: code = NotFound desc = an error occurred when try to find container \"bf6584d0280edf953909cc8d552d98b1018bda3186aee33f772de4858a959efc\": not found" May 16 00:49:17.564579 kubelet[1561]: I0516 00:49:17.564564 1561 scope.go:117] "RemoveContainer" containerID="d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96" May 16 00:49:17.564833 env[1315]: time="2025-05-16T00:49:17.564770375Z" level=error msg="ContainerStatus for \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\": not found" May 16 00:49:17.565004 kubelet[1561]: E0516 00:49:17.564983 1561 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\": not found" containerID="d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96" May 16 00:49:17.565107 kubelet[1561]: I0516 00:49:17.565067 1561 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96"} err="failed to get container status \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\": rpc error: code = NotFound desc = an error occurred when try to find container \"d54ff57bbd2af9e683543d4c3c71f0ad45ac0cd9b1c821735e652d4dc19cbd96\": not found" May 16 00:49:17.675720 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae-rootfs.mount: Deactivated successfully. 
May 16 00:49:17.675866 systemd[1]: var-lib-kubelet-pods-4e0e21be\x2d1d6f\x2d4e21\x2da43f\x2da8c39b92009e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddmt6h.mount: Deactivated successfully. May 16 00:49:17.675950 systemd[1]: var-lib-kubelet-pods-4e0e21be\x2d1d6f\x2d4e21\x2da43f\x2da8c39b92009e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:49:17.676027 systemd[1]: var-lib-kubelet-pods-4e0e21be\x2d1d6f\x2d4e21\x2da43f\x2da8c39b92009e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:49:18.273731 kubelet[1561]: E0516 00:49:18.273682 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:19.274288 kubelet[1561]: E0516 00:49:19.274227 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:19.428300 kubelet[1561]: I0516 00:49:19.428250 1561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e0e21be-1d6f-4e21-a43f-a8c39b92009e" path="/var/lib/kubelet/pods/4e0e21be-1d6f-4e21-a43f-a8c39b92009e/volumes" May 16 00:49:19.542291 kubelet[1561]: E0516 00:49:19.542157 1561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e0e21be-1d6f-4e21-a43f-a8c39b92009e" containerName="clean-cilium-state" May 16 00:49:19.542291 kubelet[1561]: E0516 00:49:19.542194 1561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e0e21be-1d6f-4e21-a43f-a8c39b92009e" containerName="mount-cgroup" May 16 00:49:19.542291 kubelet[1561]: E0516 00:49:19.542201 1561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e0e21be-1d6f-4e21-a43f-a8c39b92009e" containerName="mount-bpf-fs" May 16 00:49:19.542291 kubelet[1561]: E0516 00:49:19.542209 1561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e0e21be-1d6f-4e21-a43f-a8c39b92009e" containerName="cilium-agent" May 16 00:49:19.542291 kubelet[1561]: E0516 00:49:19.542216 1561 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4e0e21be-1d6f-4e21-a43f-a8c39b92009e" containerName="apply-sysctl-overwrites" May 16 00:49:19.542291 kubelet[1561]: I0516 00:49:19.542236 1561 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e0e21be-1d6f-4e21-a43f-a8c39b92009e" containerName="cilium-agent" May 16 00:49:19.664654 kubelet[1561]: I0516 00:49:19.664611 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6stcw\" (UniqueName: \"kubernetes.io/projected/8635b28c-d37d-44e0-a386-cd84cb407b55-kube-api-access-6stcw\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664654 kubelet[1561]: I0516 00:49:19.664653 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zmgxv\" (UniqueName: \"kubernetes.io/projected/5cf6153b-4b18-411a-b8db-053ec2b9686b-kube-api-access-zmgxv\") pod \"cilium-operator-5d85765b45-j29fs\" (UID: \"5cf6153b-4b18-411a-b8db-053ec2b9686b\") " pod="kube-system/cilium-operator-5d85765b45-j29fs" May 16 00:49:19.664850 kubelet[1561]: I0516 00:49:19.664678 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-lib-modules\") pod \"cilium-mjpsw\" (UID: 
\"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664850 kubelet[1561]: I0516 00:49:19.664695 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-ipsec-secrets\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664850 kubelet[1561]: I0516 00:49:19.664711 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cni-path\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664850 kubelet[1561]: I0516 00:49:19.664726 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-etc-cni-netd\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664850 kubelet[1561]: I0516 00:49:19.664743 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-host-proc-sys-net\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664850 kubelet[1561]: I0516 00:49:19.664758 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-host-proc-sys-kernel\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664986 kubelet[1561]: I0516 00:49:19.664773 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-bpf-maps\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664986 kubelet[1561]: I0516 00:49:19.664788 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-hostproc\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664986 kubelet[1561]: I0516 00:49:19.664803 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-cgroup\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664986 kubelet[1561]: I0516 00:49:19.664818 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-xtables-lock\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664986 kubelet[1561]: I0516 00:49:19.664834 1561 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8635b28c-d37d-44e0-a386-cd84cb407b55-clustermesh-secrets\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.664986 kubelet[1561]: I0516 00:49:19.664850 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-config-path\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.665155 kubelet[1561]: I0516 00:49:19.664864 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8635b28c-d37d-44e0-a386-cd84cb407b55-hubble-tls\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.665155 kubelet[1561]: I0516 00:49:19.664881 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-run\") pod \"cilium-mjpsw\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " pod="kube-system/cilium-mjpsw" May 16 00:49:19.665155 kubelet[1561]: I0516 00:49:19.664897 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5cf6153b-4b18-411a-b8db-053ec2b9686b-cilium-config-path\") pod \"cilium-operator-5d85765b45-j29fs\" (UID: \"5cf6153b-4b18-411a-b8db-053ec2b9686b\") " pod="kube-system/cilium-operator-5d85765b45-j29fs" May 16 00:49:19.732039 kubelet[1561]: E0516 00:49:19.731981 1561 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-6stcw lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-mjpsw" podUID="8635b28c-d37d-44e0-a386-cd84cb407b55" May 16 00:49:19.846276 kubelet[1561]: E0516 00:49:19.846172 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:49:19.847269 env[1315]: time="2025-05-16T00:49:19.847206616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j29fs,Uid:5cf6153b-4b18-411a-b8db-053ec2b9686b,Namespace:kube-system,Attempt:0,}" May 16 00:49:19.860464 env[1315]: time="2025-05-16T00:49:19.860403856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:49:19.860556 env[1315]: time="2025-05-16T00:49:19.860481455Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:49:19.860556 env[1315]: time="2025-05-16T00:49:19.860518015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:49:19.860705 env[1315]: time="2025-05-16T00:49:19.860678053Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d10739a01dd4c2000805eddf011423971e5e64f0b7b1b8076356c8ab04cf863f pid=3123 runtime=io.containerd.runc.v2 May 16 00:49:19.904578 env[1315]: time="2025-05-16T00:49:19.903851179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-j29fs,Uid:5cf6153b-4b18-411a-b8db-053ec2b9686b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d10739a01dd4c2000805eddf011423971e5e64f0b7b1b8076356c8ab04cf863f\"" May 16 00:49:19.905652 kubelet[1561]: E0516 00:49:19.905203 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:49:19.906304 env[1315]: time="2025-05-16T00:49:19.906272077Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:49:20.275044 kubelet[1561]: E0516 00:49:20.274979 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:20.399535 kubelet[1561]: E0516 00:49:20.399489 1561 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:49:20.670147 kubelet[1561]: I0516 00:49:20.670008 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-config-path\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670147 kubelet[1561]: I0516 00:49:20.670051 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-ipsec-secrets\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670147 kubelet[1561]: I0516 00:49:20.670069 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cni-path\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670147 kubelet[1561]: I0516 00:49:20.670086 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-host-proc-sys-kernel\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670147 kubelet[1561]: I0516 00:49:20.670102 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-host-proc-sys-net\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670509 kubelet[1561]: I0516 00:49:20.670317 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-bpf-maps\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670509 kubelet[1561]: I0516 00:49:20.670340 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-hostproc\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670509 kubelet[1561]: I0516 00:49:20.670355 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-cgroup\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670509 kubelet[1561]: I0516 00:49:20.670369 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-lib-modules\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670509 kubelet[1561]: I0516 00:49:20.670387 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8635b28c-d37d-44e0-a386-cd84cb407b55-hubble-tls\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670509 kubelet[1561]: I0516 00:49:20.670410 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-run\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670648 kubelet[1561]: I0516 00:49:20.670428 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6stcw\" (UniqueName: \"kubernetes.io/projected/8635b28c-d37d-44e0-a386-cd84cb407b55-kube-api-access-6stcw\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670648 kubelet[1561]: I0516 00:49:20.670445 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-etc-cni-netd\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670648 kubelet[1561]: I0516 00:49:20.670460 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-xtables-lock\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.670648 kubelet[1561]: I0516 00:49:20.670488 1561 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8635b28c-d37d-44e0-a386-cd84cb407b55-clustermesh-secrets\") pod \"8635b28c-d37d-44e0-a386-cd84cb407b55\" (UID: \"8635b28c-d37d-44e0-a386-cd84cb407b55\") " May 16 00:49:20.671495 kubelet[1561]: I0516 00:49:20.670802 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-run" (OuterVolumeSpecName: "cilium-run") 
pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.671495 kubelet[1561]: I0516 00:49:20.670800 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.671495 kubelet[1561]: I0516 00:49:20.670846 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.671495 kubelet[1561]: I0516 00:49:20.671294 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.671495 kubelet[1561]: I0516 00:49:20.671327 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cni-path" (OuterVolumeSpecName: "cni-path") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.671675 kubelet[1561]: I0516 00:49:20.671350 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.671675 kubelet[1561]: I0516 00:49:20.671371 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.671675 kubelet[1561]: I0516 00:49:20.671388 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.671675 kubelet[1561]: I0516 00:49:20.671394 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-hostproc" (OuterVolumeSpecName: "hostproc") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.671675 kubelet[1561]: I0516 00:49:20.671403 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:49:20.672277 kubelet[1561]: I0516 00:49:20.672241 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:49:20.673369 kubelet[1561]: I0516 00:49:20.673335 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8635b28c-d37d-44e0-a386-cd84cb407b55-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:49:20.673710 kubelet[1561]: I0516 00:49:20.673679 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8635b28c-d37d-44e0-a386-cd84cb407b55-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:49:20.673843 kubelet[1561]: I0516 00:49:20.673818 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:49:20.675182 kubelet[1561]: I0516 00:49:20.675136 1561 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8635b28c-d37d-44e0-a386-cd84cb407b55-kube-api-access-6stcw" (OuterVolumeSpecName: "kube-api-access-6stcw") pod "8635b28c-d37d-44e0-a386-cd84cb407b55" (UID: "8635b28c-d37d-44e0-a386-cd84cb407b55"). InnerVolumeSpecName "kube-api-access-6stcw". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:49:20.772198 systemd[1]: var-lib-kubelet-pods-8635b28c\x2dd37d\x2d44e0\x2da386\x2dcd84cb407b55-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6stcw.mount: Deactivated successfully. May 16 00:49:20.772341 systemd[1]: var-lib-kubelet-pods-8635b28c\x2dd37d\x2d44e0\x2da386\x2dcd84cb407b55-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
May 16 00:49:20.772425 systemd[1]: var-lib-kubelet-pods-8635b28c\x2dd37d\x2d44e0\x2da386\x2dcd84cb407b55-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:49:20.772516 systemd[1]: var-lib-kubelet-pods-8635b28c\x2dd37d\x2d44e0\x2da386\x2dcd84cb407b55-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 16 00:49:20.772958 kubelet[1561]: I0516 00:49:20.772893 1561 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-lib-modules\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.772958 kubelet[1561]: I0516 00:49:20.772930 1561 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-host-proc-sys-net\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.772958 kubelet[1561]: I0516 00:49:20.772941 1561 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-bpf-maps\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.772958 kubelet[1561]: I0516 00:49:20.772949 1561 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-hostproc\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.772958 kubelet[1561]: I0516 00:49:20.772957 1561 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-cgroup\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.772958 kubelet[1561]: I0516 00:49:20.772966 1561 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-run\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.773150 kubelet[1561]: I0516 00:49:20.772974 1561 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8635b28c-d37d-44e0-a386-cd84cb407b55-hubble-tls\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.773150 kubelet[1561]: I0516 00:49:20.772981 1561 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-etc-cni-netd\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.773150 kubelet[1561]: I0516 00:49:20.772990 1561 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6stcw\" (UniqueName: \"kubernetes.io/projected/8635b28c-d37d-44e0-a386-cd84cb407b55-kube-api-access-6stcw\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.773150 kubelet[1561]: I0516 00:49:20.772998 1561 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-xtables-lock\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.773150 kubelet[1561]: I0516 00:49:20.773007 1561 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8635b28c-d37d-44e0-a386-cd84cb407b55-clustermesh-secrets\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.773150 kubelet[1561]: I0516 00:49:20.773014 1561 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-cni-path\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.773150 kubelet[1561]: I0516 00:49:20.773022 1561 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8635b28c-d37d-44e0-a386-cd84cb407b55-host-proc-sys-kernel\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.773150 kubelet[1561]: I0516 00:49:20.773030 1561 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-config-path\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:20.773361 kubelet[1561]: I0516 00:49:20.773038 1561 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8635b28c-d37d-44e0-a386-cd84cb407b55-cilium-ipsec-secrets\") on node \"10.0.0.110\" DevicePath \"\"" May 16 00:49:21.275516 kubelet[1561]: E0516 00:49:21.275457 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:49:21.303080 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount605901442.mount: Deactivated successfully. May 16 00:49:21.677484 kubelet[1561]: I0516 00:49:21.677230 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-cilium-cgroup\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs" May 16 00:49:21.677484 kubelet[1561]: I0516 00:49:21.677283 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-etc-cni-netd\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs" May 16 00:49:21.677484 kubelet[1561]: I0516 00:49:21.677303 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-host-proc-sys-net\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs" May 16 00:49:21.677484 kubelet[1561]: I0516 00:49:21.677321 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-cilium-run\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs" May 16 00:49:21.677484 kubelet[1561]: I0516 00:49:21.677345 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-cni-path\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs" May 16 00:49:21.677484 kubelet[1561]: I0516 00:49:21.677362 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f2af1768-31c9-4a99-baa1-09b4d2c5db46-cilium-ipsec-secrets\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs" May 16 00:49:21.677702 kubelet[1561]: I0516 
00:49:21.677377 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-host-proc-sys-kernel\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs"
May 16 00:49:21.677702 kubelet[1561]: I0516 00:49:21.677394 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-bpf-maps\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs"
May 16 00:49:21.677702 kubelet[1561]: I0516 00:49:21.677409 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-hostproc\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs"
May 16 00:49:21.677702 kubelet[1561]: I0516 00:49:21.677432 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f2af1768-31c9-4a99-baa1-09b4d2c5db46-hubble-tls\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs"
May 16 00:49:21.677702 kubelet[1561]: I0516 00:49:21.677448 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-xtables-lock\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs"
May 16 00:49:21.677702 kubelet[1561]: I0516 00:49:21.677463 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f2af1768-31c9-4a99-baa1-09b4d2c5db46-clustermesh-secrets\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs"
May 16 00:49:21.677833 kubelet[1561]: I0516 00:49:21.677477 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f2af1768-31c9-4a99-baa1-09b4d2c5db46-cilium-config-path\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs"
May 16 00:49:21.677833 kubelet[1561]: I0516 00:49:21.677575 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z26gx\" (UniqueName: \"kubernetes.io/projected/f2af1768-31c9-4a99-baa1-09b4d2c5db46-kube-api-access-z26gx\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs"
May 16 00:49:21.677833 kubelet[1561]: I0516 00:49:21.677594 1561 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f2af1768-31c9-4a99-baa1-09b4d2c5db46-lib-modules\") pod \"cilium-6hrcs\" (UID: \"f2af1768-31c9-4a99-baa1-09b4d2c5db46\") " pod="kube-system/cilium-6hrcs"
May 16 00:49:21.886217 kubelet[1561]: E0516 00:49:21.886177 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:21.907253 env[1315]: time="2025-05-16T00:49:21.905529654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hrcs,Uid:f2af1768-31c9-4a99-baa1-09b4d2c5db46,Namespace:kube-system,Attempt:0,}"
May 16 00:49:21.948790 env[1315]: time="2025-05-16T00:49:21.943220574Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:49:21.948790 env[1315]: time="2025-05-16T00:49:21.943258973Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:49:21.948790 env[1315]: time="2025-05-16T00:49:21.943269773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:49:21.948790 env[1315]: time="2025-05-16T00:49:21.943381412Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134 pid=3172 runtime=io.containerd.runc.v2
May 16 00:49:21.995501 env[1315]: time="2025-05-16T00:49:21.995392690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6hrcs,Uid:f2af1768-31c9-4a99-baa1-09b4d2c5db46,Namespace:kube-system,Attempt:0,} returns sandbox id \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\""
May 16 00:49:21.996259 kubelet[1561]: E0516 00:49:21.996227 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:21.998197 env[1315]: time="2025-05-16T00:49:21.998170466Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 16 00:49:22.003123 env[1315]: time="2025-05-16T00:49:22.003061545Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:49:22.005285 env[1315]: time="2025-05-16T00:49:22.005247727Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:49:22.008709 env[1315]: time="2025-05-16T00:49:22.008655059Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:49:22.008997 env[1315]: time="2025-05-16T00:49:22.008957177Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 16 00:49:22.010198 env[1315]: time="2025-05-16T00:49:22.010132287Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"feda87b2f3e0e849486a825c503900d896d275c137566a2c15ee082910936f5a\""
May 16 00:49:22.010759 env[1315]: time="2025-05-16T00:49:22.010737082Z" level=info msg="StartContainer for \"feda87b2f3e0e849486a825c503900d896d275c137566a2c15ee082910936f5a\""
May 16 00:49:22.011350 env[1315]: time="2025-05-16T00:49:22.011316357Z" level=info msg="CreateContainer within sandbox \"d10739a01dd4c2000805eddf011423971e5e64f0b7b1b8076356c8ab04cf863f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 16 00:49:22.021061 env[1315]: time="2025-05-16T00:49:22.021019637Z" level=info msg="CreateContainer within sandbox \"d10739a01dd4c2000805eddf011423971e5e64f0b7b1b8076356c8ab04cf863f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"f346b9c49d8103cec5d6951e9f30a10ecf39740a5fddbfff150851d02b83681e\""
May 16 00:49:22.021624 env[1315]: time="2025-05-16T00:49:22.021574233Z" level=info msg="StartContainer for \"f346b9c49d8103cec5d6951e9f30a10ecf39740a5fddbfff150851d02b83681e\""
May 16 00:49:22.082425 env[1315]: time="2025-05-16T00:49:22.082382373Z" level=info msg="StartContainer for \"feda87b2f3e0e849486a825c503900d896d275c137566a2c15ee082910936f5a\" returns successfully"
May 16 00:49:22.099849 env[1315]: time="2025-05-16T00:49:22.099794029Z" level=info msg="StartContainer for \"f346b9c49d8103cec5d6951e9f30a10ecf39740a5fddbfff150851d02b83681e\" returns successfully"
May 16 00:49:22.162450 env[1315]: time="2025-05-16T00:49:22.162336515Z" level=info msg="shim disconnected" id=feda87b2f3e0e849486a825c503900d896d275c137566a2c15ee082910936f5a
May 16 00:49:22.162450 env[1315]: time="2025-05-16T00:49:22.162392635Z" level=warning msg="cleaning up after shim disconnected" id=feda87b2f3e0e849486a825c503900d896d275c137566a2c15ee082910936f5a namespace=k8s.io
May 16 00:49:22.162450 env[1315]: time="2025-05-16T00:49:22.162401034Z" level=info msg="cleaning up dead shim"
May 16 00:49:22.176701 env[1315]: time="2025-05-16T00:49:22.174589054Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:49:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3296 runtime=io.containerd.runc.v2\n"
May 16 00:49:22.276277 kubelet[1561]: E0516 00:49:22.276240 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:22.548337 kubelet[1561]: E0516 00:49:22.548092 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:22.549475 kubelet[1561]: E0516 00:49:22.549439 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:22.551547 env[1315]: time="2025-05-16T00:49:22.551495914Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 16 00:49:22.557415 kubelet[1561]: I0516 00:49:22.557355 1561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-j29fs" podStartSLOduration=1.4532255379999999 podStartE2EDuration="3.557341346s" podCreationTimestamp="2025-05-16 00:49:19 +0000 UTC" firstStartedPulling="2025-05-16 00:49:19.906019679 +0000 UTC m=+55.625366592" lastFinishedPulling="2025-05-16 00:49:22.010135447 +0000 UTC m=+57.729482400" observedRunningTime="2025-05-16 00:49:22.556511153 +0000 UTC m=+58.275858106" watchObservedRunningTime="2025-05-16 00:49:22.557341346 +0000 UTC m=+58.276688299"
May 16 00:49:22.563648 env[1315]: time="2025-05-16T00:49:22.563602535Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"75c8acfbfd012796e49617327e33037c6c30c17723d3abf2d2b65a6615ec91d8\""
May 16 00:49:22.564076 env[1315]: time="2025-05-16T00:49:22.564039891Z" level=info msg="StartContainer for \"75c8acfbfd012796e49617327e33037c6c30c17723d3abf2d2b65a6615ec91d8\""
May 16 00:49:22.608635 env[1315]: time="2025-05-16T00:49:22.608581245Z" level=info msg="StartContainer for \"75c8acfbfd012796e49617327e33037c6c30c17723d3abf2d2b65a6615ec91d8\" returns successfully"
May 16 00:49:22.631579 env[1315]: time="2025-05-16T00:49:22.631520856Z" level=info msg="shim disconnected" id=75c8acfbfd012796e49617327e33037c6c30c17723d3abf2d2b65a6615ec91d8
May 16 00:49:22.631579 env[1315]: time="2025-05-16T00:49:22.631578135Z" level=warning msg="cleaning up after shim disconnected" id=75c8acfbfd012796e49617327e33037c6c30c17723d3abf2d2b65a6615ec91d8 namespace=k8s.io
May 16 00:49:22.631791 env[1315]: time="2025-05-16T00:49:22.631587815Z" level=info msg="cleaning up dead shim"
May 16 00:49:22.637499 env[1315]: time="2025-05-16T00:49:22.637466767Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:49:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3360 runtime=io.containerd.runc.v2\n"
May 16 00:49:23.276869 kubelet[1561]: E0516 00:49:23.276832 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:23.428761 kubelet[1561]: I0516 00:49:23.428728 1561 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8635b28c-d37d-44e0-a386-cd84cb407b55" path="/var/lib/kubelet/pods/8635b28c-d37d-44e0-a386-cd84cb407b55/volumes"
May 16 00:49:23.552439 kubelet[1561]: E0516 00:49:23.552234 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:23.552709 kubelet[1561]: E0516 00:49:23.552680 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:23.554803 env[1315]: time="2025-05-16T00:49:23.554760728Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 16 00:49:23.566681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount203108024.mount: Deactivated successfully.
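Note on the recurring dns.go:153 "Nameserver limits exceeded" errors above: the kubelet, like the glibc resolver, honours at most three nameservers, so when the node's effective resolv.conf lists more than three it drops the surplus entries and logs this error; the applied set here is 1.1.1.1, 1.0.0.1 and 8.8.8.8. The check itself is simple. The following is a minimal, illustrative Go sketch of the same rule (this is not the kubelet's code, and the /etc/resolv.conf path is only an example):

    // nslimit.go - illustrative sketch: keep at most three nameserver
    // entries from a resolv.conf-style file, warning about the rest.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // same limit the kubelet and glibc apply

    func main() {
        f, err := os.Open("/etc/resolv.conf") // example path, adjust as needed
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if err := sc.Err(); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        if len(servers) > maxNameservers {
            fmt.Printf("nameserver limits exceeded: keeping %v, dropping %v\n",
                servers[:maxNameservers], servers[maxNameservers:])
        } else {
            fmt.Printf("nameservers: %v\n", servers)
        }
    }
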
May 16 00:49:23.570161 env[1315]: time="2025-05-16T00:49:23.570092046Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"755800117c70f608ed63c83eddbee8500172c109dee283fa50fdffade23509f1\""
May 16 00:49:23.570779 env[1315]: time="2025-05-16T00:49:23.570739961Z" level=info msg="StartContainer for \"755800117c70f608ed63c83eddbee8500172c109dee283fa50fdffade23509f1\""
May 16 00:49:23.617900 env[1315]: time="2025-05-16T00:49:23.617854626Z" level=info msg="StartContainer for \"755800117c70f608ed63c83eddbee8500172c109dee283fa50fdffade23509f1\" returns successfully"
May 16 00:49:23.639794 env[1315]: time="2025-05-16T00:49:23.639746772Z" level=info msg="shim disconnected" id=755800117c70f608ed63c83eddbee8500172c109dee283fa50fdffade23509f1
May 16 00:49:23.639794 env[1315]: time="2025-05-16T00:49:23.639790451Z" level=warning msg="cleaning up after shim disconnected" id=755800117c70f608ed63c83eddbee8500172c109dee283fa50fdffade23509f1 namespace=k8s.io
May 16 00:49:23.639794 env[1315]: time="2025-05-16T00:49:23.639799731Z" level=info msg="cleaning up dead shim"
May 16 00:49:23.645844 env[1315]: time="2025-05-16T00:49:23.645809603Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:49:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3418 runtime=io.containerd.runc.v2\n"
May 16 00:49:23.783062 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-755800117c70f608ed63c83eddbee8500172c109dee283fa50fdffade23509f1-rootfs.mount: Deactivated successfully.
May 16 00:49:24.277980 kubelet[1561]: E0516 00:49:24.277919 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:24.555708 kubelet[1561]: E0516 00:49:24.555539 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:24.557413 env[1315]: time="2025-05-16T00:49:24.557377688Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 16 00:49:24.572498 env[1315]: time="2025-05-16T00:49:24.572436172Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"02e5d5d5360a6ce6e921fec566d02a6301e3c98e27b3bf7cc0504d31844264bd\""
May 16 00:49:24.572993 env[1315]: time="2025-05-16T00:49:24.572971807Z" level=info msg="StartContainer for \"02e5d5d5360a6ce6e921fec566d02a6301e3c98e27b3bf7cc0504d31844264bd\""
May 16 00:49:24.635124 env[1315]: time="2025-05-16T00:49:24.635073129Z" level=info msg="StartContainer for \"02e5d5d5360a6ce6e921fec566d02a6301e3c98e27b3bf7cc0504d31844264bd\" returns successfully"
May 16 00:49:24.650501 env[1315]: time="2025-05-16T00:49:24.650460650Z" level=info msg="shim disconnected" id=02e5d5d5360a6ce6e921fec566d02a6301e3c98e27b3bf7cc0504d31844264bd
May 16 00:49:24.650755 env[1315]: time="2025-05-16T00:49:24.650719928Z" level=warning msg="cleaning up after shim disconnected" id=02e5d5d5360a6ce6e921fec566d02a6301e3c98e27b3bf7cc0504d31844264bd namespace=k8s.io
May 16 00:49:24.650843 env[1315]: time="2025-05-16T00:49:24.650828327Z" level=info msg="cleaning up dead shim"
May 16 00:49:24.656894 env[1315]: time="2025-05-16T00:49:24.656864761Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:49:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3472 runtime=io.containerd.runc.v2\n"
May 16 00:49:24.783129 systemd[1]: run-containerd-runc-k8s.io-02e5d5d5360a6ce6e921fec566d02a6301e3c98e27b3bf7cc0504d31844264bd-runc.2e3Cs6.mount: Deactivated successfully.
May 16 00:49:24.783275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02e5d5d5360a6ce6e921fec566d02a6301e3c98e27b3bf7cc0504d31844264bd-rootfs.mount: Deactivated successfully.
May 16 00:49:25.235105 kubelet[1561]: E0516 00:49:25.235059 1561 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:25.278039 kubelet[1561]: E0516 00:49:25.278012 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:25.278771 env[1315]: time="2025-05-16T00:49:25.278717033Z" level=info msg="StopPodSandbox for \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\""
May 16 00:49:25.278860 env[1315]: time="2025-05-16T00:49:25.278819032Z" level=info msg="TearDown network for sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" successfully"
May 16 00:49:25.278860 env[1315]: time="2025-05-16T00:49:25.278855272Z" level=info msg="StopPodSandbox for \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" returns successfully"
May 16 00:49:25.279308 env[1315]: time="2025-05-16T00:49:25.279279189Z" level=info msg="RemovePodSandbox for \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\""
May 16 00:49:25.279386 env[1315]: time="2025-05-16T00:49:25.279309509Z" level=info msg="Forcibly stopping sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\""
May 16 00:49:25.279412 env[1315]: time="2025-05-16T00:49:25.279382548Z" level=info msg="TearDown network for sandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" successfully"
May 16 00:49:25.282053 env[1315]: time="2025-05-16T00:49:25.282017249Z" level=info msg="RemovePodSandbox \"c6714b16dbc1f4dca39c6ac01dd9cd8be5ee7fd14c1b0beed7ff9cd8c91e56ae\" returns successfully"
May 16 00:49:25.400894 kubelet[1561]: E0516 00:49:25.400851 1561 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 00:49:25.559156 kubelet[1561]: E0516 00:49:25.559095 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:25.560776 env[1315]: time="2025-05-16T00:49:25.560735766Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 16 00:49:25.571323 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3872447154.mount: Deactivated successfully.
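Note on the repeated CreateContainer/StartContainer pairs that end in "shim disconnected" and "cleaning up dead shim": these are the Cilium init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) running to completion inside the cilium-6hrcs sandbox; each exits within a second or two, after which containerd tears down its runc shim, so the warnings are expected for short-lived containers. State like this can be inspected programmatically. The sketch below is illustrative only; it assumes the default containerd socket and uses the k8s.io namespace shown in the entries above:

    // ctrstatus.go - illustrative sketch using the containerd Go client:
    // list containers in the k8s.io namespace and report whether each one
    // still has a running task (runc shim) or has already been cleaned up.
    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/errdefs"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            task, err := c.Task(ctx, nil)
            if errdefs.IsNotFound(err) {
                // No task: the container ran and its shim was already torn
                // down, which is what the "shim disconnected" entries record.
                fmt.Printf("%s: no task (shim cleaned up)\n", c.ID())
                continue
            } else if err != nil {
                log.Fatal(err)
            }
            st, err := task.Status(ctx)
            if err != nil {
                log.Fatal(err)
            }
            fmt.Printf("%s: %s\n", c.ID(), st.Status)
        }
    }
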
May 16 00:49:25.575099 env[1315]: time="2025-05-16T00:49:25.575034579Z" level=info msg="CreateContainer within sandbox \"e486330da70a566c6b70b40d1b3fbb998913a9d62a31f74c3231cacf4afb7134\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"af94f205c6bfc6b6a1165b3f7df52ca18729955d7e44b01cc5136b8309b1d34e\""
May 16 00:49:25.575669 env[1315]: time="2025-05-16T00:49:25.575642734Z" level=info msg="StartContainer for \"af94f205c6bfc6b6a1165b3f7df52ca18729955d7e44b01cc5136b8309b1d34e\""
May 16 00:49:25.626514 env[1315]: time="2025-05-16T00:49:25.626460915Z" level=info msg="StartContainer for \"af94f205c6bfc6b6a1165b3f7df52ca18729955d7e44b01cc5136b8309b1d34e\" returns successfully"
May 16 00:49:25.880139 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
May 16 00:49:26.279007 kubelet[1561]: E0516 00:49:26.278967 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:26.563695 kubelet[1561]: E0516 00:49:26.563500 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:26.710027 kubelet[1561]: I0516 00:49:26.709976 1561 setters.go:600] "Node became not ready" node="10.0.0.110" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T00:49:26Z","lastTransitionTime":"2025-05-16T00:49:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 16 00:49:27.279777 kubelet[1561]: E0516 00:49:27.279720 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:27.887404 kubelet[1561]: E0516 00:49:27.887360 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:28.156967 systemd[1]: run-containerd-runc-k8s.io-af94f205c6bfc6b6a1165b3f7df52ca18729955d7e44b01cc5136b8309b1d34e-runc.yGLPJs.mount: Deactivated successfully.
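Note on the kubelet.go:2902 and setters.go:600 entries: the node is marked NotReady only because the CNI plugin is not yet initialized; once the cilium-agent container started above brings the datapath up and writes its CNI configuration, the Ready condition recovers (the lxc_health entries that follow are part of that). The condition can be read with client-go; a minimal sketch, assuming a kubeconfig at /etc/kubernetes/kubeconfig (that path is an assumption; the node name 10.0.0.110 is taken from the log):

    // nodeready.go - illustrative client-go sketch: print the Ready
    // condition that the kubelet is updating in the log above.
    package main

    import (
        "context"
        "fmt"
        "log"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        node, err := cs.CoreV1().Nodes().Get(context.Background(), "10.0.0.110", metav1.GetOptions{})
        if err != nil {
            log.Fatal(err)
        }
        for _, cond := range node.Status.Conditions {
            if cond.Type == corev1.NodeReady {
                fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
            }
        }
    }
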
May 16 00:49:28.280054 kubelet[1561]: E0516 00:49:28.279999 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:28.643491 systemd-networkd[1097]: lxc_health: Link UP
May 16 00:49:28.654184 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 16 00:49:28.654017 systemd-networkd[1097]: lxc_health: Gained carrier
May 16 00:49:29.280631 kubelet[1561]: E0516 00:49:29.280573 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:29.834287 systemd-networkd[1097]: lxc_health: Gained IPv6LL
May 16 00:49:29.888748 kubelet[1561]: E0516 00:49:29.888705 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:29.903403 kubelet[1561]: I0516 00:49:29.903339 1561 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6hrcs" podStartSLOduration=8.903325785 podStartE2EDuration="8.903325785s" podCreationTimestamp="2025-05-16 00:49:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:49:26.585928074 +0000 UTC m=+62.305275027" watchObservedRunningTime="2025-05-16 00:49:29.903325785 +0000 UTC m=+65.622672738"
May 16 00:49:30.280753 kubelet[1561]: E0516 00:49:30.280685 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:30.570479 kubelet[1561]: E0516 00:49:30.570381 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:31.281544 kubelet[1561]: E0516 00:49:31.281491 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:31.572478 kubelet[1561]: E0516 00:49:31.572370 1561 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:49:32.281816 kubelet[1561]: E0516 00:49:32.281756 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:33.282616 kubelet[1561]: E0516 00:49:33.282573 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:34.282709 kubelet[1561]: E0516 00:49:34.282673 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:34.505663 systemd[1]: run-containerd-runc-k8s.io-af94f205c6bfc6b6a1165b3f7df52ca18729955d7e44b01cc5136b8309b1d34e-runc.rt4dfv.mount: Deactivated successfully.
May 16 00:49:35.283592 kubelet[1561]: E0516 00:49:35.283526 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:49:36.284866 kubelet[1561]: E0516 00:49:36.284824 1561 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
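Note on the pod_startup_latency_tracker.go:104 entries: the figures are consistent with podStartE2EDuration being measured from pod creation to the observed running state, and podStartSLOduration being the same interval minus the image-pull window. For cilium-6hrcs no pull was recorded (zero-value pull timestamps), so both values agree: 00:49:29.903325785 - 00:49:21 = 8.903325785 s. For cilium-operator-5d85765b45-j29fs, logged earlier, the pull window on the monotonic clock is m=+57.729482400 - m=+55.625366592 = 2.104115808 s, and 3.557341346 s - 2.104115808 s = 1.453225538 s, which matches the logged podStartSLOduration of 1.4532255379999999.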