May 15 10:16:01.806134 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 15 10:16:01.806153 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 09:09:56 -00 2025 May 15 10:16:01.806161 kernel: efi: EFI v2.70 by EDK II May 15 10:16:01.806167 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 15 10:16:01.806172 kernel: random: crng init done May 15 10:16:01.806177 kernel: ACPI: Early table checksum verification disabled May 15 10:16:01.806184 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 15 10:16:01.806191 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 15 10:16:01.806196 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:01.806201 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:01.806207 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:01.806212 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:01.806217 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:01.806222 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:01.806230 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:01.806236 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:01.806242 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 15 10:16:01.806248 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 15 10:16:01.806254 kernel: NUMA: Failed to initialise from firmware May 15 10:16:01.806259 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 15 10:16:01.806265 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff] May 15 10:16:01.806271 kernel: Zone ranges: May 15 10:16:01.806276 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 15 10:16:01.806283 kernel: DMA32 empty May 15 10:16:01.806289 kernel: Normal empty May 15 10:16:01.806295 kernel: Movable zone start for each node May 15 10:16:01.806300 kernel: Early memory node ranges May 15 10:16:01.806306 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 15 10:16:01.806312 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 15 10:16:01.806317 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 15 10:16:01.806323 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 15 10:16:01.806328 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 15 10:16:01.806334 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 15 10:16:01.806340 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 15 10:16:01.806345 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 15 10:16:01.806352 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 15 10:16:01.806358 kernel: psci: probing for conduit method from ACPI. May 15 10:16:01.806363 kernel: psci: PSCIv1.1 detected in firmware. 
May 15 10:16:01.806369 kernel: psci: Using standard PSCI v0.2 function IDs May 15 10:16:01.806375 kernel: psci: Trusted OS migration not required May 15 10:16:01.806383 kernel: psci: SMC Calling Convention v1.1 May 15 10:16:01.806389 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 15 10:16:01.806397 kernel: ACPI: SRAT not present May 15 10:16:01.806403 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 15 10:16:01.806409 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 15 10:16:01.806415 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 15 10:16:01.806421 kernel: Detected PIPT I-cache on CPU0 May 15 10:16:01.806427 kernel: CPU features: detected: GIC system register CPU interface May 15 10:16:01.806433 kernel: CPU features: detected: Hardware dirty bit management May 15 10:16:01.806439 kernel: CPU features: detected: Spectre-v4 May 15 10:16:01.806445 kernel: CPU features: detected: Spectre-BHB May 15 10:16:01.806452 kernel: CPU features: kernel page table isolation forced ON by KASLR May 15 10:16:01.806491 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 15 10:16:01.806497 kernel: CPU features: detected: ARM erratum 1418040 May 15 10:16:01.806503 kernel: CPU features: detected: SSBS not fully self-synchronizing May 15 10:16:01.806509 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 15 10:16:01.806515 kernel: Policy zone: DMA May 15 10:16:01.806522 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7 May 15 10:16:01.806536 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 15 10:16:01.806542 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 15 10:16:01.806548 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 15 10:16:01.806554 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 15 10:16:01.806563 kernel: Memory: 2457400K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114888K reserved, 0K cma-reserved) May 15 10:16:01.806569 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 15 10:16:01.806575 kernel: trace event string verifier disabled May 15 10:16:01.806581 kernel: rcu: Preemptible hierarchical RCU implementation. May 15 10:16:01.806587 kernel: rcu: RCU event tracing is enabled. May 15 10:16:01.806593 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 15 10:16:01.806600 kernel: Trampoline variant of Tasks RCU enabled. May 15 10:16:01.806606 kernel: Tracing variant of Tasks RCU enabled. May 15 10:16:01.806612 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 15 10:16:01.806618 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 15 10:16:01.806624 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 15 10:16:01.806631 kernel: GICv3: 256 SPIs implemented May 15 10:16:01.806637 kernel: GICv3: 0 Extended SPIs implemented May 15 10:16:01.806643 kernel: GICv3: Distributor has no Range Selector support May 15 10:16:01.806649 kernel: Root IRQ handler: gic_handle_irq May 15 10:16:01.806655 kernel: GICv3: 16 PPIs implemented May 15 10:16:01.806661 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 15 10:16:01.806666 kernel: ACPI: SRAT not present May 15 10:16:01.806672 kernel: ITS [mem 0x08080000-0x0809ffff] May 15 10:16:01.806678 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 15 10:16:01.806685 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 15 10:16:01.806691 kernel: GICv3: using LPI property table @0x00000000400d0000 May 15 10:16:01.806697 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 15 10:16:01.806704 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 10:16:01.806710 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 15 10:16:01.806717 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 15 10:16:01.806723 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 15 10:16:01.806729 kernel: arm-pv: using stolen time PV May 15 10:16:01.806735 kernel: Console: colour dummy device 80x25 May 15 10:16:01.806741 kernel: ACPI: Core revision 20210730 May 15 10:16:01.806748 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 15 10:16:01.806754 kernel: pid_max: default: 32768 minimum: 301 May 15 10:16:01.806760 kernel: LSM: Security Framework initializing May 15 10:16:01.806768 kernel: SELinux: Initializing. May 15 10:16:01.806774 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 10:16:01.806780 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 15 10:16:01.806786 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 15 10:16:01.806792 kernel: rcu: Hierarchical SRCU implementation. May 15 10:16:01.806799 kernel: Platform MSI: ITS@0x8080000 domain created May 15 10:16:01.806805 kernel: PCI/MSI: ITS@0x8080000 domain created May 15 10:16:01.806811 kernel: Remapping and enabling EFI services. May 15 10:16:01.806817 kernel: smp: Bringing up secondary CPUs ... 
May 15 10:16:01.806825 kernel: Detected PIPT I-cache on CPU1 May 15 10:16:01.806831 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 15 10:16:01.806837 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 15 10:16:01.806844 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 10:16:01.806850 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 15 10:16:01.806856 kernel: Detected PIPT I-cache on CPU2 May 15 10:16:01.806862 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 15 10:16:01.806868 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 15 10:16:01.806875 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 10:16:01.806881 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 15 10:16:01.806888 kernel: Detected PIPT I-cache on CPU3 May 15 10:16:01.806894 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 15 10:16:01.806900 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 15 10:16:01.806907 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 15 10:16:01.806917 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 15 10:16:01.806925 kernel: smp: Brought up 1 node, 4 CPUs May 15 10:16:01.806931 kernel: SMP: Total of 4 processors activated. May 15 10:16:01.806938 kernel: CPU features: detected: 32-bit EL0 Support May 15 10:16:01.806944 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 15 10:16:01.806951 kernel: CPU features: detected: Common not Private translations May 15 10:16:01.806957 kernel: CPU features: detected: CRC32 instructions May 15 10:16:01.806964 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 15 10:16:01.806971 kernel: CPU features: detected: LSE atomic instructions May 15 10:16:01.806978 kernel: CPU features: detected: Privileged Access Never May 15 10:16:01.806985 kernel: CPU features: detected: RAS Extension Support May 15 10:16:01.806991 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 15 10:16:01.806998 kernel: CPU: All CPU(s) started at EL1 May 15 10:16:01.807005 kernel: alternatives: patching kernel code May 15 10:16:01.807011 kernel: devtmpfs: initialized May 15 10:16:01.807018 kernel: KASLR enabled May 15 10:16:01.807024 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 15 10:16:01.807031 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 15 10:16:01.807037 kernel: pinctrl core: initialized pinctrl subsystem May 15 10:16:01.807044 kernel: SMBIOS 3.0.0 present. 
May 15 10:16:01.807050 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 15 10:16:01.807057 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 15 10:16:01.807065 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 15 10:16:01.807071 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 15 10:16:01.807078 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 15 10:16:01.807084 kernel: audit: initializing netlink subsys (disabled) May 15 10:16:01.807091 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1 May 15 10:16:01.807097 kernel: thermal_sys: Registered thermal governor 'step_wise' May 15 10:16:01.807104 kernel: cpuidle: using governor menu May 15 10:16:01.807110 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 15 10:16:01.807117 kernel: ASID allocator initialised with 32768 entries May 15 10:16:01.807128 kernel: ACPI: bus type PCI registered May 15 10:16:01.807135 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 15 10:16:01.807142 kernel: Serial: AMBA PL011 UART driver May 15 10:16:01.807148 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 15 10:16:01.807155 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 15 10:16:01.807161 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 15 10:16:01.807168 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 15 10:16:01.807174 kernel: cryptd: max_cpu_qlen set to 1000 May 15 10:16:01.807181 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 15 10:16:01.807188 kernel: ACPI: Added _OSI(Module Device) May 15 10:16:01.807195 kernel: ACPI: Added _OSI(Processor Device) May 15 10:16:01.807201 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 15 10:16:01.807208 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 15 10:16:01.807214 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 15 10:16:01.807221 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 15 10:16:01.807227 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 15 10:16:01.807234 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 15 10:16:01.807240 kernel: ACPI: Interpreter enabled May 15 10:16:01.807248 kernel: ACPI: Using GIC for interrupt routing May 15 10:16:01.807254 kernel: ACPI: MCFG table detected, 1 entries May 15 10:16:01.807261 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 15 10:16:01.807267 kernel: printk: console [ttyAMA0] enabled May 15 10:16:01.807274 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 15 10:16:01.807393 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 15 10:16:01.807500 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 15 10:16:01.807576 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 15 10:16:01.807635 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 15 10:16:01.807692 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 15 10:16:01.807700 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 15 10:16:01.807707 kernel: PCI host bridge to bus 0000:00 May 15 10:16:01.807773 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 15 10:16:01.807827 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 15 10:16:01.807879 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 15 10:16:01.807934 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 15 10:16:01.808008 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 15 10:16:01.808087 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 15 10:16:01.808149 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 15 10:16:01.808210 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 15 10:16:01.808270 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 15 10:16:01.808332 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 15 10:16:01.808393 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 15 10:16:01.808453 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 15 10:16:01.808530 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 15 10:16:01.808587 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 15 10:16:01.808642 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 15 10:16:01.808651 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 15 10:16:01.808658 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 15 10:16:01.808666 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 15 10:16:01.808673 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 15 10:16:01.808680 kernel: iommu: Default domain type: Translated May 15 10:16:01.808687 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 15 10:16:01.808693 kernel: vgaarb: loaded May 15 10:16:01.808700 kernel: pps_core: LinuxPPS API ver. 1 registered May 15 10:16:01.808707 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 15 10:16:01.808714 kernel: PTP clock support registered May 15 10:16:01.808721 kernel: Registered efivars operations May 15 10:16:01.808729 kernel: clocksource: Switched to clocksource arch_sys_counter May 15 10:16:01.808735 kernel: VFS: Disk quotas dquot_6.6.0 May 15 10:16:01.808742 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 15 10:16:01.808749 kernel: pnp: PnP ACPI init May 15 10:16:01.808817 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 15 10:16:01.808831 kernel: pnp: PnP ACPI: found 1 devices May 15 10:16:01.808838 kernel: NET: Registered PF_INET protocol family May 15 10:16:01.808845 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 15 10:16:01.808853 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 15 10:16:01.808860 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 15 10:16:01.808867 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 15 10:16:01.808874 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 15 10:16:01.808881 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 15 10:16:01.808888 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 10:16:01.808895 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 15 10:16:01.808901 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 15 10:16:01.808908 kernel: PCI: CLS 0 bytes, default 64 May 15 10:16:01.808916 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 15 10:16:01.808923 kernel: kvm [1]: HYP mode not available May 15 10:16:01.808930 kernel: Initialise system trusted keyrings May 15 10:16:01.808936 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 15 10:16:01.808943 kernel: Key type asymmetric registered May 15 10:16:01.808949 kernel: Asymmetric key parser 'x509' registered May 15 10:16:01.808956 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 15 10:16:01.808963 kernel: io scheduler mq-deadline registered May 15 10:16:01.808970 kernel: io scheduler kyber registered May 15 10:16:01.808978 kernel: io scheduler bfq registered May 15 10:16:01.808985 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 15 10:16:01.808991 kernel: ACPI: button: Power Button [PWRB] May 15 10:16:01.808998 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 15 10:16:01.809061 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 15 10:16:01.809070 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 15 10:16:01.809077 kernel: thunder_xcv, ver 1.0 May 15 10:16:01.809084 kernel: thunder_bgx, ver 1.0 May 15 10:16:01.809090 kernel: nicpf, ver 1.0 May 15 10:16:01.809098 kernel: nicvf, ver 1.0 May 15 10:16:01.809168 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 15 10:16:01.809225 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T10:16:01 UTC (1747304161) May 15 10:16:01.809234 kernel: hid: raw HID events driver (C) Jiri Kosina May 15 10:16:01.809240 kernel: NET: Registered PF_INET6 protocol family May 15 10:16:01.809247 kernel: Segment Routing with IPv6 May 15 10:16:01.809254 kernel: In-situ OAM (IOAM) with IPv6 May 15 10:16:01.809260 kernel: NET: Registered PF_PACKET protocol family May 15 10:16:01.809268 kernel: Key type 
dns_resolver registered May 15 10:16:01.809275 kernel: registered taskstats version 1 May 15 10:16:01.809282 kernel: Loading compiled-in X.509 certificates May 15 10:16:01.809288 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 3679cbfb4d4756a2ddc177f0eaedea33fb5fdf2e' May 15 10:16:01.809295 kernel: Key type .fscrypt registered May 15 10:16:01.809301 kernel: Key type fscrypt-provisioning registered May 15 10:16:01.809308 kernel: ima: No TPM chip found, activating TPM-bypass! May 15 10:16:01.809315 kernel: ima: Allocated hash algorithm: sha1 May 15 10:16:01.809322 kernel: ima: No architecture policies found May 15 10:16:01.809330 kernel: clk: Disabling unused clocks May 15 10:16:01.809336 kernel: Freeing unused kernel memory: 36416K May 15 10:16:01.809343 kernel: Run /init as init process May 15 10:16:01.809350 kernel: with arguments: May 15 10:16:01.809356 kernel: /init May 15 10:16:01.809362 kernel: with environment: May 15 10:16:01.809369 kernel: HOME=/ May 15 10:16:01.809375 kernel: TERM=linux May 15 10:16:01.809382 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 15 10:16:01.809392 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 10:16:01.809401 systemd[1]: Detected virtualization kvm. May 15 10:16:01.809408 systemd[1]: Detected architecture arm64. May 15 10:16:01.809416 systemd[1]: Running in initrd. May 15 10:16:01.809423 systemd[1]: No hostname configured, using default hostname. May 15 10:16:01.809430 systemd[1]: Hostname set to . May 15 10:16:01.809437 systemd[1]: Initializing machine ID from VM UUID. May 15 10:16:01.809446 systemd[1]: Queued start job for default target initrd.target. May 15 10:16:01.809453 systemd[1]: Started systemd-ask-password-console.path. May 15 10:16:01.809470 systemd[1]: Reached target cryptsetup.target. May 15 10:16:01.809477 systemd[1]: Reached target paths.target. May 15 10:16:01.809484 systemd[1]: Reached target slices.target. May 15 10:16:01.809491 systemd[1]: Reached target swap.target. May 15 10:16:01.809498 systemd[1]: Reached target timers.target. May 15 10:16:01.809506 systemd[1]: Listening on iscsid.socket. May 15 10:16:01.809514 systemd[1]: Listening on iscsiuio.socket. May 15 10:16:01.809521 systemd[1]: Listening on systemd-journald-audit.socket. May 15 10:16:01.809533 systemd[1]: Listening on systemd-journald-dev-log.socket. May 15 10:16:01.809541 systemd[1]: Listening on systemd-journald.socket. May 15 10:16:01.809548 systemd[1]: Listening on systemd-networkd.socket. May 15 10:16:01.809555 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:16:01.809562 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:16:01.809569 systemd[1]: Reached target sockets.target. May 15 10:16:01.809578 systemd[1]: Starting kmod-static-nodes.service... May 15 10:16:01.809586 systemd[1]: Finished network-cleanup.service. May 15 10:16:01.809593 systemd[1]: Starting systemd-fsck-usr.service... May 15 10:16:01.809600 systemd[1]: Starting systemd-journald.service... May 15 10:16:01.809607 systemd[1]: Starting systemd-modules-load.service... May 15 10:16:01.809614 systemd[1]: Starting systemd-resolved.service... May 15 10:16:01.809622 systemd[1]: Starting systemd-vconsole-setup.service... 
May 15 10:16:01.809629 systemd[1]: Finished kmod-static-nodes.service. May 15 10:16:01.809636 systemd[1]: Finished systemd-fsck-usr.service. May 15 10:16:01.809644 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 15 10:16:01.809652 systemd[1]: Finished systemd-vconsole-setup.service. May 15 10:16:01.809659 systemd[1]: Starting dracut-cmdline-ask.service... May 15 10:16:01.809666 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 15 10:16:01.809673 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 15 10:16:01.809681 kernel: audit: type=1130 audit(1747304161.807:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.809692 systemd-journald[291]: Journal started May 15 10:16:01.809735 systemd-journald[291]: Runtime Journal (/run/log/journal/e417184722fe4474a6d221eb8335b766) is 6.0M, max 48.7M, 42.6M free. May 15 10:16:01.807000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.781601 systemd-modules-load[292]: Inserted module 'overlay' May 15 10:16:01.814053 systemd[1]: Started systemd-journald.service. May 15 10:16:01.814072 kernel: audit: type=1130 audit(1747304161.810:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.819938 systemd-modules-load[292]: Inserted module 'br_netfilter' May 15 10:16:01.821604 kernel: Bridge firewalling registered May 15 10:16:01.821802 systemd[1]: Finished dracut-cmdline-ask.service. May 15 10:16:01.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.823587 systemd[1]: Starting dracut-cmdline.service... May 15 10:16:01.827764 kernel: audit: type=1130 audit(1747304161.821:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.828371 systemd-resolved[293]: Positive Trust Anchors: May 15 10:16:01.828386 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 10:16:01.828510 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 10:16:01.835007 systemd-resolved[293]: Defaulting to hostname 'linux'. May 15 10:16:01.835936 systemd[1]: Started systemd-resolved.service. 
May 15 10:16:01.839495 kernel: audit: type=1130 audit(1747304161.836:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.839568 dracut-cmdline[309]: dracut-dracut-053 May 15 10:16:01.841535 kernel: SCSI subsystem initialized May 15 10:16:01.837005 systemd[1]: Reached target nss-lookup.target. May 15 10:16:01.842136 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=aa29d2e9841b6b978238db9eff73afa5af149616ae25608914babb265d82dda7 May 15 10:16:01.851169 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 15 10:16:01.851221 kernel: device-mapper: uevent: version 1.0.3 May 15 10:16:01.851231 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 15 10:16:01.857319 systemd-modules-load[292]: Inserted module 'dm_multipath' May 15 10:16:01.858497 systemd[1]: Finished systemd-modules-load.service. May 15 10:16:01.858000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.860067 systemd[1]: Starting systemd-sysctl.service... May 15 10:16:01.862667 kernel: audit: type=1130 audit(1747304161.858:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.869051 systemd[1]: Finished systemd-sysctl.service. May 15 10:16:01.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.873507 kernel: audit: type=1130 audit(1747304161.869:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.914489 kernel: Loading iSCSI transport class v2.0-870. May 15 10:16:01.927503 kernel: iscsi: registered transport (tcp) May 15 10:16:01.942485 kernel: iscsi: registered transport (qla4xxx) May 15 10:16:01.942530 kernel: QLogic iSCSI HBA Driver May 15 10:16:01.977493 systemd[1]: Finished dracut-cmdline.service. May 15 10:16:01.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:01.978995 systemd[1]: Starting dracut-pre-udev.service... May 15 10:16:01.981540 kernel: audit: type=1130 audit(1747304161.977:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:02.027494 kernel: raid6: neonx8 gen() 13741 MB/s May 15 10:16:02.044489 kernel: raid6: neonx8 xor() 10689 MB/s May 15 10:16:02.061478 kernel: raid6: neonx4 gen() 13519 MB/s May 15 10:16:02.078475 kernel: raid6: neonx4 xor() 11161 MB/s May 15 10:16:02.095504 kernel: raid6: neonx2 gen() 13004 MB/s May 15 10:16:02.112477 kernel: raid6: neonx2 xor() 10370 MB/s May 15 10:16:02.129474 kernel: raid6: neonx1 gen() 10564 MB/s May 15 10:16:02.146475 kernel: raid6: neonx1 xor() 8770 MB/s May 15 10:16:02.163472 kernel: raid6: int64x8 gen() 6272 MB/s May 15 10:16:02.180472 kernel: raid6: int64x8 xor() 3542 MB/s May 15 10:16:02.197480 kernel: raid6: int64x4 gen() 7205 MB/s May 15 10:16:02.214474 kernel: raid6: int64x4 xor() 3854 MB/s May 15 10:16:02.231475 kernel: raid6: int64x2 gen() 6153 MB/s May 15 10:16:02.248492 kernel: raid6: int64x2 xor() 3317 MB/s May 15 10:16:02.265476 kernel: raid6: int64x1 gen() 5047 MB/s May 15 10:16:02.282799 kernel: raid6: int64x1 xor() 2646 MB/s May 15 10:16:02.282809 kernel: raid6: using algorithm neonx8 gen() 13741 MB/s May 15 10:16:02.282818 kernel: raid6: .... xor() 10689 MB/s, rmw enabled May 15 10:16:02.282835 kernel: raid6: using neon recovery algorithm May 15 10:16:02.294932 kernel: xor: measuring software checksum speed May 15 10:16:02.294954 kernel: 8regs : 17191 MB/sec May 15 10:16:02.294963 kernel: 32regs : 20723 MB/sec May 15 10:16:02.295921 kernel: arm64_neon : 27710 MB/sec May 15 10:16:02.295932 kernel: xor: using function: arm64_neon (27710 MB/sec) May 15 10:16:02.350484 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 15 10:16:02.360401 systemd[1]: Finished dracut-pre-udev.service. May 15 10:16:02.364101 kernel: audit: type=1130 audit(1747304162.360:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:02.364124 kernel: audit: type=1334 audit(1747304162.362:10): prog-id=7 op=LOAD May 15 10:16:02.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:02.362000 audit: BPF prog-id=7 op=LOAD May 15 10:16:02.363000 audit: BPF prog-id=8 op=LOAD May 15 10:16:02.364473 systemd[1]: Starting systemd-udevd.service... May 15 10:16:02.376570 systemd-udevd[492]: Using default interface naming scheme 'v252'. May 15 10:16:02.379937 systemd[1]: Started systemd-udevd.service. May 15 10:16:02.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:02.382019 systemd[1]: Starting dracut-pre-trigger.service... May 15 10:16:02.404944 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation May 15 10:16:02.431866 systemd[1]: Finished dracut-pre-trigger.service. May 15 10:16:02.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:02.433589 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:16:02.475224 systemd[1]: Finished systemd-udev-trigger.service. 
May 15 10:16:02.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:02.509586 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 10:16:02.513647 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 10:16:02.513668 kernel: GPT:9289727 != 19775487 May 15 10:16:02.513677 kernel: GPT:Alternate GPT header not at the end of the disk. May 15 10:16:02.513686 kernel: GPT:9289727 != 19775487 May 15 10:16:02.513695 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 10:16:02.513704 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:16:02.523485 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 15 10:16:02.528476 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (537) May 15 10:16:02.530924 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 15 10:16:02.532056 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 15 10:16:02.536400 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 15 10:16:02.542411 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:16:02.544268 systemd[1]: Starting disk-uuid.service... May 15 10:16:02.550112 disk-uuid[562]: Primary Header is updated. May 15 10:16:02.550112 disk-uuid[562]: Secondary Entries is updated. May 15 10:16:02.550112 disk-uuid[562]: Secondary Header is updated. May 15 10:16:02.552763 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:16:03.572343 disk-uuid[563]: The operation has completed successfully. May 15 10:16:03.573317 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 10:16:03.599065 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 10:16:03.599163 systemd[1]: Finished disk-uuid.service. May 15 10:16:03.600651 systemd[1]: Starting verity-setup.service... May 15 10:16:03.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.619072 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 10:16:03.648303 systemd[1]: Found device dev-mapper-usr.device. May 15 10:16:03.650508 systemd[1]: Mounting sysusr-usr.mount... May 15 10:16:03.652240 systemd[1]: Finished verity-setup.service. May 15 10:16:03.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.703479 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 15 10:16:03.703910 systemd[1]: Mounted sysusr-usr.mount. May 15 10:16:03.704664 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 15 10:16:03.705543 systemd[1]: Starting ignition-setup.service... May 15 10:16:03.707287 systemd[1]: Starting parse-ip-for-networkd.service... 
May 15 10:16:03.720589 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 10:16:03.720638 kernel: BTRFS info (device vda6): using free space tree May 15 10:16:03.720648 kernel: BTRFS info (device vda6): has skinny extents May 15 10:16:03.728683 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 10:16:03.733898 systemd[1]: Finished ignition-setup.service. May 15 10:16:03.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.735339 systemd[1]: Starting ignition-fetch-offline.service... May 15 10:16:03.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.797834 systemd[1]: Finished parse-ip-for-networkd.service. May 15 10:16:03.798000 audit: BPF prog-id=9 op=LOAD May 15 10:16:03.799912 systemd[1]: Starting systemd-networkd.service... May 15 10:16:03.830202 systemd-networkd[739]: lo: Link UP May 15 10:16:03.830211 systemd-networkd[739]: lo: Gained carrier May 15 10:16:03.830963 systemd-networkd[739]: Enumeration completed May 15 10:16:03.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.831055 systemd[1]: Started systemd-networkd.service. May 15 10:16:03.831861 systemd[1]: Reached target network.target. May 15 10:16:03.833882 systemd[1]: Starting iscsiuio.service... May 15 10:16:03.835391 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:16:03.837279 systemd-networkd[739]: eth0: Link UP May 15 10:16:03.837283 systemd-networkd[739]: eth0: Gained carrier May 15 10:16:03.844004 systemd[1]: Started iscsiuio.service. May 15 10:16:03.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.845865 systemd[1]: Starting iscsid.service... May 15 10:16:03.849641 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 15 10:16:03.849641 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 15 10:16:03.849641 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 15 10:16:03.849641 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. May 15 10:16:03.849641 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 15 10:16:03.849641 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 15 10:16:03.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:03.859709 ignition[658]: Ignition 2.14.0 May 15 10:16:03.856943 systemd[1]: Started iscsid.service. May 15 10:16:03.859716 ignition[658]: Stage: fetch-offline May 15 10:16:03.858667 systemd[1]: Starting dracut-initqueue.service... May 15 10:16:03.859752 ignition[658]: no configs at "/usr/lib/ignition/base.d" May 15 10:16:03.859760 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:03.859891 ignition[658]: parsed url from cmdline: "" May 15 10:16:03.859895 ignition[658]: no config URL provided May 15 10:16:03.867555 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:16:03.859899 ignition[658]: reading system config file "/usr/lib/ignition/user.ign" May 15 10:16:03.859906 ignition[658]: no config at "/usr/lib/ignition/user.ign" May 15 10:16:03.859928 ignition[658]: op(1): [started] loading QEMU firmware config module May 15 10:16:03.859933 ignition[658]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 10:16:03.866872 ignition[658]: op(1): [finished] loading QEMU firmware config module May 15 10:16:03.875683 systemd[1]: Finished dracut-initqueue.service. May 15 10:16:03.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.876563 systemd[1]: Reached target remote-fs-pre.target. May 15 10:16:03.878061 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:16:03.879663 systemd[1]: Reached target remote-fs.target. May 15 10:16:03.882003 systemd[1]: Starting dracut-pre-mount.service... May 15 10:16:03.882526 ignition[658]: parsing config with SHA512: 36e05fb4debb8a1cb951fa31c92a5177202087d468bf33e607473643276885643bc4f896954e0468a424f85000c5d3e221e630e13e115b2c022bac4036ff4974 May 15 10:16:03.889060 unknown[658]: fetched base config from "system" May 15 10:16:03.889385 ignition[658]: fetch-offline: fetch-offline passed May 15 10:16:03.889071 unknown[658]: fetched user config from "qemu" May 15 10:16:03.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.889436 ignition[658]: Ignition finished successfully May 15 10:16:03.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.890706 systemd[1]: Finished ignition-fetch-offline.service. May 15 10:16:03.892014 systemd[1]: Finished dracut-pre-mount.service. May 15 10:16:03.893210 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 10:16:03.894051 systemd[1]: Starting ignition-kargs.service... May 15 10:16:03.902823 ignition[760]: Ignition 2.14.0 May 15 10:16:03.902833 ignition[760]: Stage: kargs May 15 10:16:03.902928 ignition[760]: no configs at "/usr/lib/ignition/base.d" May 15 10:16:03.904848 systemd[1]: Finished ignition-kargs.service. May 15 10:16:03.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:03.902938 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:03.903583 ignition[760]: kargs: kargs passed May 15 10:16:03.907169 systemd[1]: Starting ignition-disks.service... May 15 10:16:03.903628 ignition[760]: Ignition finished successfully May 15 10:16:03.914111 ignition[766]: Ignition 2.14.0 May 15 10:16:03.914123 ignition[766]: Stage: disks May 15 10:16:03.914219 ignition[766]: no configs at "/usr/lib/ignition/base.d" May 15 10:16:03.915651 systemd[1]: Finished ignition-disks.service. May 15 10:16:03.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.914228 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:03.917276 systemd[1]: Reached target initrd-root-device.target. May 15 10:16:03.914934 ignition[766]: disks: disks passed May 15 10:16:03.918558 systemd[1]: Reached target local-fs-pre.target. May 15 10:16:03.914976 ignition[766]: Ignition finished successfully May 15 10:16:03.920176 systemd[1]: Reached target local-fs.target. May 15 10:16:03.921603 systemd[1]: Reached target sysinit.target. May 15 10:16:03.922744 systemd[1]: Reached target basic.target. May 15 10:16:03.924923 systemd[1]: Starting systemd-fsck-root.service... May 15 10:16:03.935813 systemd-fsck[774]: ROOT: clean, 623/553520 files, 56022/553472 blocks May 15 10:16:03.939906 systemd[1]: Finished systemd-fsck-root.service. May 15 10:16:03.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.941647 systemd[1]: Mounting sysroot.mount... May 15 10:16:03.946475 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 15 10:16:03.946770 systemd[1]: Mounted sysroot.mount. May 15 10:16:03.947514 systemd[1]: Reached target initrd-root-fs.target. May 15 10:16:03.950368 systemd[1]: Mounting sysroot-usr.mount... May 15 10:16:03.951261 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 15 10:16:03.951301 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 10:16:03.951322 systemd[1]: Reached target ignition-diskful.target. May 15 10:16:03.953194 systemd[1]: Mounted sysroot-usr.mount. May 15 10:16:03.954813 systemd[1]: Starting initrd-setup-root.service... May 15 10:16:03.959037 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory May 15 10:16:03.962432 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory May 15 10:16:03.966254 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory May 15 10:16:03.970216 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory May 15 10:16:03.997134 systemd[1]: Finished initrd-setup-root.service. May 15 10:16:03.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:03.998592 systemd[1]: Starting ignition-mount.service... May 15 10:16:03.999755 systemd[1]: Starting sysroot-boot.service... 
May 15 10:16:04.004241 bash[825]: umount: /sysroot/usr/share/oem: not mounted. May 15 10:16:04.012346 ignition[827]: INFO : Ignition 2.14.0 May 15 10:16:04.012346 ignition[827]: INFO : Stage: mount May 15 10:16:04.014292 ignition[827]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:16:04.014292 ignition[827]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:04.014292 ignition[827]: INFO : mount: mount passed May 15 10:16:04.014292 ignition[827]: INFO : Ignition finished successfully May 15 10:16:04.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:04.015477 systemd[1]: Finished ignition-mount.service. May 15 10:16:04.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:04.018062 systemd[1]: Finished sysroot-boot.service. May 15 10:16:04.663150 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 15 10:16:04.668472 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (837) May 15 10:16:04.669899 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 10:16:04.669929 kernel: BTRFS info (device vda6): using free space tree May 15 10:16:04.669946 kernel: BTRFS info (device vda6): has skinny extents May 15 10:16:04.673088 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 15 10:16:04.674752 systemd[1]: Starting ignition-files.service... May 15 10:16:04.688331 ignition[857]: INFO : Ignition 2.14.0 May 15 10:16:04.688331 ignition[857]: INFO : Stage: files May 15 10:16:04.689895 ignition[857]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:16:04.689895 ignition[857]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:04.689895 ignition[857]: DEBUG : files: compiled without relabeling support, skipping May 15 10:16:04.692604 ignition[857]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 10:16:04.692604 ignition[857]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 10:16:04.697759 ignition[857]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 10:16:04.698884 ignition[857]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 10:16:04.700164 unknown[857]: wrote ssh authorized keys file for user: core May 15 10:16:04.701133 ignition[857]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 10:16:04.701133 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 15 10:16:04.701133 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 15 10:16:04.701133 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 10:16:04.706511 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 10:16:04.706511 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 10:16:04.706511 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 10:16:04.706511 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 10:16:04.706511 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-arm64.raw: attempt #1 May 15 10:16:05.035038 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 15 10:16:05.235554 ignition[857]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-arm64.raw" May 15 10:16:05.235554 ignition[857]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 15 10:16:05.238176 ignition[857]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 10:16:05.238176 ignition[857]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 10:16:05.238176 ignition[857]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 15 10:16:05.238176 ignition[857]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 15 10:16:05.238176 ignition[857]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 10:16:05.272002 ignition[857]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 10:16:05.273157 ignition[857]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 15 10:16:05.273157 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 10:16:05.273157 ignition[857]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 10:16:05.273157 ignition[857]: INFO : files: files passed May 15 10:16:05.273157 ignition[857]: INFO : Ignition finished successfully May 15 10:16:05.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.273379 systemd[1]: Finished ignition-files.service. May 15 10:16:05.275925 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 15 10:16:05.281925 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 15 10:16:05.282000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:05.277028 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 15 10:16:05.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.285447 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 10:16:05.277695 systemd[1]: Starting ignition-quench.service... May 15 10:16:05.281422 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 10:16:05.281514 systemd[1]: Finished ignition-quench.service. May 15 10:16:05.283156 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 15 10:16:05.284270 systemd[1]: Reached target ignition-complete.target. May 15 10:16:05.286629 systemd[1]: Starting initrd-parse-etc.service... May 15 10:16:05.299012 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 10:16:05.299115 systemd[1]: Finished initrd-parse-etc.service. May 15 10:16:05.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.299000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.300446 systemd[1]: Reached target initrd-fs.target. May 15 10:16:05.301298 systemd[1]: Reached target initrd.target. May 15 10:16:05.302280 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 15 10:16:05.303067 systemd[1]: Starting dracut-pre-pivot.service... May 15 10:16:05.313469 systemd[1]: Finished dracut-pre-pivot.service. May 15 10:16:05.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.314954 systemd[1]: Starting initrd-cleanup.service... May 15 10:16:05.323285 systemd[1]: Stopped target nss-lookup.target. May 15 10:16:05.324265 systemd[1]: Stopped target remote-cryptsetup.target. May 15 10:16:05.325598 systemd[1]: Stopped target timers.target. May 15 10:16:05.326814 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 10:16:05.327000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.326933 systemd[1]: Stopped dracut-pre-pivot.service. May 15 10:16:05.328063 systemd[1]: Stopped target initrd.target. May 15 10:16:05.329249 systemd[1]: Stopped target basic.target. May 15 10:16:05.330348 systemd[1]: Stopped target ignition-complete.target. May 15 10:16:05.331598 systemd[1]: Stopped target ignition-diskful.target. May 15 10:16:05.332816 systemd[1]: Stopped target initrd-root-device.target. May 15 10:16:05.334120 systemd[1]: Stopped target remote-fs.target. May 15 10:16:05.335375 systemd[1]: Stopped target remote-fs-pre.target. May 15 10:16:05.336656 systemd[1]: Stopped target sysinit.target. May 15 10:16:05.337786 systemd[1]: Stopped target local-fs.target. 
May 15 10:16:05.339043 systemd[1]: Stopped target local-fs-pre.target. May 15 10:16:05.340256 systemd[1]: Stopped target swap.target. May 15 10:16:05.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.341420 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 10:16:05.341565 systemd[1]: Stopped dracut-pre-mount.service. May 15 10:16:05.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.342724 systemd[1]: Stopped target cryptsetup.target. May 15 10:16:05.345000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.343776 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 10:16:05.343878 systemd[1]: Stopped dracut-initqueue.service. May 15 10:16:05.345177 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 10:16:05.345287 systemd[1]: Stopped ignition-fetch-offline.service. May 15 10:16:05.346435 systemd[1]: Stopped target paths.target. May 15 10:16:05.347447 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 10:16:05.351498 systemd[1]: Stopped systemd-ask-password-console.path. May 15 10:16:05.353111 systemd[1]: Stopped target slices.target. May 15 10:16:05.353959 systemd[1]: Stopped target sockets.target. May 15 10:16:05.355163 systemd[1]: iscsid.socket: Deactivated successfully. May 15 10:16:05.355239 systemd[1]: Closed iscsid.socket. May 15 10:16:05.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.356237 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 10:16:05.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.356344 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 15 10:16:05.357662 systemd[1]: ignition-files.service: Deactivated successfully. May 15 10:16:05.357758 systemd[1]: Stopped ignition-files.service. May 15 10:16:05.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.359576 systemd[1]: Stopping ignition-mount.service... May 15 10:16:05.360780 systemd[1]: Stopping iscsiuio.service... May 15 10:16:05.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:05.366613 ignition[897]: INFO : Ignition 2.14.0 May 15 10:16:05.366613 ignition[897]: INFO : Stage: umount May 15 10:16:05.366613 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 10:16:05.366613 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 10:16:05.367000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.362542 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 10:16:05.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.371323 ignition[897]: INFO : umount: umount passed May 15 10:16:05.371323 ignition[897]: INFO : Ignition finished successfully May 15 10:16:05.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.362678 systemd[1]: Stopped kmod-static-nodes.service. May 15 10:16:05.364080 systemd[1]: Stopping sysroot-boot.service... May 15 10:16:05.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.364662 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 10:16:05.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.364793 systemd[1]: Stopped systemd-udev-trigger.service. May 15 10:16:05.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.366005 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 10:16:05.366093 systemd[1]: Stopped dracut-pre-trigger.service. May 15 10:16:05.369188 systemd[1]: iscsiuio.service: Deactivated successfully. May 15 10:16:05.369284 systemd[1]: Stopped iscsiuio.service. May 15 10:16:05.370969 systemd[1]: ignition-mount.service: Deactivated successfully. May 15 10:16:05.371048 systemd[1]: Stopped ignition-mount.service. May 15 10:16:05.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.372400 systemd[1]: Stopped target network.target. May 15 10:16:05.373181 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 10:16:05.373213 systemd[1]: Closed iscsiuio.socket. May 15 10:16:05.374255 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 10:16:05.374296 systemd[1]: Stopped ignition-disks.service. 
May 15 10:16:05.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.375438 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 10:16:05.375491 systemd[1]: Stopped ignition-kargs.service. May 15 10:16:05.376421 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 10:16:05.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.376466 systemd[1]: Stopped ignition-setup.service. May 15 10:16:05.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.378759 systemd[1]: Stopping systemd-networkd.service... May 15 10:16:05.379875 systemd[1]: Stopping systemd-resolved.service... May 15 10:16:05.381595 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 10:16:05.382128 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 10:16:05.382220 systemd[1]: Finished initrd-cleanup.service. May 15 10:16:05.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.384560 systemd-networkd[739]: eth0: DHCPv6 lease lost May 15 10:16:05.399000 audit: BPF prog-id=9 op=UNLOAD May 15 10:16:05.386065 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 10:16:05.386148 systemd[1]: Stopped systemd-networkd.service. May 15 10:16:05.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.387226 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 10:16:05.387254 systemd[1]: Closed systemd-networkd.socket. May 15 10:16:05.388743 systemd[1]: Stopping network-cleanup.service... May 15 10:16:05.390575 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 10:16:05.406000 audit: BPF prog-id=6 op=UNLOAD May 15 10:16:05.390634 systemd[1]: Stopped parse-ip-for-networkd.service. May 15 10:16:05.391353 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:16:05.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.391391 systemd[1]: Stopped systemd-sysctl.service. May 15 10:16:05.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.396121 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 10:16:05.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.396176 systemd[1]: Stopped systemd-modules-load.service. May 15 10:16:05.396998 systemd[1]: Stopping systemd-udevd.service... 
May 15 10:16:05.401158 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 15 10:16:05.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.401645 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 10:16:05.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.401740 systemd[1]: Stopped systemd-resolved.service. May 15 10:16:05.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.407604 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 10:16:05.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.407704 systemd[1]: Stopped sysroot-boot.service. May 15 10:16:05.409154 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 10:16:05.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.409273 systemd[1]: Stopped systemd-udevd.service. May 15 10:16:05.410682 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 10:16:05.410767 systemd[1]: Stopped network-cleanup.service. May 15 10:16:05.411710 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 10:16:05.411742 systemd[1]: Closed systemd-udevd-control.socket. May 15 10:16:05.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:05.413098 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 10:16:05.413126 systemd[1]: Closed systemd-udevd-kernel.socket. May 15 10:16:05.414322 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 10:16:05.414366 systemd[1]: Stopped dracut-pre-udev.service. May 15 10:16:05.415508 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 10:16:05.415549 systemd[1]: Stopped dracut-cmdline.service. May 15 10:16:05.417038 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 10:16:05.417072 systemd[1]: Stopped dracut-cmdline-ask.service. May 15 10:16:05.418439 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 10:16:05.418491 systemd[1]: Stopped initrd-setup-root.service. May 15 10:16:05.420312 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 15 10:16:05.421108 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 10:16:05.421161 systemd[1]: Stopped systemd-vconsole-setup.service. May 15 10:16:05.426442 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. 
May 15 10:16:05.426566 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 15 10:16:05.427380 systemd[1]: Reached target initrd-switch-root.target. May 15 10:16:05.429355 systemd[1]: Starting initrd-switch-root.service... May 15 10:16:05.436067 systemd[1]: Switching root. May 15 10:16:05.453928 iscsid[744]: iscsid shutting down. May 15 10:16:05.454631 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). May 15 10:16:05.454680 systemd-journald[291]: Journal stopped May 15 10:16:07.443268 kernel: SELinux: Class mctp_socket not defined in policy. May 15 10:16:07.443323 kernel: SELinux: Class anon_inode not defined in policy. May 15 10:16:07.443334 kernel: SELinux: the above unknown classes and permissions will be allowed May 15 10:16:07.443344 kernel: SELinux: policy capability network_peer_controls=1 May 15 10:16:07.443358 kernel: SELinux: policy capability open_perms=1 May 15 10:16:07.443368 kernel: SELinux: policy capability extended_socket_class=1 May 15 10:16:07.443381 kernel: SELinux: policy capability always_check_network=0 May 15 10:16:07.443391 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 10:16:07.443400 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 10:16:07.443415 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 10:16:07.443425 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 10:16:07.443435 systemd[1]: Successfully loaded SELinux policy in 36.922ms. May 15 10:16:07.443451 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.364ms. May 15 10:16:07.443479 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 15 10:16:07.443491 systemd[1]: Detected virtualization kvm. May 15 10:16:07.443501 systemd[1]: Detected architecture arm64. May 15 10:16:07.443520 systemd[1]: Detected first boot. May 15 10:16:07.443532 systemd[1]: Initializing machine ID from VM UUID. May 15 10:16:07.443543 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 15 10:16:07.443552 systemd[1]: Populated /etc with preset unit settings. May 15 10:16:07.443563 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:16:07.443574 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:16:07.443586 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
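The locksmithd.service warnings above are systemd flagging legacy cgroup directives in the shipped unit; the current equivalents are CPUWeight= and MemoryMax=, which can be supplied from a drop-in without touching the vendor file (the warning itself persists until the shipped unit is updated). A sketch with placeholder values, since the unit's real limits are not visible in this log:

    # /etc/systemd/system/locksmithd.service.d/override.conf  (example drop-in; values are illustrative)
    [Service]
    CPUWeight=100
    MemoryMax=512M

followed by "systemctl daemon-reload" so the drop-in takes effect.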
May 15 10:16:07.443601 kernel: kauditd_printk_skb: 78 callbacks suppressed May 15 10:16:07.443611 kernel: audit: type=1334 audit(1747304167.315:82): prog-id=12 op=LOAD May 15 10:16:07.443621 kernel: audit: type=1334 audit(1747304167.315:83): prog-id=3 op=UNLOAD May 15 10:16:07.443631 kernel: audit: type=1334 audit(1747304167.316:84): prog-id=13 op=LOAD May 15 10:16:07.443640 kernel: audit: type=1334 audit(1747304167.316:85): prog-id=14 op=LOAD May 15 10:16:07.443649 kernel: audit: type=1334 audit(1747304167.316:86): prog-id=4 op=UNLOAD May 15 10:16:07.443659 kernel: audit: type=1334 audit(1747304167.316:87): prog-id=5 op=UNLOAD May 15 10:16:07.443669 kernel: audit: type=1334 audit(1747304167.317:88): prog-id=15 op=LOAD May 15 10:16:07.443678 kernel: audit: type=1334 audit(1747304167.317:89): prog-id=12 op=UNLOAD May 15 10:16:07.443688 kernel: audit: type=1334 audit(1747304167.317:90): prog-id=16 op=LOAD May 15 10:16:07.443700 systemd[1]: iscsid.service: Deactivated successfully. May 15 10:16:07.443712 kernel: audit: type=1334 audit(1747304167.318:91): prog-id=17 op=LOAD May 15 10:16:07.443722 systemd[1]: Stopped iscsid.service. May 15 10:16:07.443733 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 15 10:16:07.443743 systemd[1]: Stopped initrd-switch-root.service. May 15 10:16:07.443754 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 15 10:16:07.443764 systemd[1]: Created slice system-addon\x2dconfig.slice. May 15 10:16:07.443775 systemd[1]: Created slice system-addon\x2drun.slice. May 15 10:16:07.443785 systemd[1]: Created slice system-getty.slice. May 15 10:16:07.443796 systemd[1]: Created slice system-modprobe.slice. May 15 10:16:07.443807 systemd[1]: Created slice system-serial\x2dgetty.slice. May 15 10:16:07.443817 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 15 10:16:07.443828 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 15 10:16:07.443838 systemd[1]: Created slice user.slice. May 15 10:16:07.443848 systemd[1]: Started systemd-ask-password-console.path. May 15 10:16:07.443859 systemd[1]: Started systemd-ask-password-wall.path. May 15 10:16:07.443869 systemd[1]: Set up automount boot.automount. May 15 10:16:07.443880 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 15 10:16:07.443891 systemd[1]: Stopped target initrd-switch-root.target. May 15 10:16:07.443902 systemd[1]: Stopped target initrd-fs.target. May 15 10:16:07.443912 systemd[1]: Stopped target initrd-root-fs.target. May 15 10:16:07.443922 systemd[1]: Reached target integritysetup.target. May 15 10:16:07.443934 systemd[1]: Reached target remote-cryptsetup.target. May 15 10:16:07.443945 systemd[1]: Reached target remote-fs.target. May 15 10:16:07.443956 systemd[1]: Reached target slices.target. May 15 10:16:07.443966 systemd[1]: Reached target swap.target. May 15 10:16:07.443977 systemd[1]: Reached target torcx.target. May 15 10:16:07.443988 systemd[1]: Reached target veritysetup.target. May 15 10:16:07.443998 systemd[1]: Listening on systemd-coredump.socket. May 15 10:16:07.444008 systemd[1]: Listening on systemd-initctl.socket. May 15 10:16:07.444018 systemd[1]: Listening on systemd-networkd.socket. May 15 10:16:07.444028 systemd[1]: Listening on systemd-udevd-control.socket. May 15 10:16:07.444039 systemd[1]: Listening on systemd-udevd-kernel.socket. May 15 10:16:07.444050 systemd[1]: Listening on systemd-userdbd.socket. May 15 10:16:07.444060 systemd[1]: Mounting dev-hugepages.mount... 
May 15 10:16:07.444071 systemd[1]: Mounting dev-mqueue.mount... May 15 10:16:07.444081 systemd[1]: Mounting media.mount... May 15 10:16:07.444091 systemd[1]: Mounting sys-kernel-debug.mount... May 15 10:16:07.444102 systemd[1]: Mounting sys-kernel-tracing.mount... May 15 10:16:07.444112 systemd[1]: Mounting tmp.mount... May 15 10:16:07.444122 systemd[1]: Starting flatcar-tmpfiles.service... May 15 10:16:07.444132 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:07.444144 systemd[1]: Starting kmod-static-nodes.service... May 15 10:16:07.444154 systemd[1]: Starting modprobe@configfs.service... May 15 10:16:07.444166 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:07.444178 systemd[1]: Starting modprobe@drm.service... May 15 10:16:07.444188 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:07.444198 systemd[1]: Starting modprobe@fuse.service... May 15 10:16:07.444208 systemd[1]: Starting modprobe@loop.service... May 15 10:16:07.444219 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 10:16:07.444230 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 15 10:16:07.444241 systemd[1]: Stopped systemd-fsck-root.service. May 15 10:16:07.444252 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 15 10:16:07.444262 systemd[1]: Stopped systemd-fsck-usr.service. May 15 10:16:07.444272 kernel: loop: module loaded May 15 10:16:07.444282 systemd[1]: Stopped systemd-journald.service. May 15 10:16:07.444292 systemd[1]: Starting systemd-journald.service... May 15 10:16:07.444303 kernel: fuse: init (API version 7.34) May 15 10:16:07.444313 systemd[1]: Starting systemd-modules-load.service... May 15 10:16:07.444323 systemd[1]: Starting systemd-network-generator.service... May 15 10:16:07.444335 systemd[1]: Starting systemd-remount-fs.service... May 15 10:16:07.444345 systemd[1]: Starting systemd-udev-trigger.service... May 15 10:16:07.444356 systemd[1]: verity-setup.service: Deactivated successfully. May 15 10:16:07.444370 systemd[1]: Stopped verity-setup.service. May 15 10:16:07.444380 systemd[1]: Mounted dev-hugepages.mount. May 15 10:16:07.444391 systemd[1]: Mounted dev-mqueue.mount. May 15 10:16:07.444402 systemd[1]: Mounted media.mount. May 15 10:16:07.444412 systemd[1]: Mounted sys-kernel-debug.mount. May 15 10:16:07.444426 systemd-journald[997]: Journal started May 15 10:16:07.444474 systemd-journald[997]: Runtime Journal (/run/log/journal/e417184722fe4474a6d221eb8335b766) is 6.0M, max 48.7M, 42.6M free. 
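The systemd-journald line above reports the runtime journal's size and cap under /run/log/journal. Those caps are tunable through journald.conf; a sketch of a drop-in, with example values only:

    # /etc/systemd/journald.conf.d/10-size.conf  (hypothetical drop-in; sizes are examples)
    [Journal]
    RuntimeMaxUse=64M
    SystemMaxUse=256M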
May 15 10:16:05.523000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 10:16:05.590000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:16:05.590000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 15 10:16:05.590000 audit: BPF prog-id=10 op=LOAD May 15 10:16:05.590000 audit: BPF prog-id=10 op=UNLOAD May 15 10:16:05.590000 audit: BPF prog-id=11 op=LOAD May 15 10:16:05.590000 audit: BPF prog-id=11 op=UNLOAD May 15 10:16:05.640000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 15 10:16:05.640000 audit[931]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf040 a3=32 items=0 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:16:05.640000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:16:05.641000 audit[931]: AVC avc: denied { associate } for pid=931 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 15 10:16:05.641000 audit[931]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5979 a2=1ed a3=0 items=2 ppid=914 pid=931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:16:05.641000 audit: CWD cwd="/" May 15 10:16:05.641000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:16:05.641000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 15 10:16:05.641000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 15 10:16:07.315000 audit: BPF prog-id=12 op=LOAD May 15 10:16:07.315000 audit: BPF prog-id=3 op=UNLOAD May 15 10:16:07.316000 audit: BPF prog-id=13 op=LOAD May 15 10:16:07.316000 audit: BPF prog-id=14 op=LOAD May 15 10:16:07.316000 audit: BPF prog-id=4 op=UNLOAD May 15 10:16:07.316000 audit: BPF prog-id=5 op=UNLOAD May 15 10:16:07.317000 audit: BPF prog-id=15 op=LOAD May 15 10:16:07.317000 audit: BPF prog-id=12 op=UNLOAD May 15 10:16:07.317000 
audit: BPF prog-id=16 op=LOAD May 15 10:16:07.318000 audit: BPF prog-id=17 op=LOAD May 15 10:16:07.318000 audit: BPF prog-id=13 op=UNLOAD May 15 10:16:07.318000 audit: BPF prog-id=14 op=UNLOAD May 15 10:16:07.319000 audit: BPF prog-id=18 op=LOAD May 15 10:16:07.319000 audit: BPF prog-id=15 op=UNLOAD May 15 10:16:07.320000 audit: BPF prog-id=19 op=LOAD May 15 10:16:07.320000 audit: BPF prog-id=20 op=LOAD May 15 10:16:07.320000 audit: BPF prog-id=16 op=UNLOAD May 15 10:16:07.320000 audit: BPF prog-id=17 op=UNLOAD May 15 10:16:07.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.326000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.334000 audit: BPF prog-id=18 op=UNLOAD May 15 10:16:07.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.421000 audit: BPF prog-id=21 op=LOAD May 15 10:16:07.421000 audit: BPF prog-id=22 op=LOAD May 15 10:16:07.421000 audit: BPF prog-id=23 op=LOAD May 15 10:16:07.421000 audit: BPF prog-id=19 op=UNLOAD May 15 10:16:07.421000 audit: BPF prog-id=20 op=UNLOAD May 15 10:16:07.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:07.442000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 15 10:16:07.442000 audit[997]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=fffff3c87a10 a2=4000 a3=1 items=0 ppid=1 pid=997 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:16:07.442000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 15 10:16:05.638795 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:16:07.315008 systemd[1]: Queued start job for default target multi-user.target. May 15 10:16:05.639148 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 10:16:07.315020 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 15 10:16:05.639168 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 10:16:07.321535 systemd[1]: systemd-journald.service: Deactivated successfully. May 15 10:16:05.639202 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 15 10:16:05.639213 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="skipped missing lower profile" missing profile=oem May 15 10:16:07.446066 systemd[1]: Started systemd-journald.service. May 15 10:16:05.639246 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 15 10:16:05.639258 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 15 10:16:07.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:05.639500 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 15 10:16:05.639549 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 15 10:16:05.639562 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 15 10:16:05.640283 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 15 10:16:07.446452 systemd[1]: Mounted sys-kernel-tracing.mount. May 15 10:16:05.640322 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 15 10:16:05.640341 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.100: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.100 May 15 10:16:05.640355 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 15 10:16:05.640373 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.100: no such file or directory" path=/var/lib/torcx/store/3510.3.100 May 15 10:16:05.640387 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:05Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 15 10:16:07.079029 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:07Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:16:07.079297 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:07Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:16:07.079401 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:07Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:16:07.079603 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:07Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 15 10:16:07.079652 
/usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:07Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 15 10:16:07.079708 /usr/lib/systemd/system-generators/torcx-generator[931]: time="2025-05-15T10:16:07Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 15 10:16:07.447675 systemd[1]: Mounted tmp.mount. May 15 10:16:07.449319 systemd[1]: Finished kmod-static-nodes.service. May 15 10:16:07.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.450449 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 10:16:07.450671 systemd[1]: Finished modprobe@configfs.service. May 15 10:16:07.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.451812 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:16:07.451938 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:07.452000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.453046 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:16:07.453209 systemd[1]: Finished modprobe@drm.service. May 15 10:16:07.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.453000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.454377 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:07.454572 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:07.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.455797 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
May 15 10:16:07.455919 systemd[1]: Finished modprobe@fuse.service. May 15 10:16:07.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.456000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.457051 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:07.457166 systemd[1]: Finished modprobe@loop.service. May 15 10:16:07.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.459379 systemd[1]: Finished systemd-modules-load.service. May 15 10:16:07.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.460716 systemd[1]: Finished flatcar-tmpfiles.service. May 15 10:16:07.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.461920 systemd[1]: Finished systemd-network-generator.service. May 15 10:16:07.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.463291 systemd[1]: Finished systemd-remount-fs.service. May 15 10:16:07.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.464657 systemd[1]: Reached target network-pre.target. May 15 10:16:07.466767 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 15 10:16:07.468789 systemd[1]: Mounting sys-kernel-config.mount... May 15 10:16:07.469550 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 10:16:07.471049 systemd[1]: Starting systemd-hwdb-update.service... May 15 10:16:07.473173 systemd[1]: Starting systemd-journal-flush.service... May 15 10:16:07.474216 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:16:07.475231 systemd[1]: Starting systemd-random-seed.service... May 15 10:16:07.476212 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:16:07.477293 systemd[1]: Starting systemd-sysctl.service... May 15 10:16:07.479323 systemd[1]: Starting systemd-sysusers.service... 
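The modprobe@ entries above all come from a single template unit, modprobe@.service; the instance name after the '@' is the kernel module to load, so each finished instance corresponds to one module (configfs, dm_mod, drm, efi_pstore, fuse, loop). The same mechanism can be driven manually, for example:

    systemctl start modprobe@loop.service    # roughly equivalent to: modprobe loop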
May 15 10:16:07.479695 systemd-journald[997]: Time spent on flushing to /var/log/journal/e417184722fe4474a6d221eb8335b766 is 22.246ms for 981 entries. May 15 10:16:07.479695 systemd-journald[997]: System Journal (/var/log/journal/e417184722fe4474a6d221eb8335b766) is 8.0M, max 195.6M, 187.6M free. May 15 10:16:07.511601 systemd-journald[997]: Received client request to flush runtime journal. May 15 10:16:07.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.511000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.484277 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 15 10:16:07.485339 systemd[1]: Mounted sys-kernel-config.mount. May 15 10:16:07.489173 systemd[1]: Finished systemd-udev-trigger.service. May 15 10:16:07.512283 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 15 10:16:07.491342 systemd[1]: Starting systemd-udev-settle.service... May 15 10:16:07.494001 systemd[1]: Finished systemd-sysctl.service. May 15 10:16:07.498050 systemd[1]: Finished systemd-random-seed.service. May 15 10:16:07.499226 systemd[1]: Reached target first-boot-complete.target. May 15 10:16:07.510573 systemd[1]: Finished systemd-sysusers.service. May 15 10:16:07.512564 systemd[1]: Finished systemd-journal-flush.service. May 15 10:16:07.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.886762 systemd[1]: Finished systemd-hwdb-update.service. May 15 10:16:07.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.887000 audit: BPF prog-id=24 op=LOAD May 15 10:16:07.887000 audit: BPF prog-id=25 op=LOAD May 15 10:16:07.887000 audit: BPF prog-id=7 op=UNLOAD May 15 10:16:07.887000 audit: BPF prog-id=8 op=UNLOAD May 15 10:16:07.888765 systemd[1]: Starting systemd-udevd.service... May 15 10:16:07.907138 systemd-udevd[1034]: Using default interface naming scheme 'v252'. May 15 10:16:07.921440 systemd[1]: Started systemd-udevd.service. May 15 10:16:07.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:07.923000 audit: BPF prog-id=26 op=LOAD May 15 10:16:07.924164 systemd[1]: Starting systemd-networkd.service... 
May 15 10:16:07.931000 audit: BPF prog-id=27 op=LOAD May 15 10:16:07.931000 audit: BPF prog-id=28 op=LOAD May 15 10:16:07.931000 audit: BPF prog-id=29 op=LOAD May 15 10:16:07.932559 systemd[1]: Starting systemd-userdbd.service... May 15 10:16:07.952991 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 15 10:16:07.958094 systemd[1]: Started systemd-userdbd.service. May 15 10:16:07.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.016993 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 15 10:16:08.017106 systemd-networkd[1037]: lo: Link UP May 15 10:16:08.017110 systemd-networkd[1037]: lo: Gained carrier May 15 10:16:08.017560 systemd-networkd[1037]: Enumeration completed May 15 10:16:08.017672 systemd-networkd[1037]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 10:16:08.017797 systemd[1]: Started systemd-networkd.service. May 15 10:16:08.017000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.018941 systemd-networkd[1037]: eth0: Link UP May 15 10:16:08.018952 systemd-networkd[1037]: eth0: Gained carrier May 15 10:16:08.041636 systemd-networkd[1037]: eth0: DHCPv4 address 10.0.0.71/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 10:16:08.043932 systemd[1]: Finished systemd-udev-settle.service. May 15 10:16:08.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.045852 systemd[1]: Starting lvm2-activation-early.service... May 15 10:16:08.062726 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:16:08.089333 systemd[1]: Finished lvm2-activation-early.service. May 15 10:16:08.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.090407 systemd[1]: Reached target cryptsetup.target. May 15 10:16:08.092546 systemd[1]: Starting lvm2-activation.service... May 15 10:16:08.096113 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 10:16:08.129927 systemd[1]: Finished lvm2-activation.service. May 15 10:16:08.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.130900 systemd[1]: Reached target local-fs-pre.target. May 15 10:16:08.131749 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 10:16:08.131783 systemd[1]: Reached target local-fs.target. May 15 10:16:08.132546 systemd[1]: Reached target machines.target. May 15 10:16:08.134530 systemd[1]: Starting ldconfig.service... May 15 10:16:08.135530 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
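The systemd-networkd lines above show eth0 being matched by the catch-all zz-default.network shipped under /usr/lib/systemd/network and acquiring 10.0.0.71/16 over DHCPv4. A minimal .network file of that shape (the shipped file may carry additional [DHCP] options not shown here):

    [Match]
    Name=*

    [Network]
    DHCP=yes

The resulting lease and per-link state can be inspected with "networkctl status eth0".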
May 15 10:16:08.135589 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:08.136621 systemd[1]: Starting systemd-boot-update.service... May 15 10:16:08.138383 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 15 10:16:08.140826 systemd[1]: Starting systemd-machine-id-commit.service... May 15 10:16:08.143197 systemd[1]: Starting systemd-sysext.service... May 15 10:16:08.144243 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1070 (bootctl) May 15 10:16:08.145707 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 15 10:16:08.149642 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 15 10:16:08.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.164838 systemd[1]: Unmounting usr-share-oem.mount... May 15 10:16:08.168192 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 15 10:16:08.168397 systemd[1]: Unmounted usr-share-oem.mount. May 15 10:16:08.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.247963 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 10:16:08.251181 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) May 15 10:16:08.251181 systemd-fsck[1079]: /dev/vda1: 236 files, 117182/258078 clusters May 15 10:16:08.249302 systemd[1]: Finished systemd-machine-id-commit.service. May 15 10:16:08.251505 kernel: loop0: detected capacity change from 0 to 201592 May 15 10:16:08.252854 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 15 10:16:08.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.256610 systemd[1]: Mounting boot.mount... May 15 10:16:08.263857 systemd[1]: Mounted boot.mount. May 15 10:16:08.264471 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 10:16:08.271758 systemd[1]: Finished systemd-boot-update.service. May 15 10:16:08.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.281506 kernel: loop1: detected capacity change from 0 to 201592 May 15 10:16:08.288288 (sd-sysext)[1085]: Using extensions 'kubernetes'. May 15 10:16:08.288696 (sd-sysext)[1085]: Merged extensions into '/usr'. May 15 10:16:08.308786 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:08.310249 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:08.312351 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:08.314554 systemd[1]: Starting modprobe@loop.service... May 15 10:16:08.315380 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
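The (sd-sysext) lines above show the kubernetes image from /etc/extensions being overlaid onto /usr by systemd-sysext. Once the system is up, the merge state can be inspected or toggled from a shell:

    systemd-sysext status     # list known extension images and whether they are merged
    systemd-sysext unmerge    # drop the overlay again
    systemd-sysext merge      # re-apply images from /etc/extensions and /run/extensions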
May 15 10:16:08.315551 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:08.316323 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:16:08.316454 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:08.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.318047 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:08.318234 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:08.319000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.319784 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:08.319901 systemd[1]: Finished modprobe@loop.service. May 15 10:16:08.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.321337 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:16:08.321455 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:16:08.363984 ldconfig[1069]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 10:16:08.367214 systemd[1]: Finished ldconfig.service. May 15 10:16:08.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.440634 systemd[1]: Mounting usr-share-oem.mount... May 15 10:16:08.445794 systemd[1]: Mounted usr-share-oem.mount. May 15 10:16:08.447706 systemd[1]: Finished systemd-sysext.service. May 15 10:16:08.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.449851 systemd[1]: Starting ensure-sysext.service... May 15 10:16:08.451588 systemd[1]: Starting systemd-tmpfiles-setup.service... May 15 10:16:08.456033 systemd[1]: Reloading. 
May 15 10:16:08.465181 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 15 10:16:08.467504 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 10:16:08.470494 systemd-tmpfiles[1092]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 10:16:08.498164 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-05-15T10:16:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:16:08.500199 /usr/lib/systemd/system-generators/torcx-generator[1112]: time="2025-05-15T10:16:08Z" level=info msg="torcx already run" May 15 10:16:08.553718 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:16:08.553739 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:16:08.569595 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:16:08.611000 audit: BPF prog-id=30 op=LOAD May 15 10:16:08.611000 audit: BPF prog-id=31 op=LOAD May 15 10:16:08.611000 audit: BPF prog-id=24 op=UNLOAD May 15 10:16:08.611000 audit: BPF prog-id=25 op=UNLOAD May 15 10:16:08.612000 audit: BPF prog-id=32 op=LOAD May 15 10:16:08.612000 audit: BPF prog-id=27 op=UNLOAD May 15 10:16:08.612000 audit: BPF prog-id=33 op=LOAD May 15 10:16:08.612000 audit: BPF prog-id=34 op=LOAD May 15 10:16:08.612000 audit: BPF prog-id=28 op=UNLOAD May 15 10:16:08.612000 audit: BPF prog-id=29 op=UNLOAD May 15 10:16:08.614000 audit: BPF prog-id=35 op=LOAD May 15 10:16:08.614000 audit: BPF prog-id=26 op=UNLOAD May 15 10:16:08.614000 audit: BPF prog-id=36 op=LOAD May 15 10:16:08.614000 audit: BPF prog-id=21 op=UNLOAD May 15 10:16:08.614000 audit: BPF prog-id=37 op=LOAD May 15 10:16:08.614000 audit: BPF prog-id=38 op=LOAD May 15 10:16:08.614000 audit: BPF prog-id=22 op=UNLOAD May 15 10:16:08.614000 audit: BPF prog-id=23 op=UNLOAD May 15 10:16:08.617252 systemd[1]: Finished systemd-tmpfiles-setup.service. May 15 10:16:08.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.621430 systemd[1]: Starting audit-rules.service... May 15 10:16:08.623289 systemd[1]: Starting clean-ca-certificates.service... May 15 10:16:08.625997 systemd[1]: Starting systemd-journal-catalog-update.service... May 15 10:16:08.628000 audit: BPF prog-id=39 op=LOAD May 15 10:16:08.632690 systemd[1]: Starting systemd-resolved.service... May 15 10:16:08.634000 audit: BPF prog-id=40 op=LOAD May 15 10:16:08.635338 systemd[1]: Starting systemd-timesyncd.service... May 15 10:16:08.637242 systemd[1]: Starting systemd-update-utmp.service... May 15 10:16:08.638702 systemd[1]: Finished clean-ca-certificates.service. 
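Among the units queued above is systemd-timesyncd, which reads its NTP servers from timesyncd.conf; a hedged sketch of a drop-in (server names are examples only; the servers actually used on this host are not shown in the log):

    # /etc/systemd/timesyncd.conf.d/ntp.conf  (hypothetical drop-in)
    [Time]
    NTP=0.pool.ntp.org 1.pool.ntp.org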
May 15 10:16:08.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.641947 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:16:08.645000 audit[1162]: SYSTEM_BOOT pid=1162 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 15 10:16:08.646216 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:08.647502 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:08.650751 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:08.653058 systemd[1]: Starting modprobe@loop.service... May 15 10:16:08.653907 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:16:08.654053 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:08.654154 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:16:08.655045 systemd[1]: Finished systemd-journal-catalog-update.service. May 15 10:16:08.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.656544 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:16:08.656683 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:08.657000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.657950 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:08.658069 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:08.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.659391 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:08.659658 systemd[1]: Finished modprobe@loop.service. May 15 10:16:08.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 15 10:16:08.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.664335 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:08.665868 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:08.667842 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:08.669855 systemd[1]: Starting modprobe@loop.service... May 15 10:16:08.670674 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:16:08.670825 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:08.672316 systemd[1]: Starting systemd-update-done.service... May 15 10:16:08.673342 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:16:08.675000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.674555 systemd[1]: Finished systemd-update-utmp.service. May 15 10:16:08.675906 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:16:08.676018 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:08.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.677372 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:08.677519 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:08.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.678919 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:08.679037 systemd[1]: Finished modprobe@loop.service. May 15 10:16:08.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.680399 systemd[1]: Finished systemd-update-done.service. 
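[annotation] The repeating modprobe@dm_mod / modprobe@efi_pstore / modprobe@loop start-stop pairs above are instances of systemd's templated modprobe@.service, pulled in again each time ensure-sysext re-evaluates its dependencies; the unit is a oneshot that exits as soon as the module is loaded, hence the back-to-back SERVICE_START/SERVICE_STOP audit records. Roughly, the upstream template looks like this (simplified sketch, not copied from this image):

    # modprobe@.service (simplified sketch)
    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no

    [Service]
    Type=oneshot
    # %i expands to the instance name, e.g. dm_mod for modprobe@dm_mod.service;
    # the leading "-" keeps a missing module from failing the unit
    ExecStart=-/sbin/modprobe -abq %i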
May 15 10:16:08.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 15 10:16:08.685005 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 15 10:16:08.686925 systemd[1]: Starting modprobe@dm_mod.service... May 15 10:16:08.689125 systemd[1]: Starting modprobe@drm.service... May 15 10:16:08.695705 systemd[1]: Starting modprobe@efi_pstore.service... May 15 10:16:08.698124 systemd[1]: Starting modprobe@loop.service... May 15 10:16:08.698988 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 15 10:16:08.699123 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:08.699000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 15 10:16:08.699000 audit[1180]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdbf84670 a2=420 a3=0 items=0 ppid=1151 pid=1180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 15 10:16:08.699000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 15 10:16:08.700334 augenrules[1180]: No rules May 15 10:16:08.700725 systemd[1]: Starting systemd-networkd-wait-online.service... May 15 10:16:08.701926 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 10:16:08.703350 systemd[1]: Finished audit-rules.service. May 15 10:16:08.704688 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 10:16:08.704813 systemd[1]: Finished modprobe@dm_mod.service. May 15 10:16:08.705927 systemd[1]: Started systemd-timesyncd.service. May 15 10:16:08.305503 systemd-timesyncd[1161]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 10:16:08.351556 systemd-journald[997]: Time jumped backwards, rotating. May 15 10:16:08.305567 systemd-timesyncd[1161]: Initial clock synchronization to Thu 2025-05-15 10:16:08.305414 UTC. May 15 10:16:08.307176 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 10:16:08.307297 systemd[1]: Finished modprobe@drm.service. May 15 10:16:08.309012 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 10:16:08.309205 systemd[1]: Finished modprobe@efi_pstore.service. May 15 10:16:08.310665 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 10:16:08.310767 systemd[1]: Finished modprobe@loop.service. May 15 10:16:08.312223 systemd[1]: Reached target time-set.target. May 15 10:16:08.313049 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 10:16:08.313092 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 15 10:16:08.313411 systemd[1]: Finished ensure-sysext.service. May 15 10:16:08.322812 systemd-resolved[1155]: Positive Trust Anchors: May 15 10:16:08.322820 systemd-resolved[1155]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 10:16:08.322846 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 15 10:16:08.360890 systemd-resolved[1155]: Defaulting to hostname 'linux'. May 15 10:16:08.362400 systemd[1]: Started systemd-resolved.service. May 15 10:16:08.363365 systemd[1]: Reached target network.target. May 15 10:16:08.364178 systemd[1]: Reached target nss-lookup.target. May 15 10:16:08.365063 systemd[1]: Reached target sysinit.target. May 15 10:16:08.365962 systemd[1]: Started motdgen.path. May 15 10:16:08.366693 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 15 10:16:08.367928 systemd[1]: Started logrotate.timer. May 15 10:16:08.368802 systemd[1]: Started mdadm.timer. May 15 10:16:08.369494 systemd[1]: Started systemd-tmpfiles-clean.timer. May 15 10:16:08.370337 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 10:16:08.370372 systemd[1]: Reached target paths.target. May 15 10:16:08.371148 systemd[1]: Reached target timers.target. May 15 10:16:08.372289 systemd[1]: Listening on dbus.socket. May 15 10:16:08.374290 systemd[1]: Starting docker.socket... May 15 10:16:08.378035 systemd[1]: Listening on sshd.socket. May 15 10:16:08.378951 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:08.379403 systemd[1]: Listening on docker.socket. May 15 10:16:08.380358 systemd[1]: Reached target sockets.target. May 15 10:16:08.381277 systemd[1]: Reached target basic.target. May 15 10:16:08.382131 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:16:08.382180 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 15 10:16:08.383217 systemd[1]: Starting containerd.service... May 15 10:16:08.385118 systemd[1]: Starting dbus.service... May 15 10:16:08.386951 systemd[1]: Starting enable-oem-cloudinit.service... May 15 10:16:08.389402 systemd[1]: Starting extend-filesystems.service... May 15 10:16:08.390436 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 15 10:16:08.392130 systemd[1]: Starting motdgen.service... May 15 10:16:08.395391 systemd[1]: Starting ssh-key-proc-cmdline.service... May 15 10:16:08.399538 jq[1195]: false May 15 10:16:08.399768 systemd[1]: Starting sshd-keygen.service... May 15 10:16:08.412869 systemd[1]: Starting systemd-logind.service... May 15 10:16:08.413747 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 15 10:16:08.413888 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
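[annotation] In the audit-rules entries a few lines back, augenrules reports "No rules" and the SYSCALL/PROCTITLE pair records auditctl loading that (empty) rules file; the PROCTITLE field is the command line, hex-encoded with NUL separators. It can be decoded straight from the log, for example:

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\0' ' '
    # prints: /sbin/auditctl -R /etc/audit/audit.rules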
May 15 10:16:08.414883 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 10:16:08.415922 systemd[1]: Starting update-engine.service... May 15 10:16:08.419261 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 15 10:16:08.421287 extend-filesystems[1196]: Found loop1 May 15 10:16:08.423075 extend-filesystems[1196]: Found vda May 15 10:16:08.423075 extend-filesystems[1196]: Found vda1 May 15 10:16:08.423075 extend-filesystems[1196]: Found vda2 May 15 10:16:08.423075 extend-filesystems[1196]: Found vda3 May 15 10:16:08.423075 extend-filesystems[1196]: Found usr May 15 10:16:08.423075 extend-filesystems[1196]: Found vda4 May 15 10:16:08.423075 extend-filesystems[1196]: Found vda6 May 15 10:16:08.423075 extend-filesystems[1196]: Found vda7 May 15 10:16:08.423075 extend-filesystems[1196]: Found vda9 May 15 10:16:08.423075 extend-filesystems[1196]: Checking size of /dev/vda9 May 15 10:16:08.454401 jq[1213]: true May 15 10:16:08.448392 dbus-daemon[1194]: [system] SELinux support is enabled May 15 10:16:08.423944 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 10:16:08.424294 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 15 10:16:08.455052 jq[1216]: true May 15 10:16:08.424675 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 10:16:08.424813 systemd[1]: Finished ssh-key-proc-cmdline.service. May 15 10:16:08.425856 systemd[1]: motdgen.service: Deactivated successfully. May 15 10:16:08.425995 systemd[1]: Finished motdgen.service. May 15 10:16:08.448663 systemd[1]: Started dbus.service. May 15 10:16:08.452141 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 10:16:08.452180 systemd[1]: Reached target system-config.target. May 15 10:16:08.452873 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 10:16:08.452922 systemd[1]: Reached target user-config.target. May 15 10:16:08.464625 extend-filesystems[1196]: Resized partition /dev/vda9 May 15 10:16:08.471860 extend-filesystems[1231]: resize2fs 1.46.5 (30-Dec-2021) May 15 10:16:08.491478 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 10:16:08.500753 update_engine[1210]: I0515 10:16:08.500499 1210 main.cc:92] Flatcar Update Engine starting May 15 10:16:08.511015 update_engine[1210]: I0515 10:16:08.503626 1210 update_check_scheduler.cc:74] Next update check in 11m21s May 15 10:16:08.503578 systemd[1]: Started update-engine.service. May 15 10:16:08.509009 systemd[1]: Started locksmithd.service. May 15 10:16:08.509952 systemd-logind[1208]: Watching system buttons on /dev/input/event0 (Power Button) May 15 10:16:08.510129 systemd-logind[1208]: New seat seat0. May 15 10:16:08.511411 systemd[1]: Started systemd-logind.service. May 15 10:16:08.526676 env[1217]: time="2025-05-15T10:16:08.526618272Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 15 10:16:08.543276 env[1217]: time="2025-05-15T10:16:08.543225912Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 15 10:16:08.543478 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 10:16:08.607568 locksmithd[1244]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 10:16:08.612009 extend-filesystems[1231]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 10:16:08.612009 extend-filesystems[1231]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 10:16:08.612009 extend-filesystems[1231]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 10:16:08.616323 extend-filesystems[1196]: Resized filesystem in /dev/vda9 May 15 10:16:08.617100 env[1217]: time="2025-05-15T10:16:08.616093672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 10:16:08.612933 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 10:16:08.617186 bash[1243]: Updated "/home/core/.ssh/authorized_keys" May 15 10:16:08.613169 systemd[1]: Finished extend-filesystems.service. May 15 10:16:08.616405 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 15 10:16:08.618253 env[1217]: time="2025-05-15T10:16:08.618213552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 10:16:08.618253 env[1217]: time="2025-05-15T10:16:08.618248632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 15 10:16:08.618493 env[1217]: time="2025-05-15T10:16:08.618472352Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:16:08.618529 env[1217]: time="2025-05-15T10:16:08.618494192Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 10:16:08.618529 env[1217]: time="2025-05-15T10:16:08.618517632Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 15 10:16:08.618570 env[1217]: time="2025-05-15T10:16:08.618529912Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 10:16:08.618734 env[1217]: time="2025-05-15T10:16:08.618606992Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 10:16:08.618912 env[1217]: time="2025-05-15T10:16:08.618891512Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 10:16:08.619032 env[1217]: time="2025-05-15T10:16:08.619014552Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 10:16:08.619054 env[1217]: time="2025-05-15T10:16:08.619033392Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 May 15 10:16:08.619098 env[1217]: time="2025-05-15T10:16:08.619084472Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 15 10:16:08.619132 env[1217]: time="2025-05-15T10:16:08.619100552Z" level=info msg="metadata content store policy set" policy=shared May 15 10:16:08.622351 env[1217]: time="2025-05-15T10:16:08.622309432Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 10:16:08.622351 env[1217]: time="2025-05-15T10:16:08.622343912Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 10:16:08.622351 env[1217]: time="2025-05-15T10:16:08.622356952Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 10:16:08.622499 env[1217]: time="2025-05-15T10:16:08.622398952Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 10:16:08.622499 env[1217]: time="2025-05-15T10:16:08.622413232Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 10:16:08.622499 env[1217]: time="2025-05-15T10:16:08.622426472Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 10:16:08.622499 env[1217]: time="2025-05-15T10:16:08.622440312Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 10:16:08.622869 env[1217]: time="2025-05-15T10:16:08.622832512Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 10:16:08.622869 env[1217]: time="2025-05-15T10:16:08.622860432Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 15 10:16:08.622921 env[1217]: time="2025-05-15T10:16:08.622876152Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 10:16:08.622921 env[1217]: time="2025-05-15T10:16:08.622889752Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 10:16:08.622921 env[1217]: time="2025-05-15T10:16:08.622902792Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 10:16:08.623060 env[1217]: time="2025-05-15T10:16:08.623032872Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 10:16:08.623154 env[1217]: time="2025-05-15T10:16:08.623130512Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 10:16:08.624400 env[1217]: time="2025-05-15T10:16:08.624365352Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 10:16:08.624470 env[1217]: time="2025-05-15T10:16:08.624407912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624470 env[1217]: time="2025-05-15T10:16:08.624421792Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 10:16:08.624616 env[1217]: time="2025-05-15T10:16:08.624599832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 15 10:16:08.624641 env[1217]: time="2025-05-15T10:16:08.624618832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624641 env[1217]: time="2025-05-15T10:16:08.624631232Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624711 env[1217]: time="2025-05-15T10:16:08.624642472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624739 env[1217]: time="2025-05-15T10:16:08.624714152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624739 env[1217]: time="2025-05-15T10:16:08.624727072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624779 env[1217]: time="2025-05-15T10:16:08.624737792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624779 env[1217]: time="2025-05-15T10:16:08.624749152Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624779 env[1217]: time="2025-05-15T10:16:08.624762712Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 10:16:08.624913 env[1217]: time="2025-05-15T10:16:08.624896672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624939 env[1217]: time="2025-05-15T10:16:08.624918032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624939 env[1217]: time="2025-05-15T10:16:08.624930352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 10:16:08.624975 env[1217]: time="2025-05-15T10:16:08.624957232Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 10:16:08.624994 env[1217]: time="2025-05-15T10:16:08.624971352Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 15 10:16:08.624994 env[1217]: time="2025-05-15T10:16:08.624982112Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 10:16:08.625078 env[1217]: time="2025-05-15T10:16:08.624999032Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 15 10:16:08.625078 env[1217]: time="2025-05-15T10:16:08.625033552Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 May 15 10:16:08.625263 env[1217]: time="2025-05-15T10:16:08.625215352Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.625272232Z" level=info msg="Connect containerd service" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.625305232Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.626159312Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.626558232Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.626595632Z" level=info msg=serving... 
address=/run/containerd/containerd.sock May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.626611952Z" level=info msg="Start subscribing containerd event" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.626664672Z" level=info msg="Start recovering state" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.626734032Z" level=info msg="Start event monitor" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.626745552Z" level=info msg="Start snapshots syncer" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.626755232Z" level=info msg="Start cni network conf syncer for default" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.626762432Z" level=info msg="Start streaming server" May 15 10:16:08.627782 env[1217]: time="2025-05-15T10:16:08.627543712Z" level=info msg="containerd successfully booted in 0.101728s" May 15 10:16:08.626710 systemd[1]: Started containerd.service. May 15 10:16:09.112704 systemd-networkd[1037]: eth0: Gained IPv6LL May 15 10:16:09.114413 systemd[1]: Finished systemd-networkd-wait-online.service. May 15 10:16:09.115417 systemd[1]: Reached target network-online.target. May 15 10:16:09.117843 systemd[1]: Starting kubelet.service... May 15 10:16:09.706681 sshd_keygen[1207]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 10:16:09.719407 systemd[1]: Started kubelet.service. May 15 10:16:09.733937 systemd[1]: Finished sshd-keygen.service. May 15 10:16:09.736197 systemd[1]: Starting issuegen.service... May 15 10:16:09.741363 systemd[1]: issuegen.service: Deactivated successfully. May 15 10:16:09.741588 systemd[1]: Finished issuegen.service. May 15 10:16:09.743632 systemd[1]: Starting systemd-user-sessions.service... May 15 10:16:09.751151 systemd[1]: Finished systemd-user-sessions.service. May 15 10:16:09.753790 systemd[1]: Started getty@tty1.service. May 15 10:16:09.756122 systemd[1]: Started serial-getty@ttyAMA0.service. May 15 10:16:09.757205 systemd[1]: Reached target getty.target. May 15 10:16:09.758000 systemd[1]: Reached target multi-user.target. May 15 10:16:09.760318 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 15 10:16:09.769724 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 15 10:16:09.769913 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 15 10:16:09.770991 systemd[1]: Startup finished in 616ms (kernel) + 3.898s (initrd) + 4.687s (userspace) = 9.201s. May 15 10:16:10.191317 kubelet[1262]: E0515 10:16:10.191217 1262 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 15 10:16:10.193330 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 15 10:16:10.193471 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 15 10:16:14.037975 systemd[1]: Created slice system-sshd.slice. May 15 10:16:14.039145 systemd[1]: Started sshd@0-10.0.0.71:22-10.0.0.1:39634.service. May 15 10:16:14.087416 sshd[1279]: Accepted publickey for core from 10.0.0.1 port 39634 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:14.089295 sshd[1279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:14.099754 systemd-logind[1208]: New session 1 of user core. 
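[annotation] The kubelet exit just above is expected at this stage: it is started before /var/lib/kubelet/config.yaml exists, fails with "no such file or directory", and comes back cleanly a few seconds later, presumably after provisioning (the install.sh run via sudo below) writes the file. For orientation, a minimal KubeletConfiguration of the sort that file carries could look like the sketch below; the values are illustrative except cgroupDriver and staticPodPath, which match the systemd cgroup driver and the /etc/kubernetes/manifests path seen later in this log:

    # /var/lib/kubelet/config.yaml (minimal illustrative sketch)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false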
May 15 10:16:14.100683 systemd[1]: Created slice user-500.slice. May 15 10:16:14.101753 systemd[1]: Starting user-runtime-dir@500.service... May 15 10:16:14.109728 systemd[1]: Finished user-runtime-dir@500.service. May 15 10:16:14.110999 systemd[1]: Starting user@500.service... May 15 10:16:14.113544 (systemd)[1282]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:14.172079 systemd[1282]: Queued start job for default target default.target. May 15 10:16:14.172601 systemd[1282]: Reached target paths.target. May 15 10:16:14.172634 systemd[1282]: Reached target sockets.target. May 15 10:16:14.172645 systemd[1282]: Reached target timers.target. May 15 10:16:14.172655 systemd[1282]: Reached target basic.target. May 15 10:16:14.172695 systemd[1282]: Reached target default.target. May 15 10:16:14.172725 systemd[1282]: Startup finished in 53ms. May 15 10:16:14.172792 systemd[1]: Started user@500.service. May 15 10:16:14.173797 systemd[1]: Started session-1.scope. May 15 10:16:14.224309 systemd[1]: Started sshd@1-10.0.0.71:22-10.0.0.1:39636.service. May 15 10:16:14.261599 sshd[1291]: Accepted publickey for core from 10.0.0.1 port 39636 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:14.262924 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:14.266354 systemd-logind[1208]: New session 2 of user core. May 15 10:16:14.267520 systemd[1]: Started session-2.scope. May 15 10:16:14.322788 sshd[1291]: pam_unix(sshd:session): session closed for user core May 15 10:16:14.326117 systemd[1]: Started sshd@2-10.0.0.71:22-10.0.0.1:39650.service. May 15 10:16:14.326627 systemd[1]: sshd@1-10.0.0.71:22-10.0.0.1:39636.service: Deactivated successfully. May 15 10:16:14.327226 systemd[1]: session-2.scope: Deactivated successfully. May 15 10:16:14.327787 systemd-logind[1208]: Session 2 logged out. Waiting for processes to exit. May 15 10:16:14.328838 systemd-logind[1208]: Removed session 2. May 15 10:16:14.363353 sshd[1296]: Accepted publickey for core from 10.0.0.1 port 39650 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:14.364585 sshd[1296]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:14.368848 systemd-logind[1208]: New session 3 of user core. May 15 10:16:14.369913 systemd[1]: Started session-3.scope. May 15 10:16:14.419121 sshd[1296]: pam_unix(sshd:session): session closed for user core May 15 10:16:14.421808 systemd[1]: sshd@2-10.0.0.71:22-10.0.0.1:39650.service: Deactivated successfully. May 15 10:16:14.422328 systemd[1]: session-3.scope: Deactivated successfully. May 15 10:16:14.423095 systemd-logind[1208]: Session 3 logged out. Waiting for processes to exit. May 15 10:16:14.425783 systemd[1]: Started sshd@3-10.0.0.71:22-10.0.0.1:39658.service. May 15 10:16:14.429016 systemd-logind[1208]: Removed session 3. May 15 10:16:14.464540 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 39658 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:14.465760 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:14.469521 systemd-logind[1208]: New session 4 of user core. May 15 10:16:14.469856 systemd[1]: Started session-4.scope. May 15 10:16:14.523153 sshd[1304]: pam_unix(sshd:session): session closed for user core May 15 10:16:14.526926 systemd[1]: Started sshd@4-10.0.0.71:22-10.0.0.1:39670.service. 
May 15 10:16:14.527438 systemd[1]: sshd@3-10.0.0.71:22-10.0.0.1:39658.service: Deactivated successfully. May 15 10:16:14.528061 systemd[1]: session-4.scope: Deactivated successfully. May 15 10:16:14.528599 systemd-logind[1208]: Session 4 logged out. Waiting for processes to exit. May 15 10:16:14.529286 systemd-logind[1208]: Removed session 4. May 15 10:16:14.564104 sshd[1309]: Accepted publickey for core from 10.0.0.1 port 39670 ssh2: RSA SHA256:I/C30/eWBhvgAgcCboY0f9pk+vr1TzGX+qBjeoJjilE May 15 10:16:14.565211 sshd[1309]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 15 10:16:14.568511 systemd-logind[1208]: New session 5 of user core. May 15 10:16:14.568918 systemd[1]: Started session-5.scope. May 15 10:16:14.628971 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 15 10:16:14.629192 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 15 10:16:14.640762 systemd[1]: Starting coreos-metadata.service... May 15 10:16:14.646955 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 10:16:14.647105 systemd[1]: Finished coreos-metadata.service. May 15 10:16:15.100146 systemd[1]: Stopped kubelet.service. May 15 10:16:15.102626 systemd[1]: Starting kubelet.service... May 15 10:16:15.122344 systemd[1]: Reloading. May 15 10:16:15.186737 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2025-05-15T10:16:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.100 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.100 /var/lib/torcx/store]" May 15 10:16:15.186769 /usr/lib/systemd/system-generators/torcx-generator[1372]: time="2025-05-15T10:16:15Z" level=info msg="torcx already run" May 15 10:16:15.343690 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 15 10:16:15.343713 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 15 10:16:15.359622 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 10:16:15.428710 systemd[1]: Started kubelet.service. May 15 10:16:15.430569 systemd[1]: Stopping kubelet.service... May 15 10:16:15.430967 systemd[1]: kubelet.service: Deactivated successfully. May 15 10:16:15.431129 systemd[1]: Stopped kubelet.service. May 15 10:16:15.432655 systemd[1]: Starting kubelet.service... May 15 10:16:15.518738 systemd[1]: Started kubelet.service. May 15 10:16:15.553773 kubelet[1416]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:16:15.553773 kubelet[1416]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 15 10:16:15.553773 kubelet[1416]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 10:16:15.554180 kubelet[1416]: I0515 10:16:15.553827 1416 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 10:16:16.235230 kubelet[1416]: I0515 10:16:16.235177 1416 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" May 15 10:16:16.235230 kubelet[1416]: I0515 10:16:16.235218 1416 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 10:16:16.235707 kubelet[1416]: I0515 10:16:16.235685 1416 server.go:954] "Client rotation is on, will bootstrap in background" May 15 10:16:16.294199 kubelet[1416]: I0515 10:16:16.294155 1416 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 15 10:16:16.301445 kubelet[1416]: E0515 10:16:16.301392 1416 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 15 10:16:16.301445 kubelet[1416]: I0515 10:16:16.301427 1416 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 15 10:16:16.304186 kubelet[1416]: I0515 10:16:16.304160 1416 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 15 10:16:16.305544 kubelet[1416]: I0515 10:16:16.305501 1416 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 15 10:16:16.305720 kubelet[1416]: I0515 10:16:16.305544 1416 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.71","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 15 10:16:16.305794 kubelet[1416]: I0515 10:16:16.305783 1416 topology_manager.go:138] "Creating topology manager with none policy" May 15 10:16:16.305794 kubelet[1416]: 
I0515 10:16:16.305793 1416 container_manager_linux.go:304] "Creating device plugin manager" May 15 10:16:16.306009 kubelet[1416]: I0515 10:16:16.305984 1416 state_mem.go:36] "Initialized new in-memory state store" May 15 10:16:16.317846 kubelet[1416]: I0515 10:16:16.317810 1416 kubelet.go:446] "Attempting to sync node with API server" May 15 10:16:16.317846 kubelet[1416]: I0515 10:16:16.317838 1416 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 15 10:16:16.317982 kubelet[1416]: I0515 10:16:16.317856 1416 kubelet.go:352] "Adding apiserver pod source" May 15 10:16:16.317982 kubelet[1416]: I0515 10:16:16.317913 1416 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 15 10:16:16.321714 kubelet[1416]: E0515 10:16:16.321679 1416 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:16.321839 kubelet[1416]: E0515 10:16:16.321825 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:16.323913 kubelet[1416]: I0515 10:16:16.323897 1416 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 15 10:16:16.324611 kubelet[1416]: I0515 10:16:16.324592 1416 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 15 10:16:16.324795 kubelet[1416]: W0515 10:16:16.324783 1416 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 15 10:16:16.328052 kubelet[1416]: I0515 10:16:16.328023 1416 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 15 10:16:16.328117 kubelet[1416]: I0515 10:16:16.328066 1416 server.go:1287] "Started kubelet" May 15 10:16:16.328906 kubelet[1416]: I0515 10:16:16.328869 1416 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 15 10:16:16.330355 kubelet[1416]: I0515 10:16:16.330265 1416 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 15 10:16:16.331533 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
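[annotation] The HardEvictionThresholds embedded in the NodeConfig dump above are the kubelet's standard hard-eviction defaults. Written as they would appear in a KubeletConfiguration file, the same values read:

    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"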
May 15 10:16:16.331581 kubelet[1416]: I0515 10:16:16.330855 1416 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 15 10:16:16.333614 kubelet[1416]: I0515 10:16:16.333349 1416 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 15 10:16:16.333614 kubelet[1416]: I0515 10:16:16.333483 1416 server.go:490] "Adding debug handlers to kubelet server" May 15 10:16:16.335097 kubelet[1416]: I0515 10:16:16.335071 1416 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 15 10:16:16.336410 kubelet[1416]: E0515 10:16:16.336393 1416 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" May 15 10:16:16.336534 kubelet[1416]: I0515 10:16:16.336522 1416 volume_manager.go:297] "Starting Kubelet Volume Manager" May 15 10:16:16.336785 kubelet[1416]: I0515 10:16:16.336764 1416 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 15 10:16:16.337059 kubelet[1416]: I0515 10:16:16.337043 1416 reconciler.go:26] "Reconciler: start to sync state" May 15 10:16:16.337338 kubelet[1416]: I0515 10:16:16.337307 1416 factory.go:221] Registration of the systemd container factory successfully May 15 10:16:16.337423 kubelet[1416]: I0515 10:16:16.337403 1416 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 15 10:16:16.340021 kubelet[1416]: I0515 10:16:16.339995 1416 factory.go:221] Registration of the containerd container factory successfully May 15 10:16:16.340106 kubelet[1416]: E0515 10:16:16.340078 1416 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 15 10:16:16.351381 kubelet[1416]: I0515 10:16:16.351356 1416 cpu_manager.go:221] "Starting CPU manager" policy="none" May 15 10:16:16.351514 kubelet[1416]: I0515 10:16:16.351498 1416 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 15 10:16:16.351576 kubelet[1416]: I0515 10:16:16.351566 1416 state_mem.go:36] "Initialized new in-memory state store" May 15 10:16:16.356526 kubelet[1416]: E0515 10:16:16.356474 1416 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.71\" not found" node="10.0.0.71" May 15 10:16:16.420184 kubelet[1416]: I0515 10:16:16.420154 1416 policy_none.go:49] "None policy: Start" May 15 10:16:16.420328 kubelet[1416]: I0515 10:16:16.420316 1416 memory_manager.go:186] "Starting memorymanager" policy="None" May 15 10:16:16.420387 kubelet[1416]: I0515 10:16:16.420378 1416 state_mem.go:35] "Initializing new in-memory state store" May 15 10:16:16.424586 systemd[1]: Created slice kubepods.slice. May 15 10:16:16.428688 systemd[1]: Created slice kubepods-burstable.slice. May 15 10:16:16.431408 systemd[1]: Created slice kubepods-besteffort.slice. 
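[annotation] kubepods.slice and its burstable/besteffort children created here are ordinary systemd slices; the kubelet delegates pod cgroups to them because the cgroup driver is systemd. Once pods are running they can be inspected with the usual tooling, for example:

    systemctl status kubepods.slice kubepods-burstable.slice kubepods-besteffort.slice
    systemd-cgls kubepods.slice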
May 15 10:16:16.436648 kubelet[1416]: E0515 10:16:16.436627 1416 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.0.0.71\" not found" May 15 10:16:16.439588 kubelet[1416]: I0515 10:16:16.439570 1416 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 10:16:16.439918 kubelet[1416]: I0515 10:16:16.439900 1416 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 10:16:16.440018 kubelet[1416]: I0515 10:16:16.439985 1416 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 10:16:16.440297 kubelet[1416]: I0515 10:16:16.440281 1416 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 10:16:16.441476 kubelet[1416]: E0515 10:16:16.441433 1416 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 15 10:16:16.441476 kubelet[1416]: E0515 10:16:16.441478 1416 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.71\" not found" May 15 10:16:16.495156 kubelet[1416]: I0515 10:16:16.494245 1416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 15 10:16:16.495240 kubelet[1416]: I0515 10:16:16.495218 1416 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 15 10:16:16.495240 kubelet[1416]: I0515 10:16:16.495238 1416 status_manager.go:227] "Starting to sync pod status with apiserver" May 15 10:16:16.495778 kubelet[1416]: I0515 10:16:16.495256 1416 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 15 10:16:16.495778 kubelet[1416]: I0515 10:16:16.495776 1416 kubelet.go:2388] "Starting kubelet main sync loop" May 15 10:16:16.495871 kubelet[1416]: E0515 10:16:16.495846 1416 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 15 10:16:16.541447 kubelet[1416]: I0515 10:16:16.541409 1416 kubelet_node_status.go:76] "Attempting to register node" node="10.0.0.71" May 15 10:16:16.551441 kubelet[1416]: I0515 10:16:16.551404 1416 kubelet_node_status.go:79] "Successfully registered node" node="10.0.0.71" May 15 10:16:16.655072 kubelet[1416]: I0515 10:16:16.655035 1416 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 15 10:16:16.655649 env[1217]: time="2025-05-15T10:16:16.655554592Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 15 10:16:16.655847 kubelet[1416]: I0515 10:16:16.655753 1416 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 15 10:16:17.044317 sudo[1313]: pam_unix(sudo:session): session closed for user root May 15 10:16:17.047708 sshd[1309]: pam_unix(sshd:session): session closed for user core May 15 10:16:17.051259 systemd[1]: sshd@4-10.0.0.71:22-10.0.0.1:39670.service: Deactivated successfully. May 15 10:16:17.051982 systemd[1]: session-5.scope: Deactivated successfully. May 15 10:16:17.052936 systemd-logind[1208]: Session 5 logged out. Waiting for processes to exit. May 15 10:16:17.054506 systemd-logind[1208]: Removed session 5. 
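[annotation] "No cni config template is specified, wait for other system components to drop the config" above, together with the earlier "no network config found in /etc/cni/net.d" error, just means the CNI directory is still empty at this point; the Cilium pod scheduled below is what eventually installs the real network config. Purely to illustrate the file format being waited for (a generic bridge example, not what Cilium writes), a conflist for the advertised 192.168.1.0/24 pod CIDR could look like:

    # /etc/cni/net.d/10-example.conflist (illustrative name and contents)
    {
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[ { "subnet": "192.168.1.0/24" } ]]
          }
        }
      ]
    }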
May 15 10:16:17.238477 kubelet[1416]: I0515 10:16:17.238286 1416 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 15 10:16:17.238623 kubelet[1416]: W0515 10:16:17.238603 1416 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 15 10:16:17.238658 kubelet[1416]: W0515 10:16:17.238651 1416 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 15 10:16:17.238683 kubelet[1416]: W0515 10:16:17.238674 1416 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 15 10:16:17.318752 kubelet[1416]: I0515 10:16:17.318329 1416 apiserver.go:52] "Watching apiserver" May 15 10:16:17.322801 kubelet[1416]: E0515 10:16:17.322774 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:17.329061 systemd[1]: Created slice kubepods-burstable-podffb4e6bb_8de9_41d1_bfcb_d7af27931a34.slice. May 15 10:16:17.338580 kubelet[1416]: I0515 10:16:17.338535 1416 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 15 10:16:17.343056 kubelet[1416]: I0515 10:16:17.343024 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-cgroup\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343056 kubelet[1416]: I0515 10:16:17.343057 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-etc-cni-netd\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343151 kubelet[1416]: I0515 10:16:17.343078 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9-xtables-lock\") pod \"kube-proxy-rp2jz\" (UID: \"0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9\") " pod="kube-system/kube-proxy-rp2jz" May 15 10:16:17.343629 kubelet[1416]: I0515 10:16:17.343602 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2w5s\" (UniqueName: \"kubernetes.io/projected/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-kube-api-access-s2w5s\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343700 kubelet[1416]: I0515 10:16:17.343663 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9-kube-proxy\") pod \"kube-proxy-rp2jz\" (UID: \"0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9\") " pod="kube-system/kube-proxy-rp2jz" May 15 10:16:17.343700 
kubelet[1416]: I0515 10:16:17.343685 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-hostproc\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343744 kubelet[1416]: I0515 10:16:17.343701 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-clustermesh-secrets\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343744 kubelet[1416]: I0515 10:16:17.343718 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-host-proc-sys-net\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343744 kubelet[1416]: I0515 10:16:17.343732 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-host-proc-sys-kernel\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343813 kubelet[1416]: I0515 10:16:17.343746 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-run\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343813 kubelet[1416]: I0515 10:16:17.343760 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-bpf-maps\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343813 kubelet[1416]: I0515 10:16:17.343774 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-lib-modules\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343813 kubelet[1416]: I0515 10:16:17.343789 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9-lib-modules\") pod \"kube-proxy-rp2jz\" (UID: \"0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9\") " pod="kube-system/kube-proxy-rp2jz" May 15 10:16:17.343897 kubelet[1416]: I0515 10:16:17.343815 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f4kc\" (UniqueName: \"kubernetes.io/projected/0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9-kube-api-access-4f4kc\") pod \"kube-proxy-rp2jz\" (UID: \"0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9\") " pod="kube-system/kube-proxy-rp2jz" May 15 10:16:17.343897 kubelet[1416]: I0515 10:16:17.343832 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cni-path\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343897 kubelet[1416]: I0515 10:16:17.343847 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-xtables-lock\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343897 kubelet[1416]: I0515 10:16:17.343862 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-config-path\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.343897 kubelet[1416]: I0515 10:16:17.343877 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-hubble-tls\") pod \"cilium-lmmxc\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " pod="kube-system/cilium-lmmxc" May 15 10:16:17.344691 systemd[1]: Created slice kubepods-besteffort-pod0d2301c0_7a70_4ce7_97b5_dc1324dbbdf9.slice. May 15 10:16:17.445049 kubelet[1416]: I0515 10:16:17.444995 1416 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 15 10:16:17.642791 kubelet[1416]: E0515 10:16:17.642122 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:17.643322 env[1217]: time="2025-05-15T10:16:17.643286792Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmmxc,Uid:ffb4e6bb-8de9-41d1-bfcb-d7af27931a34,Namespace:kube-system,Attempt:0,}" May 15 10:16:17.656755 kubelet[1416]: E0515 10:16:17.656715 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:17.657312 env[1217]: time="2025-05-15T10:16:17.657260712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rp2jz,Uid:0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9,Namespace:kube-system,Attempt:0,}" May 15 10:16:18.299010 env[1217]: time="2025-05-15T10:16:18.298965632Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:18.301563 env[1217]: time="2025-05-15T10:16:18.301525992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:18.302497 env[1217]: time="2025-05-15T10:16:18.302450032Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:18.304258 env[1217]: time="2025-05-15T10:16:18.304231992Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:18.305652 env[1217]: time="2025-05-15T10:16:18.305613632Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:18.307094 env[1217]: time="2025-05-15T10:16:18.307065672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:18.308718 env[1217]: time="2025-05-15T10:16:18.308691232Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:18.310376 env[1217]: time="2025-05-15T10:16:18.310349472Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:18.323828 kubelet[1416]: E0515 10:16:18.323795 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:18.345185 env[1217]: time="2025-05-15T10:16:18.343285312Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:16:18.345185 env[1217]: time="2025-05-15T10:16:18.343324352Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:16:18.345185 env[1217]: time="2025-05-15T10:16:18.343334352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:16:18.345185 env[1217]: time="2025-05-15T10:16:18.343571712Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160 pid=1483 runtime=io.containerd.runc.v2 May 15 10:16:18.349793 env[1217]: time="2025-05-15T10:16:18.349721472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:16:18.349793 env[1217]: time="2025-05-15T10:16:18.349758432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:16:18.349793 env[1217]: time="2025-05-15T10:16:18.349769072Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:16:18.350090 env[1217]: time="2025-05-15T10:16:18.350044712Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7c68ee8f80fc428a687c0bab5b5ec56b77d6f9bf7773c7de948a0053ae81b048 pid=1494 runtime=io.containerd.runc.v2 May 15 10:16:18.363690 systemd[1]: Started cri-containerd-7c68ee8f80fc428a687c0bab5b5ec56b77d6f9bf7773c7de948a0053ae81b048.scope. May 15 10:16:18.366465 systemd[1]: Started cri-containerd-bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160.scope. 
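The kubelet entries above from reconciler_common.go:251 enumerate every volume the attach/mount reconciler verifies before cilium-lmmxc and kube-proxy-rp2jz can start: hostPath mounts such as bpf-maps and cilium-cgroup, the kube-proxy ConfigMap, projected service-account tokens, and the clustermesh Secret. A minimal Python sketch, not part of the log, that groups those entries by pod; the regular expression assumes only the quoted fields visible in the journal text above.

    import re
    from collections import defaultdict

    # Matches the quoted fields of the reconciler_common.go:251 entries above.
    VOLUME = re.compile(
        r'volume \\"(?P<volume>[^"\\]+)\\" '
        r'\(UniqueName: \\"(?P<unique>[^"\\]+)\\"\) '
        r'pod \\"(?P<pod>[^"\\]+)\\"'
    )

    def volumes_by_pod(journal_text: str) -> dict:
        """Group VerifyControllerAttachedVolume entries by pod name."""
        grouped = defaultdict(list)
        for m in VOLUME.finditer(journal_text):
            grouped[m.group("pod")].append((m.group("volume"), m.group("unique")))
        return dict(grouped)

Fed the journal text above, this separates the long list of cilium-lmmxc hostPath, secret and projected volumes from the kube-proxy-rp2jz ones.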
May 15 10:16:18.408315 env[1217]: time="2025-05-15T10:16:18.408270352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rp2jz,Uid:0d2301c0-7a70-4ce7-97b5-dc1324dbbdf9,Namespace:kube-system,Attempt:0,} returns sandbox id \"7c68ee8f80fc428a687c0bab5b5ec56b77d6f9bf7773c7de948a0053ae81b048\"" May 15 10:16:18.409266 kubelet[1416]: E0515 10:16:18.409237 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:18.411542 env[1217]: time="2025-05-15T10:16:18.411418832Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\"" May 15 10:16:18.415203 env[1217]: time="2025-05-15T10:16:18.415170912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lmmxc,Uid:ffb4e6bb-8de9-41d1-bfcb-d7af27931a34,Namespace:kube-system,Attempt:0,} returns sandbox id \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\"" May 15 10:16:18.415691 kubelet[1416]: E0515 10:16:18.415667 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:18.451266 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2586020137.mount: Deactivated successfully. May 15 10:16:19.324424 kubelet[1416]: E0515 10:16:19.324367 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:19.453070 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233004071.mount: Deactivated successfully. May 15 10:16:19.921469 env[1217]: time="2025-05-15T10:16:19.921419552Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:19.922760 env[1217]: time="2025-05-15T10:16:19.922732312Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:19.924032 env[1217]: time="2025-05-15T10:16:19.923999432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:19.925213 env[1217]: time="2025-05-15T10:16:19.925162152Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:152638222ecf265eb8e5352e3c50e8fc520994e8ffcff1ee1490c975f7fc2b36,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:19.925548 env[1217]: time="2025-05-15T10:16:19.925524312Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.4\" returns image reference \"sha256:62c496efa595c8eb7d098e43430b2b94ad66812214759a7ea9daaaa1ed901fc7\"" May 15 10:16:19.926812 env[1217]: time="2025-05-15T10:16:19.926773592Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 15 10:16:19.927915 env[1217]: time="2025-05-15T10:16:19.927883352Z" level=info msg="CreateContainer within sandbox \"7c68ee8f80fc428a687c0bab5b5ec56b77d6f9bf7773c7de948a0053ae81b048\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 15 10:16:19.939011 env[1217]: time="2025-05-15T10:16:19.938973232Z" level=info msg="CreateContainer within sandbox 
\"7c68ee8f80fc428a687c0bab5b5ec56b77d6f9bf7773c7de948a0053ae81b048\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bbd05f34544681a36aa0e34522e6a38accf539cdf4fd52dc4fa1253b7d9ae52a\"" May 15 10:16:19.939814 env[1217]: time="2025-05-15T10:16:19.939785152Z" level=info msg="StartContainer for \"bbd05f34544681a36aa0e34522e6a38accf539cdf4fd52dc4fa1253b7d9ae52a\"" May 15 10:16:19.956093 systemd[1]: Started cri-containerd-bbd05f34544681a36aa0e34522e6a38accf539cdf4fd52dc4fa1253b7d9ae52a.scope. May 15 10:16:19.995360 env[1217]: time="2025-05-15T10:16:19.995315232Z" level=info msg="StartContainer for \"bbd05f34544681a36aa0e34522e6a38accf539cdf4fd52dc4fa1253b7d9ae52a\" returns successfully" May 15 10:16:20.324875 kubelet[1416]: E0515 10:16:20.324777 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:20.503985 kubelet[1416]: E0515 10:16:20.503901 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:20.518903 kubelet[1416]: I0515 10:16:20.518827 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rp2jz" podStartSLOduration=3.002895832 podStartE2EDuration="4.518809832s" podCreationTimestamp="2025-05-15 10:16:16 +0000 UTC" firstStartedPulling="2025-05-15 10:16:18.410691912 +0000 UTC m=+2.887405321" lastFinishedPulling="2025-05-15 10:16:19.926605872 +0000 UTC m=+4.403319321" observedRunningTime="2025-05-15 10:16:20.516052432 +0000 UTC m=+4.992765881" watchObservedRunningTime="2025-05-15 10:16:20.518809832 +0000 UTC m=+4.995523321" May 15 10:16:21.325622 kubelet[1416]: E0515 10:16:21.325577 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:21.506107 kubelet[1416]: E0515 10:16:21.506030 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:22.326273 kubelet[1416]: E0515 10:16:22.326216 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:23.327205 kubelet[1416]: E0515 10:16:23.327160 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:23.783706 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323974724.mount: Deactivated successfully. 
May 15 10:16:24.327801 kubelet[1416]: E0515 10:16:24.327745 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:25.328524 kubelet[1416]: E0515 10:16:25.328479 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:25.940995 env[1217]: time="2025-05-15T10:16:25.940950912Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:25.942401 env[1217]: time="2025-05-15T10:16:25.942345632Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:25.943725 env[1217]: time="2025-05-15T10:16:25.943693672Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:25.944199 env[1217]: time="2025-05-15T10:16:25.944175352Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 15 10:16:25.946797 env[1217]: time="2025-05-15T10:16:25.946765832Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:16:25.956844 env[1217]: time="2025-05-15T10:16:25.956805192Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4\"" May 15 10:16:25.957379 env[1217]: time="2025-05-15T10:16:25.957353432Z" level=info msg="StartContainer for \"1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4\"" May 15 10:16:25.975137 systemd[1]: Started cri-containerd-1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4.scope. May 15 10:16:26.003735 env[1217]: time="2025-05-15T10:16:26.003692552Z" level=info msg="StartContainer for \"1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4\" returns successfully" May 15 10:16:26.047214 systemd[1]: cri-containerd-1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4.scope: Deactivated successfully. 
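The cilium image above is pulled by tag and digest at once (quay.io/cilium/cilium:v1.12.5@sha256:06ce...), and containerd answers with the resolved image ID (sha256:b69c...). A small sketch, not tied to any containerd API, of how such a reference decomposes.

    def split_image_ref(ref: str):
        """Split 'repository[:tag][@sha256:digest]' as seen in the PullImage entries."""
        digest = None
        if "@" in ref:
            ref, digest = ref.split("@", 1)
        repo, _, tag = ref.rpartition(":")
        if not repo or "/" in tag:   # no tag present, or the ':' belonged to a registry port
            repo, tag = ref, None
        return repo, tag, digest

On the cilium reference this yields the repository quay.io/cilium/cilium, the tag v1.12.5 and the sha256 digest; the pause and kube-proxy pulls above carry a tag only.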
May 15 10:16:26.193489 env[1217]: time="2025-05-15T10:16:26.193369952Z" level=info msg="shim disconnected" id=1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4 May 15 10:16:26.193489 env[1217]: time="2025-05-15T10:16:26.193415192Z" level=warning msg="cleaning up after shim disconnected" id=1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4 namespace=k8s.io May 15 10:16:26.193489 env[1217]: time="2025-05-15T10:16:26.193425152Z" level=info msg="cleaning up dead shim" May 15 10:16:26.199312 env[1217]: time="2025-05-15T10:16:26.199274992Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:16:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1767 runtime=io.containerd.runc.v2\n" May 15 10:16:26.329650 kubelet[1416]: E0515 10:16:26.329580 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:26.512107 kubelet[1416]: E0515 10:16:26.512011 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:26.514115 env[1217]: time="2025-05-15T10:16:26.514072032Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 10:16:26.526226 env[1217]: time="2025-05-15T10:16:26.526167232Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf\"" May 15 10:16:26.527265 env[1217]: time="2025-05-15T10:16:26.527236392Z" level=info msg="StartContainer for \"fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf\"" May 15 10:16:26.539744 systemd[1]: Started cri-containerd-fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf.scope. May 15 10:16:26.577655 env[1217]: time="2025-05-15T10:16:26.577610192Z" level=info msg="StartContainer for \"fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf\" returns successfully" May 15 10:16:26.590081 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 10:16:26.590275 systemd[1]: Stopped systemd-sysctl.service. May 15 10:16:26.590528 systemd[1]: Stopping systemd-sysctl.service... May 15 10:16:26.592230 systemd[1]: Starting systemd-sysctl.service... May 15 10:16:26.594384 systemd[1]: cri-containerd-fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf.scope: Deactivated successfully. May 15 10:16:26.600801 systemd[1]: Finished systemd-sysctl.service. 
May 15 10:16:26.625093 env[1217]: time="2025-05-15T10:16:26.625046952Z" level=info msg="shim disconnected" id=fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf May 15 10:16:26.625093 env[1217]: time="2025-05-15T10:16:26.625091792Z" level=warning msg="cleaning up after shim disconnected" id=fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf namespace=k8s.io May 15 10:16:26.625093 env[1217]: time="2025-05-15T10:16:26.625100872Z" level=info msg="cleaning up dead shim" May 15 10:16:26.631478 env[1217]: time="2025-05-15T10:16:26.631422272Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:16:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1830 runtime=io.containerd.runc.v2\n" May 15 10:16:26.953161 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4-rootfs.mount: Deactivated successfully. May 15 10:16:27.329932 kubelet[1416]: E0515 10:16:27.329807 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:27.515165 kubelet[1416]: E0515 10:16:27.514653 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:27.516520 env[1217]: time="2025-05-15T10:16:27.516482912Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 10:16:27.528801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885549377.mount: Deactivated successfully. May 15 10:16:27.532321 env[1217]: time="2025-05-15T10:16:27.532271632Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9\"" May 15 10:16:27.532929 env[1217]: time="2025-05-15T10:16:27.532862832Z" level=info msg="StartContainer for \"2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9\"" May 15 10:16:27.551334 systemd[1]: Started cri-containerd-2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9.scope. May 15 10:16:27.588389 env[1217]: time="2025-05-15T10:16:27.588290512Z" level=info msg="StartContainer for \"2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9\" returns successfully" May 15 10:16:27.596534 systemd[1]: cri-containerd-2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9.scope: Deactivated successfully. 
May 15 10:16:27.616280 env[1217]: time="2025-05-15T10:16:27.616237352Z" level=info msg="shim disconnected" id=2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9 May 15 10:16:27.616280 env[1217]: time="2025-05-15T10:16:27.616283512Z" level=warning msg="cleaning up after shim disconnected" id=2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9 namespace=k8s.io May 15 10:16:27.616490 env[1217]: time="2025-05-15T10:16:27.616293072Z" level=info msg="cleaning up dead shim" May 15 10:16:27.622964 env[1217]: time="2025-05-15T10:16:27.622930952Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:16:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1887 runtime=io.containerd.runc.v2\n" May 15 10:16:27.952845 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9-rootfs.mount: Deactivated successfully. May 15 10:16:28.330889 kubelet[1416]: E0515 10:16:28.330770 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:28.518534 kubelet[1416]: E0515 10:16:28.518299 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:28.520145 env[1217]: time="2025-05-15T10:16:28.520094672Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 10:16:28.535950 env[1217]: time="2025-05-15T10:16:28.535902032Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731\"" May 15 10:16:28.536446 env[1217]: time="2025-05-15T10:16:28.536419472Z" level=info msg="StartContainer for \"45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731\"" May 15 10:16:28.553428 systemd[1]: Started cri-containerd-45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731.scope. May 15 10:16:28.600081 env[1217]: time="2025-05-15T10:16:28.599345232Z" level=info msg="StartContainer for \"45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731\" returns successfully" May 15 10:16:28.599549 systemd[1]: cri-containerd-45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731.scope: Deactivated successfully. May 15 10:16:28.618565 env[1217]: time="2025-05-15T10:16:28.618513272Z" level=info msg="shim disconnected" id=45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731 May 15 10:16:28.618565 env[1217]: time="2025-05-15T10:16:28.618568832Z" level=warning msg="cleaning up after shim disconnected" id=45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731 namespace=k8s.io May 15 10:16:28.618767 env[1217]: time="2025-05-15T10:16:28.618578112Z" level=info msg="cleaning up dead shim" May 15 10:16:28.625960 env[1217]: time="2025-05-15T10:16:28.625917352Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:16:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1943 runtime=io.containerd.runc.v2\n" May 15 10:16:28.953035 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731-rootfs.mount: Deactivated successfully. 
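Between 10:16:25 and 10:16:28 the cilium pod runs its init containers strictly in sequence inside the bb858f8c... sandbox: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state. Each StartContainer is followed by the scope deactivating and the shim being cleaned up, because an init container must exit successfully before the next one starts. A sketch that recovers that order from the CreateContainer entries; only the &ContainerMetadata{Name:...} field shown above is assumed.

    import re

    NAME = re.compile(r"ContainerMetadata\{Name:([\w-]+),Attempt:\d+,\}")

    def container_order(journal_text: str):
        """Container names in the order their CreateContainer entries appear."""
        seen = []
        for name in NAME.findall(journal_text):
            if name not in seen:
                seen.append(name)
        return seen

Over this stretch of the journal it returns ['mount-cgroup', 'apply-sysctl-overwrites', 'mount-bpf-fs', 'clean-cilium-state'], matching the order in which the scopes were started and torn down.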
May 15 10:16:29.331396 kubelet[1416]: E0515 10:16:29.331298 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:29.522029 kubelet[1416]: E0515 10:16:29.522002 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:29.523857 env[1217]: time="2025-05-15T10:16:29.523792352Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 10:16:29.542335 env[1217]: time="2025-05-15T10:16:29.542276792Z" level=info msg="CreateContainer within sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf\"" May 15 10:16:29.542850 env[1217]: time="2025-05-15T10:16:29.542825592Z" level=info msg="StartContainer for \"93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf\"" May 15 10:16:29.561198 systemd[1]: Started cri-containerd-93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf.scope. May 15 10:16:29.597208 env[1217]: time="2025-05-15T10:16:29.596816392Z" level=info msg="StartContainer for \"93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf\" returns successfully" May 15 10:16:29.777324 kubelet[1416]: I0515 10:16:29.777290 1416 kubelet_node_status.go:502] "Fast updating node status as it just became ready" May 15 10:16:29.894500 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 15 10:16:30.128488 kernel: Initializing XFRM netlink socket May 15 10:16:30.130489 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
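The kernel warning about unprivileged eBPF fires as the cilium-agent starts loading its BPF programs. The knob behind the warning is the kernel.unprivileged_bpf_disabled sysctl; a quick check one could run on such a node (0 means unprivileged eBPF is enabled, 1 disabled until reboot, 2 disabled permanently).

    from pathlib import Path

    def unprivileged_bpf_state() -> str:
        """Interpret kernel.unprivileged_bpf_disabled, the knob behind the warning above."""
        value = Path("/proc/sys/kernel/unprivileged_bpf_disabled").read_text().strip()
        return {
            "0": "unprivileged eBPF enabled (the state the kernel is warning about)",
            "1": "disabled until reboot",
            "2": "disabled permanently",
        }.get(value, f"unexpected value {value!r}")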
May 15 10:16:30.331661 kubelet[1416]: E0515 10:16:30.331607 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:30.525974 kubelet[1416]: E0515 10:16:30.525872 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:30.541632 kubelet[1416]: I0515 10:16:30.541513 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lmmxc" podStartSLOduration=7.012400792 podStartE2EDuration="14.541496832s" podCreationTimestamp="2025-05-15 10:16:16 +0000 UTC" firstStartedPulling="2025-05-15 10:16:18.416210832 +0000 UTC m=+2.892924281" lastFinishedPulling="2025-05-15 10:16:25.945306872 +0000 UTC m=+10.422020321" observedRunningTime="2025-05-15 10:16:30.541276312 +0000 UTC m=+15.017989761" watchObservedRunningTime="2025-05-15 10:16:30.541496832 +0000 UTC m=+15.018210281" May 15 10:16:31.332051 kubelet[1416]: E0515 10:16:31.331986 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:31.527745 kubelet[1416]: E0515 10:16:31.527704 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:31.739352 systemd-networkd[1037]: cilium_host: Link UP May 15 10:16:31.740634 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 15 10:16:31.740446 systemd-networkd[1037]: cilium_net: Link UP May 15 10:16:31.740504 systemd-networkd[1037]: cilium_net: Gained carrier May 15 10:16:31.740695 systemd-networkd[1037]: cilium_host: Gained carrier May 15 10:16:31.775575 systemd-networkd[1037]: cilium_net: Gained IPv6LL May 15 10:16:31.817597 systemd-networkd[1037]: cilium_vxlan: Link UP May 15 10:16:31.817606 systemd-networkd[1037]: cilium_vxlan: Gained carrier May 15 10:16:32.131488 kernel: NET: Registered PF_ALG protocol family May 15 10:16:32.332657 kubelet[1416]: E0515 10:16:32.332606 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:32.472655 systemd-networkd[1037]: cilium_host: Gained IPv6LL May 15 10:16:32.529238 kubelet[1416]: E0515 10:16:32.529197 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:32.704798 systemd-networkd[1037]: lxc_health: Link UP May 15 10:16:32.716668 systemd-networkd[1037]: lxc_health: Gained carrier May 15 10:16:32.717563 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 10:16:33.000389 systemd[1]: Created slice kubepods-besteffort-pod11b15180_1e9b_4a67_9c05_33019052cf75.slice. 
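systemd-networkd is bringing up the Cilium datapath interfaces here: cilium_host and cilium_net gain carrier and IPv6 link-local addresses, cilium_vxlan carries the overlay, and lxc_health backs the agent's health endpoint. A sketch that reads the same state back from sysfs on the node; the interface list is taken from the entries above.

    from pathlib import Path

    CILIUM_LINKS = ["cilium_host", "cilium_net", "cilium_vxlan", "lxc_health"]

    def link_states() -> dict:
        """Report operstate for the interfaces systemd-networkd brought up above."""
        states = {}
        for name in CILIUM_LINKS:
            operstate = Path(f"/sys/class/net/{name}/operstate")
            states[name] = operstate.read_text().strip() if operstate.exists() else "absent"
        return states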
May 15 10:16:33.130741 kubelet[1416]: I0515 10:16:33.130684 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9xzv\" (UniqueName: \"kubernetes.io/projected/11b15180-1e9b-4a67-9c05-33019052cf75-kube-api-access-p9xzv\") pod \"nginx-deployment-7fcdb87857-l8wjl\" (UID: \"11b15180-1e9b-4a67-9c05-33019052cf75\") " pod="default/nginx-deployment-7fcdb87857-l8wjl" May 15 10:16:33.302935 env[1217]: time="2025-05-15T10:16:33.302826152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-l8wjl,Uid:11b15180-1e9b-4a67-9c05-33019052cf75,Namespace:default,Attempt:0,}" May 15 10:16:33.333044 kubelet[1416]: E0515 10:16:33.333006 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:33.411645 systemd-networkd[1037]: lxc350c3abe8b91: Link UP May 15 10:16:33.424510 kernel: eth0: renamed from tmp0158e May 15 10:16:33.431506 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 10:16:33.431596 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc350c3abe8b91: link becomes ready May 15 10:16:33.432215 systemd-networkd[1037]: lxc350c3abe8b91: Gained carrier May 15 10:16:33.689609 systemd-networkd[1037]: cilium_vxlan: Gained IPv6LL May 15 10:16:34.072625 systemd-networkd[1037]: lxc_health: Gained IPv6LL May 15 10:16:34.333511 kubelet[1416]: E0515 10:16:34.333334 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:34.669847 kubelet[1416]: E0515 10:16:34.669713 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:34.842549 systemd-networkd[1037]: lxc350c3abe8b91: Gained IPv6LL May 15 10:16:35.333626 kubelet[1416]: E0515 10:16:35.333576 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:36.318775 kubelet[1416]: E0515 10:16:36.318729 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:36.334357 kubelet[1416]: E0515 10:16:36.334321 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:36.902362 env[1217]: time="2025-05-15T10:16:36.902290432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:16:36.902362 env[1217]: time="2025-05-15T10:16:36.902328672Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:16:36.902362 env[1217]: time="2025-05-15T10:16:36.902338912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:16:36.902735 env[1217]: time="2025-05-15T10:16:36.902475912Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0158e40f13005d3228b43ba1d74b0c0ea57536e8bf24ecc3be463b8425abe8b3 pid=2492 runtime=io.containerd.runc.v2 May 15 10:16:36.914375 systemd[1]: Started cri-containerd-0158e40f13005d3228b43ba1d74b0c0ea57536e8bf24ecc3be463b8425abe8b3.scope. 
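The "eth0: renamed from tmp0158e" line is the container-side veth being moved into the new pod's namespace and renamed. For each of the three sandboxes in this log the temporary name is "tmp" plus the first five hex characters of the sandbox ID, so the helper below is an observation about this log rather than a documented rule; the host-side lxc... name is assigned by the CNI plugin and is not derived from the sandbox ID here.

    def temp_ifname(sandbox_id: str) -> str:
        """Temporary veth name as seen in the 'eth0: renamed from tmpXXXXX' lines."""
        return "tmp" + sandbox_id[:5]

    # "0158e40f13005d3228..." -> "tmp0158e", matching the rename message above.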
May 15 10:16:37.003885 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:16:37.018858 env[1217]: time="2025-05-15T10:16:37.018815992Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-l8wjl,Uid:11b15180-1e9b-4a67-9c05-33019052cf75,Namespace:default,Attempt:0,} returns sandbox id \"0158e40f13005d3228b43ba1d74b0c0ea57536e8bf24ecc3be463b8425abe8b3\"" May 15 10:16:37.020322 env[1217]: time="2025-05-15T10:16:37.020287032Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 15 10:16:37.335538 kubelet[1416]: E0515 10:16:37.335054 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:38.336143 kubelet[1416]: E0515 10:16:38.336096 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:38.483299 kubelet[1416]: I0515 10:16:38.483132 1416 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 15 10:16:38.484560 kubelet[1416]: E0515 10:16:38.484515 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:38.540159 kubelet[1416]: E0515 10:16:38.540123 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:16:39.238136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2803754188.mount: Deactivated successfully. May 15 10:16:39.336789 kubelet[1416]: E0515 10:16:39.336729 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:40.337031 kubelet[1416]: E0515 10:16:40.336972 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:40.459438 env[1217]: time="2025-05-15T10:16:40.459391352Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:40.460973 env[1217]: time="2025-05-15T10:16:40.460940832Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:40.462810 env[1217]: time="2025-05-15T10:16:40.462781712Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:40.465982 env[1217]: time="2025-05-15T10:16:40.465936792Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:40.466795 env[1217]: time="2025-05-15T10:16:40.466762112Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 15 10:16:40.469294 env[1217]: time="2025-05-15T10:16:40.469263072Z" level=info msg="CreateContainer within sandbox \"0158e40f13005d3228b43ba1d74b0c0ea57536e8bf24ecc3be463b8425abe8b3\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" May 15 10:16:40.478559 env[1217]: time="2025-05-15T10:16:40.478519912Z" level=info msg="CreateContainer within sandbox \"0158e40f13005d3228b43ba1d74b0c0ea57536e8bf24ecc3be463b8425abe8b3\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"4e2d5cc9d6afae8cc3d8c5ed292de86b47bad57c7a398fd272355d5ae96b3bb4\"" May 15 10:16:40.479299 env[1217]: time="2025-05-15T10:16:40.479270832Z" level=info msg="StartContainer for \"4e2d5cc9d6afae8cc3d8c5ed292de86b47bad57c7a398fd272355d5ae96b3bb4\"" May 15 10:16:40.495506 systemd[1]: run-containerd-runc-k8s.io-4e2d5cc9d6afae8cc3d8c5ed292de86b47bad57c7a398fd272355d5ae96b3bb4-runc.k5fqh7.mount: Deactivated successfully. May 15 10:16:40.496924 systemd[1]: Started cri-containerd-4e2d5cc9d6afae8cc3d8c5ed292de86b47bad57c7a398fd272355d5ae96b3bb4.scope. May 15 10:16:40.534201 env[1217]: time="2025-05-15T10:16:40.534108032Z" level=info msg="StartContainer for \"4e2d5cc9d6afae8cc3d8c5ed292de86b47bad57c7a398fd272355d5ae96b3bb4\" returns successfully" May 15 10:16:40.552096 kubelet[1416]: I0515 10:16:40.552029 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-l8wjl" podStartSLOduration=5.103814552 podStartE2EDuration="8.552011632s" podCreationTimestamp="2025-05-15 10:16:32 +0000 UTC" firstStartedPulling="2025-05-15 10:16:37.019880232 +0000 UTC m=+21.496593641" lastFinishedPulling="2025-05-15 10:16:40.468077312 +0000 UTC m=+24.944790721" observedRunningTime="2025-05-15 10:16:40.551986872 +0000 UTC m=+25.028700321" watchObservedRunningTime="2025-05-15 10:16:40.552011632 +0000 UTC m=+25.028725081" May 15 10:16:41.337373 kubelet[1416]: E0515 10:16:41.337323 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:42.337516 kubelet[1416]: E0515 10:16:42.337452 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:43.338582 kubelet[1416]: E0515 10:16:43.338521 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:44.339127 kubelet[1416]: E0515 10:16:44.339091 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:44.503482 systemd[1]: Created slice kubepods-besteffort-pod122ee25a_d2cb_49fc_8a2d_cbf8aa9e89e2.slice. 
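The slice names systemd creates for these pods are derived mechanically from the pod's QoS class and UID, with dashes in the UID turned into underscores: UID 122ee25a-d2cb-49fc-8a2d-cbf8aa9e89e2 becomes kubepods-besteffort-pod122ee25a_d2cb_49fc_8a2d_cbf8aa9e89e2.slice, and the burstable cilium pod earlier follows the same pattern. A one-line sketch of that mapping for the systemd cgroup driver; only the burstable and besteffort classes seen in this log are covered.

    def pod_slice_name(pod_uid: str, qos_class: str) -> str:
        """Slice name the systemd cgroup driver creates for a burstable/besteffort pod."""
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    # pod_slice_name("122ee25a-d2cb-49fc-8a2d-cbf8aa9e89e2", "besteffort")
    # == "kubepods-besteffort-pod122ee25a_d2cb_49fc_8a2d_cbf8aa9e89e2.slice"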
May 15 10:16:44.584238 kubelet[1416]: I0515 10:16:44.584195 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/122ee25a-d2cb-49fc-8a2d-cbf8aa9e89e2-data\") pod \"nfs-server-provisioner-0\" (UID: \"122ee25a-d2cb-49fc-8a2d-cbf8aa9e89e2\") " pod="default/nfs-server-provisioner-0" May 15 10:16:44.584367 kubelet[1416]: I0515 10:16:44.584304 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tpld\" (UniqueName: \"kubernetes.io/projected/122ee25a-d2cb-49fc-8a2d-cbf8aa9e89e2-kube-api-access-8tpld\") pod \"nfs-server-provisioner-0\" (UID: \"122ee25a-d2cb-49fc-8a2d-cbf8aa9e89e2\") " pod="default/nfs-server-provisioner-0" May 15 10:16:44.806254 env[1217]: time="2025-05-15T10:16:44.806148927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:122ee25a-d2cb-49fc-8a2d-cbf8aa9e89e2,Namespace:default,Attempt:0,}" May 15 10:16:44.837995 systemd-networkd[1037]: lxc5f7209c8cd47: Link UP May 15 10:16:44.848492 kernel: eth0: renamed from tmp6250a May 15 10:16:44.857481 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 10:16:44.857540 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc5f7209c8cd47: link becomes ready May 15 10:16:44.857640 systemd-networkd[1037]: lxc5f7209c8cd47: Gained carrier May 15 10:16:44.991760 env[1217]: time="2025-05-15T10:16:44.991684075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:16:44.991760 env[1217]: time="2025-05-15T10:16:44.991727315Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:16:44.991760 env[1217]: time="2025-05-15T10:16:44.991737955Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:16:44.992100 env[1217]: time="2025-05-15T10:16:44.992069672Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6250a9628e9abd5549d02ce0a2ed88010f7b1b2174efb0a6ed63000e519fdd14 pid=2622 runtime=io.containerd.runc.v2 May 15 10:16:45.006062 systemd[1]: Started cri-containerd-6250a9628e9abd5549d02ce0a2ed88010f7b1b2174efb0a6ed63000e519fdd14.scope. 
May 15 10:16:45.028859 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:16:45.046215 env[1217]: time="2025-05-15T10:16:45.046174776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:122ee25a-d2cb-49fc-8a2d-cbf8aa9e89e2,Namespace:default,Attempt:0,} returns sandbox id \"6250a9628e9abd5549d02ce0a2ed88010f7b1b2174efb0a6ed63000e519fdd14\"" May 15 10:16:45.048222 env[1217]: time="2025-05-15T10:16:45.048192839Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 15 10:16:45.339935 kubelet[1416]: E0515 10:16:45.339870 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:46.340193 kubelet[1416]: E0515 10:16:46.340148 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:46.872679 systemd-networkd[1037]: lxc5f7209c8cd47: Gained IPv6LL May 15 10:16:47.145799 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2424258428.mount: Deactivated successfully. May 15 10:16:47.340344 kubelet[1416]: E0515 10:16:47.340286 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:48.341130 kubelet[1416]: E0515 10:16:48.341084 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:48.923382 env[1217]: time="2025-05-15T10:16:48.923279347Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:48.949597 env[1217]: time="2025-05-15T10:16:48.949537767Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:48.962950 env[1217]: time="2025-05-15T10:16:48.962910275Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:48.964859 env[1217]: time="2025-05-15T10:16:48.964824542Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:16:48.965718 env[1217]: time="2025-05-15T10:16:48.965679536Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 15 10:16:48.968321 env[1217]: time="2025-05-15T10:16:48.968290918Z" level=info msg="CreateContainer within sandbox \"6250a9628e9abd5549d02ce0a2ed88010f7b1b2174efb0a6ed63000e519fdd14\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 15 10:16:48.976078 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3199197618.mount: Deactivated successfully. 
May 15 10:16:48.977829 env[1217]: time="2025-05-15T10:16:48.977799012Z" level=info msg="CreateContainer within sandbox \"6250a9628e9abd5549d02ce0a2ed88010f7b1b2174efb0a6ed63000e519fdd14\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"fd1e255e477fb88b0e12243528c9b705c76bd50815ab7a7d6cfdc79511033cbd\"" May 15 10:16:48.978527 env[1217]: time="2025-05-15T10:16:48.978484928Z" level=info msg="StartContainer for \"fd1e255e477fb88b0e12243528c9b705c76bd50815ab7a7d6cfdc79511033cbd\"" May 15 10:16:48.992389 systemd[1]: Started cri-containerd-fd1e255e477fb88b0e12243528c9b705c76bd50815ab7a7d6cfdc79511033cbd.scope. May 15 10:16:49.053248 env[1217]: time="2025-05-15T10:16:49.053211636Z" level=info msg="StartContainer for \"fd1e255e477fb88b0e12243528c9b705c76bd50815ab7a7d6cfdc79511033cbd\" returns successfully" May 15 10:16:49.342220 kubelet[1416]: E0515 10:16:49.342113 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:49.570251 kubelet[1416]: I0515 10:16:49.570122 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.65087126 podStartE2EDuration="5.570100704s" podCreationTimestamp="2025-05-15 10:16:44 +0000 UTC" firstStartedPulling="2025-05-15 10:16:45.047704203 +0000 UTC m=+29.524417652" lastFinishedPulling="2025-05-15 10:16:48.966933687 +0000 UTC m=+33.443647096" observedRunningTime="2025-05-15 10:16:49.569744147 +0000 UTC m=+34.046457596" watchObservedRunningTime="2025-05-15 10:16:49.570100704 +0000 UTC m=+34.046814113" May 15 10:16:50.344838 kubelet[1416]: E0515 10:16:50.342939 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:51.343786 kubelet[1416]: E0515 10:16:51.343736 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:52.343896 kubelet[1416]: E0515 10:16:52.343844 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:53.344561 kubelet[1416]: E0515 10:16:53.344514 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:53.574864 update_engine[1210]: I0515 10:16:53.574805 1210 update_attempter.cc:509] Updating boot flags... 
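The pod_startup_latency_tracker entries can be re-derived from the timestamps they carry. For nfs-server-provisioner-0 above, podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration appears to be that minus the image-pull window; the sketch below reproduces the logged 5.570100704s and 1.65087126 figures to within rounding (the residual difference in the eighth decimal place suggests the tracker uses the monotonic m=+ offsets for the pull window). The SLO formula is inferred from the numbers, not taken from kubelet source.

    from datetime import datetime, timezone

    def parse(ts: str) -> datetime:
        # Tracker timestamps look like "2025-05-15 10:16:49.570100704 +0000 UTC m=+34.04...";
        # Python keeps microseconds only, so trim the nanosecond digits.
        date, clock, *_ = ts.split()
        return datetime.fromisoformat(f"{date} {clock[:15]}").replace(tzinfo=timezone.utc)

    created   = parse("2025-05-15 10:16:44 +0000 UTC")
    pull_from = parse("2025-05-15 10:16:45.047704203 +0000 UTC")
    pull_to   = parse("2025-05-15 10:16:48.966933687 +0000 UTC")
    running   = parse("2025-05-15 10:16:49.570100704 +0000 UTC")

    e2e = (running - created).total_seconds()            # ~5.570100 (logged 5.570100704s)
    slo = e2e - (pull_to - pull_from).total_seconds()    # ~1.650871 (logged 1.65087126)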
May 15 10:16:54.345585 kubelet[1416]: E0515 10:16:54.345550 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:55.345904 kubelet[1416]: E0515 10:16:55.345858 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:56.318547 kubelet[1416]: E0515 10:16:56.318506 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:56.346238 kubelet[1416]: E0515 10:16:56.346208 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:57.347403 kubelet[1416]: E0515 10:16:57.347359 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:58.348197 kubelet[1416]: E0515 10:16:58.348144 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:58.947035 systemd[1]: Created slice kubepods-besteffort-podf8b2efb5_83a1_4d19_b4f1_793bd3b846a5.slice. May 15 10:16:59.050837 kubelet[1416]: I0515 10:16:59.050782 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b684153b-0450-4efb-96c6-2514a9bf3f7c\" (UniqueName: \"kubernetes.io/nfs/f8b2efb5-83a1-4d19-b4f1-793bd3b846a5-pvc-b684153b-0450-4efb-96c6-2514a9bf3f7c\") pod \"test-pod-1\" (UID: \"f8b2efb5-83a1-4d19-b4f1-793bd3b846a5\") " pod="default/test-pod-1" May 15 10:16:59.050984 kubelet[1416]: I0515 10:16:59.050839 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-74k6g\" (UniqueName: \"kubernetes.io/projected/f8b2efb5-83a1-4d19-b4f1-793bd3b846a5-kube-api-access-74k6g\") pod \"test-pod-1\" (UID: \"f8b2efb5-83a1-4d19-b4f1-793bd3b846a5\") " pod="default/test-pod-1" May 15 10:16:59.180490 kernel: FS-Cache: Loaded May 15 10:16:59.212993 kernel: RPC: Registered named UNIX socket transport module. May 15 10:16:59.213091 kernel: RPC: Registered udp transport module. May 15 10:16:59.213114 kernel: RPC: Registered tcp transport module. May 15 10:16:59.213132 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 15 10:16:59.254491 kernel: FS-Cache: Netfs 'nfs' registered for caching May 15 10:16:59.349240 kubelet[1416]: E0515 10:16:59.349185 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:16:59.384490 kernel: NFS: Registering the id_resolver key type May 15 10:16:59.385483 kernel: Key type id_resolver registered May 15 10:16:59.385535 kernel: Key type id_legacy registered May 15 10:16:59.413166 nfsidmap[2757]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 15 10:16:59.416893 nfsidmap[2760]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 15 10:16:59.550305 env[1217]: time="2025-05-15T10:16:59.550181405Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f8b2efb5-83a1-4d19-b4f1-793bd3b846a5,Namespace:default,Attempt:0,}" May 15 10:16:59.573899 systemd-networkd[1037]: lxc34ae6d22466f: Link UP May 15 10:16:59.585491 kernel: eth0: renamed from tmp63730 May 15 10:16:59.594715 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 15 10:16:59.594796 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc34ae6d22466f: link becomes ready May 15 10:16:59.594899 systemd-networkd[1037]: lxc34ae6d22466f: Gained carrier May 15 10:16:59.765196 env[1217]: time="2025-05-15T10:16:59.765107318Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:16:59.765196 env[1217]: time="2025-05-15T10:16:59.765157518Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:16:59.765196 env[1217]: time="2025-05-15T10:16:59.765169518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:16:59.765674 env[1217]: time="2025-05-15T10:16:59.765641077Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/637307e4540f251ae3024ecc03bca32b5b3a4e815bb81c5cfb24d44bf0e4ac4f pid=2795 runtime=io.containerd.runc.v2 May 15 10:16:59.776927 systemd[1]: Started cri-containerd-637307e4540f251ae3024ecc03bca32b5b3a4e815bb81c5cfb24d44bf0e4ac4f.scope. 
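The nfsidmap complaints are NFSv4 ID mapping at work while the test pod's PVC is mounted: owners arrive as name@domain, and because nfs-server-provisioner.default.svc.cluster.local does not match the node's idmapd domain (localdomain), the name cannot be mapped and falls back to the anonymous user. A rough sketch of that check; real rpc.idmapd/nfsidmap behaviour, including the fallback user, is configurable through idmapd.conf.

    def map_nfs4_owner(owner: str, local_domain: str = "localdomain") -> str:
        """Roughly what the nfsidmap lookups above do with an NFSv4 'name@domain' owner."""
        name, _, domain = owner.partition("@")
        return name if domain == local_domain else "nobody"

    # map_nfs4_owner("root@nfs-server-provisioner.default.svc.cluster.local") -> "nobody"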
May 15 10:16:59.810722 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 15 10:16:59.827606 env[1217]: time="2025-05-15T10:16:59.827548387Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:f8b2efb5-83a1-4d19-b4f1-793bd3b846a5,Namespace:default,Attempt:0,} returns sandbox id \"637307e4540f251ae3024ecc03bca32b5b3a4e815bb81c5cfb24d44bf0e4ac4f\"" May 15 10:16:59.829207 env[1217]: time="2025-05-15T10:16:59.829182422Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 15 10:17:00.321360 env[1217]: time="2025-05-15T10:17:00.321196186Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:00.324257 env[1217]: time="2025-05-15T10:17:00.324206937Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:00.326248 env[1217]: time="2025-05-15T10:17:00.326213890Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:00.328135 env[1217]: time="2025-05-15T10:17:00.328099764Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:00.328915 env[1217]: time="2025-05-15T10:17:00.328879802Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 15 10:17:00.331519 env[1217]: time="2025-05-15T10:17:00.331481313Z" level=info msg="CreateContainer within sandbox \"637307e4540f251ae3024ecc03bca32b5b3a4e815bb81c5cfb24d44bf0e4ac4f\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 15 10:17:00.343665 env[1217]: time="2025-05-15T10:17:00.343613915Z" level=info msg="CreateContainer within sandbox \"637307e4540f251ae3024ecc03bca32b5b3a4e815bb81c5cfb24d44bf0e4ac4f\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"c9f962fb86a5ef6e84ad291a297ad8b1da96dffe93b9932b48c7f9e7db7c4812\"" May 15 10:17:00.344498 env[1217]: time="2025-05-15T10:17:00.344470192Z" level=info msg="StartContainer for \"c9f962fb86a5ef6e84ad291a297ad8b1da96dffe93b9932b48c7f9e7db7c4812\"" May 15 10:17:00.351998 kubelet[1416]: E0515 10:17:00.351932 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:00.363384 systemd[1]: Started cri-containerd-c9f962fb86a5ef6e84ad291a297ad8b1da96dffe93b9932b48c7f9e7db7c4812.scope. 
May 15 10:17:00.415072 env[1217]: time="2025-05-15T10:17:00.415028209Z" level=info msg="StartContainer for \"c9f962fb86a5ef6e84ad291a297ad8b1da96dffe93b9932b48c7f9e7db7c4812\" returns successfully" May 15 10:17:01.272630 systemd-networkd[1037]: lxc34ae6d22466f: Gained IPv6LL May 15 10:17:01.352177 kubelet[1416]: E0515 10:17:01.352139 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:02.353600 kubelet[1416]: E0515 10:17:02.353556 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:03.353953 kubelet[1416]: E0515 10:17:03.353912 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:04.354590 kubelet[1416]: E0515 10:17:04.354559 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:05.355336 kubelet[1416]: E0515 10:17:05.355297 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:06.356359 kubelet[1416]: E0515 10:17:06.356318 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:07.357836 kubelet[1416]: E0515 10:17:07.357789 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:08.280980 kubelet[1416]: I0515 10:17:08.280921 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=23.779425116 podStartE2EDuration="24.28090257s" podCreationTimestamp="2025-05-15 10:16:44 +0000 UTC" firstStartedPulling="2025-05-15 10:16:59.828446304 +0000 UTC m=+44.305159753" lastFinishedPulling="2025-05-15 10:17:00.329923798 +0000 UTC m=+44.806637207" observedRunningTime="2025-05-15 10:17:00.589653615 +0000 UTC m=+45.066367064" watchObservedRunningTime="2025-05-15 10:17:08.28090257 +0000 UTC m=+52.757616019" May 15 10:17:08.334280 env[1217]: time="2025-05-15T10:17:08.334214069Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 10:17:08.339501 env[1217]: time="2025-05-15T10:17:08.339438180Z" level=info msg="StopContainer for \"93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf\" with timeout 2 (s)" May 15 10:17:08.339906 env[1217]: time="2025-05-15T10:17:08.339877659Z" level=info msg="Stop container \"93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf\" with signal terminated" May 15 10:17:08.344950 systemd-networkd[1037]: lxc_health: Link DOWN May 15 10:17:08.344957 systemd-networkd[1037]: lxc_health: Lost carrier May 15 10:17:08.358691 kubelet[1416]: E0515 10:17:08.358651 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:08.385827 systemd[1]: cri-containerd-93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf.scope: Deactivated successfully. May 15 10:17:08.386157 systemd[1]: cri-containerd-93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf.scope: Consumed 6.481s CPU time. 
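The teardown of the cilium pod starts here: the 05-cilium.conf CNI config disappears (the fs change event above) and the kubelet stops the cilium-agent container with a 2 second timeout and "signal terminated", i.e. SIGTERM first and SIGKILL only if the process outlives the grace period. A sketch of that stop pattern for a single process; the real path goes through the CRI and the runc shim, so this only illustrates the signalling order.

    import os
    import signal
    import time

    def stop_process(pid: int, timeout: float = 2.0) -> None:
        """SIGTERM, wait up to `timeout` seconds, then SIGKILL (the order implied above)."""
        os.kill(pid, signal.SIGTERM)
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                os.kill(pid, 0)              # probe: still alive?
            except ProcessLookupError:
                return                       # exited within the grace period
            time.sleep(0.05)
        os.kill(pid, signal.SIGKILL)         # grace period exhausted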
May 15 10:17:08.402635 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf-rootfs.mount: Deactivated successfully. May 15 10:17:08.565935 env[1217]: time="2025-05-15T10:17:08.565707712Z" level=info msg="shim disconnected" id=93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf May 15 10:17:08.565935 env[1217]: time="2025-05-15T10:17:08.565755111Z" level=warning msg="cleaning up after shim disconnected" id=93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf namespace=k8s.io May 15 10:17:08.565935 env[1217]: time="2025-05-15T10:17:08.565764471Z" level=info msg="cleaning up dead shim" May 15 10:17:08.572673 env[1217]: time="2025-05-15T10:17:08.572633898Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:17:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2928 runtime=io.containerd.runc.v2\n" May 15 10:17:08.576077 env[1217]: time="2025-05-15T10:17:08.576037492Z" level=info msg="StopContainer for \"93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf\" returns successfully" May 15 10:17:08.576630 env[1217]: time="2025-05-15T10:17:08.576608491Z" level=info msg="StopPodSandbox for \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\"" May 15 10:17:08.576685 env[1217]: time="2025-05-15T10:17:08.576665131Z" level=info msg="Container to stop \"93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:17:08.576685 env[1217]: time="2025-05-15T10:17:08.576679931Z" level=info msg="Container to stop \"1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:17:08.576741 env[1217]: time="2025-05-15T10:17:08.576690931Z" level=info msg="Container to stop \"45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:17:08.576741 env[1217]: time="2025-05-15T10:17:08.576701571Z" level=info msg="Container to stop \"2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:17:08.576741 env[1217]: time="2025-05-15T10:17:08.576712331Z" level=info msg="Container to stop \"fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 10:17:08.578268 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160-shm.mount: Deactivated successfully. May 15 10:17:08.583624 systemd[1]: cri-containerd-bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160.scope: Deactivated successfully. May 15 10:17:08.602920 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160-rootfs.mount: Deactivated successfully. 
May 15 10:17:08.606359 env[1217]: time="2025-05-15T10:17:08.606312035Z" level=info msg="shim disconnected" id=bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160 May 15 10:17:08.606359 env[1217]: time="2025-05-15T10:17:08.606354195Z" level=warning msg="cleaning up after shim disconnected" id=bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160 namespace=k8s.io May 15 10:17:08.606516 env[1217]: time="2025-05-15T10:17:08.606363835Z" level=info msg="cleaning up dead shim" May 15 10:17:08.612604 env[1217]: time="2025-05-15T10:17:08.612569863Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:17:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2958 runtime=io.containerd.runc.v2\n" May 15 10:17:08.612883 env[1217]: time="2025-05-15T10:17:08.612851022Z" level=info msg="TearDown network for sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" successfully" May 15 10:17:08.612883 env[1217]: time="2025-05-15T10:17:08.612872022Z" level=info msg="StopPodSandbox for \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" returns successfully" May 15 10:17:08.808983 kubelet[1416]: I0515 10:17:08.808934 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-etc-cni-netd\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.808983 kubelet[1416]: I0515 10:17:08.808973 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-host-proc-sys-kernel\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.808983 kubelet[1416]: I0515 10:17:08.808990 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cni-path\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809220 kubelet[1416]: I0515 10:17:08.808993 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.809220 kubelet[1416]: I0515 10:17:08.809014 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s2w5s\" (UniqueName: \"kubernetes.io/projected/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-kube-api-access-s2w5s\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809220 kubelet[1416]: I0515 10:17:08.809063 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-hostproc\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809220 kubelet[1416]: I0515 10:17:08.809082 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-bpf-maps\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809220 kubelet[1416]: I0515 10:17:08.809097 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-lib-modules\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809220 kubelet[1416]: I0515 10:17:08.809133 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-cgroup\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809373 kubelet[1416]: I0515 10:17:08.809154 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-hubble-tls\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809373 kubelet[1416]: I0515 10:17:08.809171 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-run\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809373 kubelet[1416]: I0515 10:17:08.809185 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-xtables-lock\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809373 kubelet[1416]: I0515 10:17:08.809205 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-clustermesh-secrets\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809373 kubelet[1416]: I0515 10:17:08.809220 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-host-proc-sys-net\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 
10:17:08.809373 kubelet[1416]: I0515 10:17:08.809244 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-config-path\") pod \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\" (UID: \"ffb4e6bb-8de9-41d1-bfcb-d7af27931a34\") " May 15 10:17:08.809929 kubelet[1416]: I0515 10:17:08.809286 1416 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-etc-cni-netd\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.809929 kubelet[1416]: I0515 10:17:08.809313 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.809929 kubelet[1416]: I0515 10:17:08.809348 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cni-path" (OuterVolumeSpecName: "cni-path") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.809929 kubelet[1416]: I0515 10:17:08.809444 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.809929 kubelet[1416]: I0515 10:17:08.809500 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-hostproc" (OuterVolumeSpecName: "hostproc") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.810048 kubelet[1416]: I0515 10:17:08.809556 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.810048 kubelet[1416]: I0515 10:17:08.809571 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.810048 kubelet[1416]: I0515 10:17:08.809591 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). 
InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.810048 kubelet[1416]: I0515 10:17:08.809608 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.811659 kubelet[1416]: I0515 10:17:08.810608 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:08.811659 kubelet[1416]: I0515 10:17:08.811189 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 10:17:08.813049 kubelet[1416]: I0515 10:17:08.813017 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-kube-api-access-s2w5s" (OuterVolumeSpecName: "kube-api-access-s2w5s") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "kube-api-access-s2w5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 10:17:08.813150 systemd[1]: var-lib-kubelet-pods-ffb4e6bb\x2d8de9\x2d41d1\x2dbfcb\x2dd7af27931a34-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ds2w5s.mount: Deactivated successfully. May 15 10:17:08.813243 systemd[1]: var-lib-kubelet-pods-ffb4e6bb\x2d8de9\x2d41d1\x2dbfcb\x2dd7af27931a34-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 10:17:08.813303 kubelet[1416]: I0515 10:17:08.813143 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 10:17:08.813647 kubelet[1416]: I0515 10:17:08.813215 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" (UID: "ffb4e6bb-8de9-41d1-bfcb-d7af27931a34"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" May 15 10:17:08.909835 kubelet[1416]: I0515 10:17:08.909793 1416 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-hubble-tls\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.909835 kubelet[1416]: I0515 10:17:08.909824 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-run\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.909835 kubelet[1416]: I0515 10:17:08.909841 1416 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-xtables-lock\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910020 kubelet[1416]: I0515 10:17:08.909850 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-cgroup\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910020 kubelet[1416]: I0515 10:17:08.909859 1416 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-clustermesh-secrets\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910020 kubelet[1416]: I0515 10:17:08.909871 1416 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-host-proc-sys-net\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910020 kubelet[1416]: I0515 10:17:08.909878 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cilium-config-path\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910020 kubelet[1416]: I0515 10:17:08.909886 1416 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-host-proc-sys-kernel\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910020 kubelet[1416]: I0515 10:17:08.909893 1416 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-cni-path\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910020 kubelet[1416]: I0515 10:17:08.909900 1416 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-hostproc\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910020 kubelet[1416]: I0515 10:17:08.909914 1416 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-bpf-maps\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910194 kubelet[1416]: I0515 10:17:08.909922 1416 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-lib-modules\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:08.910194 kubelet[1416]: I0515 10:17:08.909930 1416 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s2w5s\" (UniqueName: \"kubernetes.io/projected/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34-kube-api-access-s2w5s\") on node \"10.0.0.71\" DevicePath 
\"\"" May 15 10:17:09.292820 systemd[1]: var-lib-kubelet-pods-ffb4e6bb\x2d8de9\x2d41d1\x2dbfcb\x2dd7af27931a34-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 15 10:17:09.359320 kubelet[1416]: E0515 10:17:09.359279 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:09.602388 kubelet[1416]: I0515 10:17:09.602313 1416 scope.go:117] "RemoveContainer" containerID="93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf" May 15 10:17:09.604579 env[1217]: time="2025-05-15T10:17:09.604537817Z" level=info msg="RemoveContainer for \"93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf\"" May 15 10:17:09.607258 systemd[1]: Removed slice kubepods-burstable-podffb4e6bb_8de9_41d1_bfcb_d7af27931a34.slice. May 15 10:17:09.607340 systemd[1]: kubepods-burstable-podffb4e6bb_8de9_41d1_bfcb_d7af27931a34.slice: Consumed 6.670s CPU time. May 15 10:17:09.607543 env[1217]: time="2025-05-15T10:17:09.607384532Z" level=info msg="RemoveContainer for \"93e18573ec42046767f009abf6d8fd633bd2a110152fe5af38569c6cc6dbd0cf\" returns successfully" May 15 10:17:09.607829 kubelet[1416]: I0515 10:17:09.607610 1416 scope.go:117] "RemoveContainer" containerID="45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731" May 15 10:17:09.608986 env[1217]: time="2025-05-15T10:17:09.608957769Z" level=info msg="RemoveContainer for \"45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731\"" May 15 10:17:09.611708 env[1217]: time="2025-05-15T10:17:09.611668364Z" level=info msg="RemoveContainer for \"45c72d02352cf55a7080ac1dedf34cc437ae488ac1a9eb4b526610688fcc9731\" returns successfully" May 15 10:17:09.611831 kubelet[1416]: I0515 10:17:09.611807 1416 scope.go:117] "RemoveContainer" containerID="2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9" May 15 10:17:09.612779 env[1217]: time="2025-05-15T10:17:09.612752242Z" level=info msg="RemoveContainer for \"2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9\"" May 15 10:17:09.615824 env[1217]: time="2025-05-15T10:17:09.615791197Z" level=info msg="RemoveContainer for \"2b518e1a3e21747c422ece182de90a9d82dbcae219622841112c326cc6d0f8a9\" returns successfully" May 15 10:17:09.616051 kubelet[1416]: I0515 10:17:09.616033 1416 scope.go:117] "RemoveContainer" containerID="fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf" May 15 10:17:09.617317 env[1217]: time="2025-05-15T10:17:09.617279114Z" level=info msg="RemoveContainer for \"fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf\"" May 15 10:17:09.620687 env[1217]: time="2025-05-15T10:17:09.620655828Z" level=info msg="RemoveContainer for \"fb4ebbaef66de2259361822bde3d0825d98748f145783a63848e1b154a8817cf\" returns successfully" May 15 10:17:09.620946 kubelet[1416]: I0515 10:17:09.620921 1416 scope.go:117] "RemoveContainer" containerID="1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4" May 15 10:17:09.621984 env[1217]: time="2025-05-15T10:17:09.621897066Z" level=info msg="RemoveContainer for \"1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4\"" May 15 10:17:09.626494 env[1217]: time="2025-05-15T10:17:09.626443938Z" level=info msg="RemoveContainer for \"1a3a739f3cedd7fcc16acc6039dcb149169ebc6d135bcc7f1f032d0814a5eeb4\" returns successfully" May 15 10:17:10.359721 kubelet[1416]: E0515 10:17:10.359642 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 15 10:17:10.498765 kubelet[1416]: I0515 10:17:10.498731 1416 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" path="/var/lib/kubelet/pods/ffb4e6bb-8de9-41d1-bfcb-d7af27931a34/volumes" May 15 10:17:11.302728 kubelet[1416]: I0515 10:17:11.302687 1416 memory_manager.go:355] "RemoveStaleState removing state" podUID="ffb4e6bb-8de9-41d1-bfcb-d7af27931a34" containerName="cilium-agent" May 15 10:17:11.307324 systemd[1]: Created slice kubepods-besteffort-pod208faeea_5665_4652_a402_02a7f0c0e51f.slice. May 15 10:17:11.329997 kubelet[1416]: I0515 10:17:11.329730 1416 status_manager.go:890] "Failed to get status for pod" podUID="62331e56-793a-41e0-8098-a70ebc513192" pod="kube-system/cilium-xf7nv" err="pods \"cilium-xf7nv\" is forbidden: User \"system:node:10.0.0.71\" cannot get resource \"pods\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.71' and this object" May 15 10:17:11.329997 kubelet[1416]: W0515 10:17:11.330072 1416 reflector.go:569] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.0.0.71" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.71' and this object May 15 10:17:11.329997 kubelet[1416]: E0515 10:17:11.330115 1416 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-clustermesh\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-clustermesh\" is forbidden: User \"system:node:10.0.0.71\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.71' and this object" logger="UnhandledError" May 15 10:17:11.329997 kubelet[1416]: W0515 10:17:11.330164 1416 reflector.go:569] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.0.0.71" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.71' and this object May 15 10:17:11.329997 kubelet[1416]: E0515 10:17:11.330175 1416 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:10.0.0.71\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.71' and this object" logger="UnhandledError" May 15 10:17:11.330477 kubelet[1416]: W0515 10:17:11.330203 1416 reflector.go:569] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.0.0.71" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.0.0.71' and this object May 15 10:17:11.330477 kubelet[1416]: E0515 10:17:11.330213 1416 reflector.go:166] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:10.0.0.71\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '10.0.0.71' and this object" logger="UnhandledError" May 15 10:17:11.330135 systemd[1]: Created slice 
kubepods-burstable-pod62331e56_793a_41e0_8098_a70ebc513192.slice. May 15 10:17:11.360249 kubelet[1416]: E0515 10:17:11.360186 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:11.424496 kubelet[1416]: I0515 10:17:11.424443 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cilium-run\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424579 kubelet[1416]: I0515 10:17:11.424501 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cni-path\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424579 kubelet[1416]: I0515 10:17:11.424523 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-xtables-lock\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424579 kubelet[1416]: I0515 10:17:11.424540 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62331e56-793a-41e0-8098-a70ebc513192-hubble-tls\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424659 kubelet[1416]: I0515 10:17:11.424583 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrms9\" (UniqueName: \"kubernetes.io/projected/208faeea-5665-4652-a402-02a7f0c0e51f-kube-api-access-qrms9\") pod \"cilium-operator-6c4d7847fc-rvzjd\" (UID: \"208faeea-5665-4652-a402-02a7f0c0e51f\") " pod="kube-system/cilium-operator-6c4d7847fc-rvzjd" May 15 10:17:11.424659 kubelet[1416]: I0515 10:17:11.424601 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-hostproc\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424659 kubelet[1416]: I0515 10:17:11.424615 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cilium-cgroup\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424659 kubelet[1416]: I0515 10:17:11.424631 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-lib-modules\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424659 kubelet[1416]: I0515 10:17:11.424649 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62331e56-793a-41e0-8098-a70ebc513192-clustermesh-secrets\") pod \"cilium-xf7nv\" 
(UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424762 kubelet[1416]: I0515 10:17:11.424664 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/208faeea-5665-4652-a402-02a7f0c0e51f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-rvzjd\" (UID: \"208faeea-5665-4652-a402-02a7f0c0e51f\") " pod="kube-system/cilium-operator-6c4d7847fc-rvzjd" May 15 10:17:11.424762 kubelet[1416]: I0515 10:17:11.424680 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-etc-cni-netd\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424762 kubelet[1416]: I0515 10:17:11.424698 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/62331e56-793a-41e0-8098-a70ebc513192-cilium-ipsec-secrets\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424762 kubelet[1416]: I0515 10:17:11.424717 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-host-proc-sys-net\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424762 kubelet[1416]: I0515 10:17:11.424737 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-host-proc-sys-kernel\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424903 kubelet[1416]: I0515 10:17:11.424752 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-bpf-maps\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424903 kubelet[1416]: I0515 10:17:11.424766 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62331e56-793a-41e0-8098-a70ebc513192-cilium-config-path\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.424903 kubelet[1416]: I0515 10:17:11.424784 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q2j5c\" (UniqueName: \"kubernetes.io/projected/62331e56-793a-41e0-8098-a70ebc513192-kube-api-access-q2j5c\") pod \"cilium-xf7nv\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " pod="kube-system/cilium-xf7nv" May 15 10:17:11.452185 kubelet[1416]: E0515 10:17:11.452149 1416 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:17:11.499267 kubelet[1416]: E0515 10:17:11.499204 1416 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted 
volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-q2j5c lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-xf7nv" podUID="62331e56-793a-41e0-8098-a70ebc513192" May 15 10:17:11.610084 kubelet[1416]: E0515 10:17:11.609995 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:11.611785 env[1217]: time="2025-05-15T10:17:11.611735539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rvzjd,Uid:208faeea-5665-4652-a402-02a7f0c0e51f,Namespace:kube-system,Attempt:0,}" May 15 10:17:11.625533 env[1217]: time="2025-05-15T10:17:11.625444478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:17:11.625665 env[1217]: time="2025-05-15T10:17:11.625513878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:17:11.625665 env[1217]: time="2025-05-15T10:17:11.625525758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:17:11.625776 env[1217]: time="2025-05-15T10:17:11.625732637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fe8972ca4b81b4a915b74574017b66a2f8778d4f23f5080213e4bb0418237102 pid=2983 runtime=io.containerd.runc.v2 May 15 10:17:11.643087 systemd[1]: Started cri-containerd-fe8972ca4b81b4a915b74574017b66a2f8778d4f23f5080213e4bb0418237102.scope. 
May 15 10:17:11.685146 env[1217]: time="2025-05-15T10:17:11.685106825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-rvzjd,Uid:208faeea-5665-4652-a402-02a7f0c0e51f,Namespace:kube-system,Attempt:0,} returns sandbox id \"fe8972ca4b81b4a915b74574017b66a2f8778d4f23f5080213e4bb0418237102\"" May 15 10:17:11.685914 kubelet[1416]: E0515 10:17:11.685889 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:11.686755 env[1217]: time="2025-05-15T10:17:11.686723422Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 15 10:17:11.727507 kubelet[1416]: I0515 10:17:11.727443 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-host-proc-sys-kernel\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727618 kubelet[1416]: I0515 10:17:11.727528 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62331e56-793a-41e0-8098-a70ebc513192-cilium-config-path\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727618 kubelet[1416]: I0515 10:17:11.727567 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q2j5c\" (UniqueName: \"kubernetes.io/projected/62331e56-793a-41e0-8098-a70ebc513192-kube-api-access-q2j5c\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727618 kubelet[1416]: I0515 10:17:11.727595 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cilium-cgroup\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727618 kubelet[1416]: I0515 10:17:11.727614 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-lib-modules\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727730 kubelet[1416]: I0515 10:17:11.727631 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cilium-run\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727730 kubelet[1416]: I0515 10:17:11.727649 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cni-path\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727730 kubelet[1416]: I0515 10:17:11.727663 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-etc-cni-netd\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: 
\"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727730 kubelet[1416]: I0515 10:17:11.727677 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-hostproc\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727730 kubelet[1416]: I0515 10:17:11.727691 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-host-proc-sys-net\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727730 kubelet[1416]: I0515 10:17:11.727706 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-bpf-maps\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727850 kubelet[1416]: I0515 10:17:11.727720 1416 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-xtables-lock\") pod \"62331e56-793a-41e0-8098-a70ebc513192\" (UID: \"62331e56-793a-41e0-8098-a70ebc513192\") " May 15 10:17:11.727850 kubelet[1416]: I0515 10:17:11.727794 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.727850 kubelet[1416]: I0515 10:17:11.727816 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.728101 kubelet[1416]: I0515 10:17:11.727971 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-hostproc" (OuterVolumeSpecName: "hostproc") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.728101 kubelet[1416]: I0515 10:17:11.727970 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cni-path" (OuterVolumeSpecName: "cni-path") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.728101 kubelet[1416]: I0515 10:17:11.728009 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.728101 kubelet[1416]: I0515 10:17:11.728035 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.728101 kubelet[1416]: I0515 10:17:11.728054 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.728243 kubelet[1416]: I0515 10:17:11.728070 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.728243 kubelet[1416]: I0515 10:17:11.728085 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.728325 kubelet[1416]: I0515 10:17:11.728303 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" May 15 10:17:11.729475 kubelet[1416]: I0515 10:17:11.729435 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62331e56-793a-41e0-8098-a70ebc513192-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" May 15 10:17:11.730575 kubelet[1416]: I0515 10:17:11.730539 1416 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62331e56-793a-41e0-8098-a70ebc513192-kube-api-access-q2j5c" (OuterVolumeSpecName: "kube-api-access-q2j5c") pod "62331e56-793a-41e0-8098-a70ebc513192" (UID: "62331e56-793a-41e0-8098-a70ebc513192"). InnerVolumeSpecName "kube-api-access-q2j5c". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" May 15 10:17:11.828023 kubelet[1416]: I0515 10:17:11.827970 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cilium-run\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828023 kubelet[1416]: I0515 10:17:11.828015 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cilium-cgroup\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828023 kubelet[1416]: I0515 10:17:11.828032 1416 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-lib-modules\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828210 kubelet[1416]: I0515 10:17:11.828048 1416 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-cni-path\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828210 kubelet[1416]: I0515 10:17:11.828064 1416 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-etc-cni-netd\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828210 kubelet[1416]: I0515 10:17:11.828078 1416 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-bpf-maps\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828210 kubelet[1416]: I0515 10:17:11.828092 1416 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-xtables-lock\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828210 kubelet[1416]: I0515 10:17:11.828106 1416 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-hostproc\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828210 kubelet[1416]: I0515 10:17:11.828121 1416 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-host-proc-sys-net\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828210 kubelet[1416]: I0515 10:17:11.828142 1416 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q2j5c\" (UniqueName: \"kubernetes.io/projected/62331e56-793a-41e0-8098-a70ebc513192-kube-api-access-q2j5c\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828210 kubelet[1416]: I0515 10:17:11.828158 1416 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/62331e56-793a-41e0-8098-a70ebc513192-host-proc-sys-kernel\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:11.828402 kubelet[1416]: I0515 10:17:11.828172 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/62331e56-793a-41e0-8098-a70ebc513192-cilium-config-path\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:12.360970 kubelet[1416]: E0515 10:17:12.360886 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:12.502372 systemd[1]: Removed slice 
kubepods-burstable-pod62331e56_793a_41e0_8098_a70ebc513192.slice. May 15 10:17:12.527594 kubelet[1416]: E0515 10:17:12.527287 1416 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition May 15 10:17:12.527594 kubelet[1416]: E0515 10:17:12.527370 1416 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62331e56-793a-41e0-8098-a70ebc513192-cilium-ipsec-secrets podName:62331e56-793a-41e0-8098-a70ebc513192 nodeName:}" failed. No retries permitted until 2025-05-15 10:17:13.027345323 +0000 UTC m=+57.504058772 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/62331e56-793a-41e0-8098-a70ebc513192-cilium-ipsec-secrets") pod "cilium-xf7nv" (UID: "62331e56-793a-41e0-8098-a70ebc513192") : failed to sync secret cache: timed out waiting for the condition May 15 10:17:12.527823 kubelet[1416]: E0515 10:17:12.527629 1416 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 15 10:17:12.527823 kubelet[1416]: E0515 10:17:12.527655 1416 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-xf7nv: failed to sync secret cache: timed out waiting for the condition May 15 10:17:12.527823 kubelet[1416]: E0515 10:17:12.527633 1416 secret.go:189] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition May 15 10:17:12.527823 kubelet[1416]: E0515 10:17:12.527716 1416 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/62331e56-793a-41e0-8098-a70ebc513192-hubble-tls podName:62331e56-793a-41e0-8098-a70ebc513192 nodeName:}" failed. No retries permitted until 2025-05-15 10:17:13.027704243 +0000 UTC m=+57.504417692 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/62331e56-793a-41e0-8098-a70ebc513192-hubble-tls") pod "cilium-xf7nv" (UID: "62331e56-793a-41e0-8098-a70ebc513192") : failed to sync secret cache: timed out waiting for the condition May 15 10:17:12.527823 kubelet[1416]: E0515 10:17:12.527750 1416 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/62331e56-793a-41e0-8098-a70ebc513192-clustermesh-secrets podName:62331e56-793a-41e0-8098-a70ebc513192 nodeName:}" failed. No retries permitted until 2025-05-15 10:17:13.027732523 +0000 UTC m=+57.504445972 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/62331e56-793a-41e0-8098-a70ebc513192-clustermesh-secrets") pod "cilium-xf7nv" (UID: "62331e56-793a-41e0-8098-a70ebc513192") : failed to sync secret cache: timed out waiting for the condition May 15 10:17:12.537750 systemd[1]: run-containerd-runc-k8s.io-fe8972ca4b81b4a915b74574017b66a2f8778d4f23f5080213e4bb0418237102-runc.tB5zZG.mount: Deactivated successfully. May 15 10:17:12.537842 systemd[1]: var-lib-kubelet-pods-62331e56\x2d793a\x2d41e0\x2d8098\x2da70ebc513192-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq2j5c.mount: Deactivated successfully. May 15 10:17:12.659441 systemd[1]: Created slice kubepods-burstable-pod00c25477_c0b5_4ead_b621_3c0df3344b52.slice. 
May 15 10:17:12.834591 kubelet[1416]: I0515 10:17:12.834553 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-hostproc\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.834800 kubelet[1416]: I0515 10:17:12.834785 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/00c25477-c0b5-4ead-b621-3c0df3344b52-clustermesh-secrets\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.834874 kubelet[1416]: I0515 10:17:12.834862 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/00c25477-c0b5-4ead-b621-3c0df3344b52-hubble-tls\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.834947 kubelet[1416]: I0515 10:17:12.834935 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l98vb\" (UniqueName: \"kubernetes.io/projected/00c25477-c0b5-4ead-b621-3c0df3344b52-kube-api-access-l98vb\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835022 kubelet[1416]: I0515 10:17:12.835007 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-cilium-run\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835090 kubelet[1416]: I0515 10:17:12.835078 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-cni-path\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835178 kubelet[1416]: I0515 10:17:12.835165 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-bpf-maps\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835262 kubelet[1416]: I0515 10:17:12.835248 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00c25477-c0b5-4ead-b621-3c0df3344b52-cilium-config-path\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835344 kubelet[1416]: I0515 10:17:12.835331 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-xtables-lock\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835423 kubelet[1416]: I0515 10:17:12.835411 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-host-proc-sys-net\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835517 kubelet[1416]: I0515 10:17:12.835505 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-cilium-cgroup\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835606 kubelet[1416]: I0515 10:17:12.835594 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-etc-cni-netd\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835677 kubelet[1416]: I0515 10:17:12.835664 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-lib-modules\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835737 kubelet[1416]: I0515 10:17:12.835726 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/00c25477-c0b5-4ead-b621-3c0df3344b52-cilium-ipsec-secrets\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835821 kubelet[1416]: I0515 10:17:12.835794 1416 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/00c25477-c0b5-4ead-b621-3c0df3344b52-host-proc-sys-kernel\") pod \"cilium-pgr5m\" (UID: \"00c25477-c0b5-4ead-b621-3c0df3344b52\") " pod="kube-system/cilium-pgr5m" May 15 10:17:12.835897 kubelet[1416]: I0515 10:17:12.835885 1416 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/62331e56-793a-41e0-8098-a70ebc513192-clustermesh-secrets\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:12.835956 kubelet[1416]: I0515 10:17:12.835947 1416 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/62331e56-793a-41e0-8098-a70ebc513192-hubble-tls\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:12.836020 kubelet[1416]: I0515 10:17:12.836005 1416 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/62331e56-793a-41e0-8098-a70ebc513192-cilium-ipsec-secrets\") on node \"10.0.0.71\" DevicePath \"\"" May 15 10:17:12.972980 kubelet[1416]: E0515 10:17:12.972876 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:12.973643 env[1217]: time="2025-05-15T10:17:12.973356951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgr5m,Uid:00c25477-c0b5-4ead-b621-3c0df3344b52,Namespace:kube-system,Attempt:0,}" May 15 10:17:12.986658 env[1217]: time="2025-05-15T10:17:12.986590812Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 10:17:12.986658 env[1217]: time="2025-05-15T10:17:12.986633732Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 10:17:12.986830 env[1217]: time="2025-05-15T10:17:12.986797612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 10:17:12.987069 env[1217]: time="2025-05-15T10:17:12.987017052Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee pid=3031 runtime=io.containerd.runc.v2 May 15 10:17:12.996267 systemd[1]: Started cri-containerd-9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee.scope. May 15 10:17:13.058420 env[1217]: time="2025-05-15T10:17:13.058373873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pgr5m,Uid:00c25477-c0b5-4ead-b621-3c0df3344b52,Namespace:kube-system,Attempt:0,} returns sandbox id \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\"" May 15 10:17:13.059867 kubelet[1416]: E0515 10:17:13.059011 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:13.060480 env[1217]: time="2025-05-15T10:17:13.060421750Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 15 10:17:13.073309 env[1217]: time="2025-05-15T10:17:13.073220092Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"fbf657ba5941113f8b24cb50eee5caac051e0cc09d5fd8b187475abb6276942a\"" May 15 10:17:13.073841 env[1217]: time="2025-05-15T10:17:13.073808291Z" level=info msg="StartContainer for \"fbf657ba5941113f8b24cb50eee5caac051e0cc09d5fd8b187475abb6276942a\"" May 15 10:17:13.086248 systemd[1]: Started cri-containerd-fbf657ba5941113f8b24cb50eee5caac051e0cc09d5fd8b187475abb6276942a.scope. May 15 10:17:13.118503 env[1217]: time="2025-05-15T10:17:13.117010192Z" level=info msg="StartContainer for \"fbf657ba5941113f8b24cb50eee5caac051e0cc09d5fd8b187475abb6276942a\" returns successfully" May 15 10:17:13.133679 systemd[1]: cri-containerd-fbf657ba5941113f8b24cb50eee5caac051e0cc09d5fd8b187475abb6276942a.scope: Deactivated successfully. 
May 15 10:17:13.156964 env[1217]: time="2025-05-15T10:17:13.156918778Z" level=info msg="shim disconnected" id=fbf657ba5941113f8b24cb50eee5caac051e0cc09d5fd8b187475abb6276942a May 15 10:17:13.156964 env[1217]: time="2025-05-15T10:17:13.156963458Z" level=warning msg="cleaning up after shim disconnected" id=fbf657ba5941113f8b24cb50eee5caac051e0cc09d5fd8b187475abb6276942a namespace=k8s.io May 15 10:17:13.156964 env[1217]: time="2025-05-15T10:17:13.156973697Z" level=info msg="cleaning up dead shim" May 15 10:17:13.163944 env[1217]: time="2025-05-15T10:17:13.163904448Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:17:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3116 runtime=io.containerd.runc.v2\n" May 15 10:17:13.361848 kubelet[1416]: E0515 10:17:13.361631 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:13.615121 kubelet[1416]: E0515 10:17:13.615012 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:13.620298 env[1217]: time="2025-05-15T10:17:13.620232583Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 15 10:17:13.630534 env[1217]: time="2025-05-15T10:17:13.630493889Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"09e48dc85a67fc891584631f48fda6c508e5b6730d1899593763378f954aae94\"" May 15 10:17:13.631553 env[1217]: time="2025-05-15T10:17:13.631513728Z" level=info msg="StartContainer for \"09e48dc85a67fc891584631f48fda6c508e5b6730d1899593763378f954aae94\"" May 15 10:17:13.655234 systemd[1]: Started cri-containerd-09e48dc85a67fc891584631f48fda6c508e5b6730d1899593763378f954aae94.scope. May 15 10:17:13.687967 env[1217]: time="2025-05-15T10:17:13.687924330Z" level=info msg="StartContainer for \"09e48dc85a67fc891584631f48fda6c508e5b6730d1899593763378f954aae94\" returns successfully" May 15 10:17:13.693851 systemd[1]: cri-containerd-09e48dc85a67fc891584631f48fda6c508e5b6730d1899593763378f954aae94.scope: Deactivated successfully. 
May 15 10:17:13.731036 env[1217]: time="2025-05-15T10:17:13.730831071Z" level=info msg="shim disconnected" id=09e48dc85a67fc891584631f48fda6c508e5b6730d1899593763378f954aae94 May 15 10:17:13.731036 env[1217]: time="2025-05-15T10:17:13.730876551Z" level=warning msg="cleaning up after shim disconnected" id=09e48dc85a67fc891584631f48fda6c508e5b6730d1899593763378f954aae94 namespace=k8s.io May 15 10:17:13.731036 env[1217]: time="2025-05-15T10:17:13.730885311Z" level=info msg="cleaning up dead shim" May 15 10:17:13.738825 env[1217]: time="2025-05-15T10:17:13.738787101Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:17:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3178 runtime=io.containerd.runc.v2\n" May 15 10:17:14.163975 env[1217]: time="2025-05-15T10:17:14.163900692Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:14.165797 env[1217]: time="2025-05-15T10:17:14.165755850Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:14.167441 env[1217]: time="2025-05-15T10:17:14.167399928Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 15 10:17:14.167925 env[1217]: time="2025-05-15T10:17:14.167899167Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 15 10:17:14.170522 env[1217]: time="2025-05-15T10:17:14.170489324Z" level=info msg="CreateContainer within sandbox \"fe8972ca4b81b4a915b74574017b66a2f8778d4f23f5080213e4bb0418237102\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 15 10:17:14.180860 env[1217]: time="2025-05-15T10:17:14.180808351Z" level=info msg="CreateContainer within sandbox \"fe8972ca4b81b4a915b74574017b66a2f8778d4f23f5080213e4bb0418237102\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"af55c17c01c9b2e0a1a85085f147b4417672c7b1dd307e9482fc038267418add\"" May 15 10:17:14.181529 env[1217]: time="2025-05-15T10:17:14.181500910Z" level=info msg="StartContainer for \"af55c17c01c9b2e0a1a85085f147b4417672c7b1dd307e9482fc038267418add\"" May 15 10:17:14.195704 systemd[1]: Started cri-containerd-af55c17c01c9b2e0a1a85085f147b4417672c7b1dd307e9482fc038267418add.scope. 
May 15 10:17:14.285797 env[1217]: time="2025-05-15T10:17:14.285099537Z" level=info msg="StartContainer for \"af55c17c01c9b2e0a1a85085f147b4417672c7b1dd307e9482fc038267418add\" returns successfully" May 15 10:17:14.362566 kubelet[1416]: E0515 10:17:14.362522 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:14.498807 kubelet[1416]: I0515 10:17:14.498703 1416 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62331e56-793a-41e0-8098-a70ebc513192" path="/var/lib/kubelet/pods/62331e56-793a-41e0-8098-a70ebc513192/volumes" May 15 10:17:14.538728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-09e48dc85a67fc891584631f48fda6c508e5b6730d1899593763378f954aae94-rootfs.mount: Deactivated successfully. May 15 10:17:14.618134 kubelet[1416]: E0515 10:17:14.618084 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:14.619882 kubelet[1416]: E0515 10:17:14.619846 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:14.621424 env[1217]: time="2025-05-15T10:17:14.621358665Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 15 10:17:14.627408 kubelet[1416]: I0515 10:17:14.627358 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-rvzjd" podStartSLOduration=1.144754554 podStartE2EDuration="3.627344297s" podCreationTimestamp="2025-05-15 10:17:11 +0000 UTC" firstStartedPulling="2025-05-15 10:17:11.686410903 +0000 UTC m=+56.163124352" lastFinishedPulling="2025-05-15 10:17:14.169000646 +0000 UTC m=+58.645714095" observedRunningTime="2025-05-15 10:17:14.627152777 +0000 UTC m=+59.103866186" watchObservedRunningTime="2025-05-15 10:17:14.627344297 +0000 UTC m=+59.104057746" May 15 10:17:14.637088 env[1217]: time="2025-05-15T10:17:14.637013365Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ed2b2b98308ae5d44ec0e58900b61bee04e20821f6ec2f516cd5d46bea71cba2\"" May 15 10:17:14.637725 env[1217]: time="2025-05-15T10:17:14.637698324Z" level=info msg="StartContainer for \"ed2b2b98308ae5d44ec0e58900b61bee04e20821f6ec2f516cd5d46bea71cba2\"" May 15 10:17:14.659480 systemd[1]: Started cri-containerd-ed2b2b98308ae5d44ec0e58900b61bee04e20821f6ec2f516cd5d46bea71cba2.scope. May 15 10:17:14.697247 env[1217]: time="2025-05-15T10:17:14.697197287Z" level=info msg="StartContainer for \"ed2b2b98308ae5d44ec0e58900b61bee04e20821f6ec2f516cd5d46bea71cba2\" returns successfully" May 15 10:17:14.698995 systemd[1]: cri-containerd-ed2b2b98308ae5d44ec0e58900b61bee04e20821f6ec2f516cd5d46bea71cba2.scope: Deactivated successfully. 
May 15 10:17:14.716727 env[1217]: time="2025-05-15T10:17:14.716685982Z" level=info msg="shim disconnected" id=ed2b2b98308ae5d44ec0e58900b61bee04e20821f6ec2f516cd5d46bea71cba2 May 15 10:17:14.716915 env[1217]: time="2025-05-15T10:17:14.716895662Z" level=warning msg="cleaning up after shim disconnected" id=ed2b2b98308ae5d44ec0e58900b61bee04e20821f6ec2f516cd5d46bea71cba2 namespace=k8s.io May 15 10:17:14.716974 env[1217]: time="2025-05-15T10:17:14.716961342Z" level=info msg="cleaning up dead shim" May 15 10:17:14.722854 env[1217]: time="2025-05-15T10:17:14.722822655Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:17:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3274 runtime=io.containerd.runc.v2\n" May 15 10:17:15.362946 kubelet[1416]: E0515 10:17:15.362897 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:15.538068 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ed2b2b98308ae5d44ec0e58900b61bee04e20821f6ec2f516cd5d46bea71cba2-rootfs.mount: Deactivated successfully. May 15 10:17:15.623958 kubelet[1416]: E0515 10:17:15.623817 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:15.624332 kubelet[1416]: E0515 10:17:15.624194 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:15.625886 env[1217]: time="2025-05-15T10:17:15.625849665Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 15 10:17:15.641724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4042794592.mount: Deactivated successfully. May 15 10:17:15.644146 env[1217]: time="2025-05-15T10:17:15.644107763Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5a68280b8d4d9c652aff1f06ff6c09ad5540f19cd3356ac6cd25012a75389fb0\"" May 15 10:17:15.644703 env[1217]: time="2025-05-15T10:17:15.644668122Z" level=info msg="StartContainer for \"5a68280b8d4d9c652aff1f06ff6c09ad5540f19cd3356ac6cd25012a75389fb0\"" May 15 10:17:15.660872 systemd[1]: Started cri-containerd-5a68280b8d4d9c652aff1f06ff6c09ad5540f19cd3356ac6cd25012a75389fb0.scope. May 15 10:17:15.692005 systemd[1]: cri-containerd-5a68280b8d4d9c652aff1f06ff6c09ad5540f19cd3356ac6cd25012a75389fb0.scope: Deactivated successfully. 
May 15 10:17:15.693269 env[1217]: time="2025-05-15T10:17:15.693191744Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00c25477_c0b5_4ead_b621_3c0df3344b52.slice/cri-containerd-5a68280b8d4d9c652aff1f06ff6c09ad5540f19cd3356ac6cd25012a75389fb0.scope/memory.events\": no such file or directory" May 15 10:17:15.694994 env[1217]: time="2025-05-15T10:17:15.694953422Z" level=info msg="StartContainer for \"5a68280b8d4d9c652aff1f06ff6c09ad5540f19cd3356ac6cd25012a75389fb0\" returns successfully" May 15 10:17:15.712362 env[1217]: time="2025-05-15T10:17:15.712309561Z" level=info msg="shim disconnected" id=5a68280b8d4d9c652aff1f06ff6c09ad5540f19cd3356ac6cd25012a75389fb0 May 15 10:17:15.712362 env[1217]: time="2025-05-15T10:17:15.712362041Z" level=warning msg="cleaning up after shim disconnected" id=5a68280b8d4d9c652aff1f06ff6c09ad5540f19cd3356ac6cd25012a75389fb0 namespace=k8s.io May 15 10:17:15.712601 env[1217]: time="2025-05-15T10:17:15.712373281Z" level=info msg="cleaning up dead shim" May 15 10:17:15.719252 env[1217]: time="2025-05-15T10:17:15.719185353Z" level=warning msg="cleanup warnings time=\"2025-05-15T10:17:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3329 runtime=io.containerd.runc.v2\n" May 15 10:17:16.318199 kubelet[1416]: E0515 10:17:16.318152 1416 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:16.340509 env[1217]: time="2025-05-15T10:17:16.340452830Z" level=info msg="StopPodSandbox for \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\"" May 15 10:17:16.340644 env[1217]: time="2025-05-15T10:17:16.340557830Z" level=info msg="TearDown network for sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" successfully" May 15 10:17:16.340644 env[1217]: time="2025-05-15T10:17:16.340591830Z" level=info msg="StopPodSandbox for \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" returns successfully" May 15 10:17:16.341169 env[1217]: time="2025-05-15T10:17:16.341141710Z" level=info msg="RemovePodSandbox for \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\"" May 15 10:17:16.341304 env[1217]: time="2025-05-15T10:17:16.341268149Z" level=info msg="Forcibly stopping sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\"" May 15 10:17:16.341424 env[1217]: time="2025-05-15T10:17:16.341406109Z" level=info msg="TearDown network for sandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" successfully" May 15 10:17:16.345228 env[1217]: time="2025-05-15T10:17:16.345189425Z" level=info msg="RemovePodSandbox \"bb858f8c3a48b0908365d68d34ce9a89bfcea6a611047dfcf35aea2d1e1d1160\" returns successfully" May 15 10:17:16.364065 kubelet[1416]: E0515 10:17:16.364030 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:16.453266 kubelet[1416]: E0515 10:17:16.453233 1416 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 10:17:16.538145 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a68280b8d4d9c652aff1f06ff6c09ad5540f19cd3356ac6cd25012a75389fb0-rootfs.mount: Deactivated successfully. 
May 15 10:17:16.627499 kubelet[1416]: E0515 10:17:16.627134 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:16.629304 env[1217]: time="2025-05-15T10:17:16.629257944Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 15 10:17:16.643491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3424396028.mount: Deactivated successfully. May 15 10:17:16.655133 env[1217]: time="2025-05-15T10:17:16.655064755Z" level=info msg="CreateContainer within sandbox \"9f2fc5503559672f75303b024d370487d7acf660683efba1e59e26e76b7e74ee\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b58f47e24fce15ad8b9091b4ca2a81edc376837ff858dd3c0ecb06de1891dc4d\"" May 15 10:17:16.655604 env[1217]: time="2025-05-15T10:17:16.655572635Z" level=info msg="StartContainer for \"b58f47e24fce15ad8b9091b4ca2a81edc376837ff858dd3c0ecb06de1891dc4d\"" May 15 10:17:16.669257 systemd[1]: Started cri-containerd-b58f47e24fce15ad8b9091b4ca2a81edc376837ff858dd3c0ecb06de1891dc4d.scope. May 15 10:17:16.713447 env[1217]: time="2025-05-15T10:17:16.713392129Z" level=info msg="StartContainer for \"b58f47e24fce15ad8b9091b4ca2a81edc376837ff858dd3c0ecb06de1891dc4d\" returns successfully" May 15 10:17:16.947477 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 15 10:17:17.364304 kubelet[1416]: E0515 10:17:17.364143 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:17.563266 kubelet[1416]: I0515 10:17:17.563204 1416 setters.go:602] "Node became not ready" node="10.0.0.71" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T10:17:17Z","lastTransitionTime":"2025-05-15T10:17:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 15 10:17:17.631359 kubelet[1416]: E0515 10:17:17.631268 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:17.646312 kubelet[1416]: I0515 10:17:17.646249 1416 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pgr5m" podStartSLOduration=5.646232882 podStartE2EDuration="5.646232882s" podCreationTimestamp="2025-05-15 10:17:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 10:17:17.645250203 +0000 UTC m=+62.121963652" watchObservedRunningTime="2025-05-15 10:17:17.646232882 +0000 UTC m=+62.122946371" May 15 10:17:18.364512 kubelet[1416]: E0515 10:17:18.364444 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:18.974255 kubelet[1416]: E0515 10:17:18.974189 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:19.365017 kubelet[1416]: E0515 10:17:19.364904 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 15 10:17:19.810599 systemd-networkd[1037]: lxc_health: Link UP May 15 10:17:19.824569 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 15 10:17:19.822022 systemd-networkd[1037]: lxc_health: Gained carrier May 15 10:17:19.965274 systemd[1]: run-containerd-runc-k8s.io-b58f47e24fce15ad8b9091b4ca2a81edc376837ff858dd3c0ecb06de1891dc4d-runc.HxBXZS.mount: Deactivated successfully. May 15 10:17:20.365857 kubelet[1416]: E0515 10:17:20.365803 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:20.975134 kubelet[1416]: E0515 10:17:20.975078 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:21.366708 kubelet[1416]: E0515 10:17:21.366592 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:21.643363 kubelet[1416]: E0515 10:17:21.643314 1416 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 10:17:21.688653 systemd-networkd[1037]: lxc_health: Gained IPv6LL May 15 10:17:22.110062 systemd[1]: run-containerd-runc-k8s.io-b58f47e24fce15ad8b9091b4ca2a81edc376837ff858dd3c0ecb06de1891dc4d-runc.qaoHJW.mount: Deactivated successfully. May 15 10:17:22.367319 kubelet[1416]: E0515 10:17:22.366996 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:23.367971 kubelet[1416]: E0515 10:17:23.367922 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:24.368892 kubelet[1416]: E0515 10:17:24.368830 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:25.369755 kubelet[1416]: E0515 10:17:25.369708 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:26.369827 kubelet[1416]: E0515 10:17:26.369792 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:27.370593 kubelet[1416]: E0515 10:17:27.370547 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 15 10:17:28.370946 kubelet[1416]: E0515 10:17:28.370877 1416 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"