May 14 00:37:12.714667 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 14 00:37:12.714687 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Tue May 13 23:17:31 -00 2025 May 14 00:37:12.714695 kernel: efi: EFI v2.70 by EDK II May 14 00:37:12.714701 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 14 00:37:12.714706 kernel: random: crng init done May 14 00:37:12.714711 kernel: ACPI: Early table checksum verification disabled May 14 00:37:12.714717 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 14 00:37:12.714724 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 14 00:37:12.714730 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:37:12.714735 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:37:12.714741 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:37:12.714746 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:37:12.714751 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:37:12.714757 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:37:12.714765 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:37:12.714770 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:37:12.714776 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 14 00:37:12.714782 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 14 00:37:12.714788 kernel: NUMA: Failed to initialise from firmware May 14 00:37:12.714794 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:37:12.714800 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff] May 14 00:37:12.714805 kernel: Zone ranges: May 14 00:37:12.714811 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:37:12.714817 kernel: DMA32 empty May 14 00:37:12.714823 kernel: Normal empty May 14 00:37:12.714829 kernel: Movable zone start for each node May 14 00:37:12.714834 kernel: Early memory node ranges May 14 00:37:12.714840 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 14 00:37:12.714846 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 14 00:37:12.714852 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 14 00:37:12.714857 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 14 00:37:12.714863 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 14 00:37:12.714869 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 14 00:37:12.714875 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 14 00:37:12.714881 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 14 00:37:12.714888 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 14 00:37:12.714893 kernel: psci: probing for conduit method from ACPI. May 14 00:37:12.714899 kernel: psci: PSCIv1.1 detected in firmware. 
May 14 00:37:12.714904 kernel: psci: Using standard PSCI v0.2 function IDs May 14 00:37:12.714910 kernel: psci: Trusted OS migration not required May 14 00:37:12.714918 kernel: psci: SMC Calling Convention v1.1 May 14 00:37:12.714925 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 14 00:37:12.714932 kernel: ACPI: SRAT not present May 14 00:37:12.714938 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 14 00:37:12.714944 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 14 00:37:12.714951 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 14 00:37:12.714957 kernel: Detected PIPT I-cache on CPU0 May 14 00:37:12.714963 kernel: CPU features: detected: GIC system register CPU interface May 14 00:37:12.714969 kernel: CPU features: detected: Hardware dirty bit management May 14 00:37:12.714975 kernel: CPU features: detected: Spectre-v4 May 14 00:37:12.714981 kernel: CPU features: detected: Spectre-BHB May 14 00:37:12.714988 kernel: CPU features: kernel page table isolation forced ON by KASLR May 14 00:37:12.714994 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 14 00:37:12.715000 kernel: CPU features: detected: ARM erratum 1418040 May 14 00:37:12.715006 kernel: CPU features: detected: SSBS not fully self-synchronizing May 14 00:37:12.715013 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 14 00:37:12.715019 kernel: Policy zone: DMA May 14 00:37:12.715026 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:37:12.715032 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 14 00:37:12.715038 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 14 00:37:12.715049 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 14 00:37:12.715055 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 14 00:37:12.715062 kernel: Memory: 2457336K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114952K reserved, 0K cma-reserved) May 14 00:37:12.715069 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 14 00:37:12.715075 kernel: trace event string verifier disabled May 14 00:37:12.715081 kernel: rcu: Preemptible hierarchical RCU implementation. May 14 00:37:12.715087 kernel: rcu: RCU event tracing is enabled. May 14 00:37:12.715094 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 14 00:37:12.715100 kernel: Trampoline variant of Tasks RCU enabled. May 14 00:37:12.715106 kernel: Tracing variant of Tasks RCU enabled. May 14 00:37:12.715120 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 14 00:37:12.715127 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 14 00:37:12.715133 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 14 00:37:12.715141 kernel: GICv3: 256 SPIs implemented May 14 00:37:12.715147 kernel: GICv3: 0 Extended SPIs implemented May 14 00:37:12.715170 kernel: GICv3: Distributor has no Range Selector support May 14 00:37:12.715177 kernel: Root IRQ handler: gic_handle_irq May 14 00:37:12.715183 kernel: GICv3: 16 PPIs implemented May 14 00:37:12.715189 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 14 00:37:12.715195 kernel: ACPI: SRAT not present May 14 00:37:12.715201 kernel: ITS [mem 0x08080000-0x0809ffff] May 14 00:37:12.715207 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 14 00:37:12.715213 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 14 00:37:12.715220 kernel: GICv3: using LPI property table @0x00000000400d0000 May 14 00:37:12.715226 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 14 00:37:12.715233 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:37:12.715240 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 14 00:37:12.715246 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 14 00:37:12.715252 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 14 00:37:12.715258 kernel: arm-pv: using stolen time PV May 14 00:37:12.715265 kernel: Console: colour dummy device 80x25 May 14 00:37:12.715271 kernel: ACPI: Core revision 20210730 May 14 00:37:12.715278 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 14 00:37:12.715284 kernel: pid_max: default: 32768 minimum: 301 May 14 00:37:12.715290 kernel: LSM: Security Framework initializing May 14 00:37:12.715297 kernel: SELinux: Initializing. May 14 00:37:12.715304 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:37:12.715311 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 14 00:37:12.715317 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 14 00:37:12.715324 kernel: rcu: Hierarchical SRCU implementation. May 14 00:37:12.715330 kernel: Platform MSI: ITS@0x8080000 domain created May 14 00:37:12.715336 kernel: PCI/MSI: ITS@0x8080000 domain created May 14 00:37:12.715342 kernel: Remapping and enabling EFI services. May 14 00:37:12.715348 kernel: smp: Bringing up secondary CPUs ... 
May 14 00:37:12.715356 kernel: Detected PIPT I-cache on CPU1 May 14 00:37:12.715362 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 14 00:37:12.715369 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 14 00:37:12.715375 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:37:12.715381 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 14 00:37:12.715387 kernel: Detected PIPT I-cache on CPU2 May 14 00:37:12.715394 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 14 00:37:12.715400 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 14 00:37:12.715406 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:37:12.715412 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 14 00:37:12.715420 kernel: Detected PIPT I-cache on CPU3 May 14 00:37:12.715426 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 14 00:37:12.715433 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 14 00:37:12.715439 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 14 00:37:12.715450 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 14 00:37:12.715457 kernel: smp: Brought up 1 node, 4 CPUs May 14 00:37:12.715464 kernel: SMP: Total of 4 processors activated. May 14 00:37:12.715470 kernel: CPU features: detected: 32-bit EL0 Support May 14 00:37:12.715477 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 14 00:37:12.715484 kernel: CPU features: detected: Common not Private translations May 14 00:37:12.715490 kernel: CPU features: detected: CRC32 instructions May 14 00:37:12.715497 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 14 00:37:12.715505 kernel: CPU features: detected: LSE atomic instructions May 14 00:37:12.715512 kernel: CPU features: detected: Privileged Access Never May 14 00:37:12.715518 kernel: CPU features: detected: RAS Extension Support May 14 00:37:12.715525 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 14 00:37:12.715531 kernel: CPU: All CPU(s) started at EL1 May 14 00:37:12.715539 kernel: alternatives: patching kernel code May 14 00:37:12.715545 kernel: devtmpfs: initialized May 14 00:37:12.715552 kernel: KASLR enabled May 14 00:37:12.715559 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 14 00:37:12.715565 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 14 00:37:12.715572 kernel: pinctrl core: initialized pinctrl subsystem May 14 00:37:12.715579 kernel: SMBIOS 3.0.0 present. 
May 14 00:37:12.715585 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 14 00:37:12.715592 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 14 00:37:12.715600 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 14 00:37:12.715607 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 14 00:37:12.715613 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 14 00:37:12.715620 kernel: audit: initializing netlink subsys (disabled) May 14 00:37:12.715627 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 May 14 00:37:12.715633 kernel: thermal_sys: Registered thermal governor 'step_wise' May 14 00:37:12.715640 kernel: cpuidle: using governor menu May 14 00:37:12.715646 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 14 00:37:12.715653 kernel: ASID allocator initialised with 32768 entries May 14 00:37:12.715660 kernel: ACPI: bus type PCI registered May 14 00:37:12.715667 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 14 00:37:12.715674 kernel: Serial: AMBA PL011 UART driver May 14 00:37:12.715681 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 14 00:37:12.715687 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 14 00:37:12.715694 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 14 00:37:12.715700 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 14 00:37:12.715707 kernel: cryptd: max_cpu_qlen set to 1000 May 14 00:37:12.715714 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 14 00:37:12.715721 kernel: ACPI: Added _OSI(Module Device) May 14 00:37:12.715728 kernel: ACPI: Added _OSI(Processor Device) May 14 00:37:12.715735 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 14 00:37:12.715741 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 14 00:37:12.715748 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 14 00:37:12.715755 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 14 00:37:12.715761 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 14 00:37:12.715768 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 14 00:37:12.715775 kernel: ACPI: Interpreter enabled May 14 00:37:12.715782 kernel: ACPI: Using GIC for interrupt routing May 14 00:37:12.715789 kernel: ACPI: MCFG table detected, 1 entries May 14 00:37:12.715795 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 14 00:37:12.715802 kernel: printk: console [ttyAMA0] enabled May 14 00:37:12.715809 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 14 00:37:12.715932 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 14 00:37:12.715997 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 14 00:37:12.716059 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 14 00:37:12.716126 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 14 00:37:12.716207 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 14 00:37:12.716217 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 14 00:37:12.716224 kernel: PCI host bridge to bus 0000:00 May 14 00:37:12.716369 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 00:37:12.716430 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] May 14 00:37:12.716483 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 14 00:37:12.716540 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 14 00:37:12.716613 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 14 00:37:12.716689 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 14 00:37:12.716752 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 14 00:37:12.716813 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 14 00:37:12.716873 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:37:12.716934 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 14 00:37:12.716994 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 14 00:37:12.717056 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 14 00:37:12.717117 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 14 00:37:12.717183 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 14 00:37:12.717238 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 14 00:37:12.717247 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 14 00:37:12.717254 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 14 00:37:12.717262 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 14 00:37:12.717269 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 14 00:37:12.717276 kernel: iommu: Default domain type: Translated May 14 00:37:12.717282 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 14 00:37:12.717289 kernel: vgaarb: loaded May 14 00:37:12.717295 kernel: pps_core: LinuxPPS API ver. 1 registered
May 14 00:37:12.717302 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 14 00:37:12.717309 kernel: PTP clock support registered May 14 00:37:12.717316 kernel: Registered efivars operations May 14 00:37:12.717324 kernel: clocksource: Switched to clocksource arch_sys_counter May 14 00:37:12.717331 kernel: VFS: Disk quotas dquot_6.6.0 May 14 00:37:12.717337 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 14 00:37:12.717344 kernel: pnp: PnP ACPI init May 14 00:37:12.717409 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 14 00:37:12.717419 kernel: pnp: PnP ACPI: found 1 devices May 14 00:37:12.717425 kernel: NET: Registered PF_INET protocol family May 14 00:37:12.717432 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 14 00:37:12.717441 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 14 00:37:12.717447 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 14 00:37:12.717454 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 14 00:37:12.717461 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 14 00:37:12.717468 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 14 00:37:12.717474 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:37:12.717481 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 14 00:37:12.717488 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 14 00:37:12.717494 kernel: PCI: CLS 0 bytes, default 64 May 14 00:37:12.717503 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 14 00:37:12.717509 kernel: kvm [1]: HYP mode not available May 14 00:37:12.717516 kernel: Initialise system trusted keyrings May 14 00:37:12.717523 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 14 00:37:12.717529 kernel: Key type asymmetric registered May 14 00:37:12.717536 kernel: Asymmetric key parser 'x509' registered May 14 00:37:12.717542 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 14 00:37:12.717549 kernel: io scheduler mq-deadline registered May 14 00:37:12.717556 kernel: io scheduler kyber registered May 14 00:37:12.717564 kernel: io scheduler bfq registered May 14 00:37:12.717570 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 14 00:37:12.719422 kernel: ACPI: button: Power Button [PWRB] May 14 00:37:12.719434 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 14 00:37:12.719521 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 14 00:37:12.719532 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 14 00:37:12.719545 kernel: thunder_xcv, ver 1.0 May 14 00:37:12.719551 kernel: thunder_bgx, ver 1.0 May 14 00:37:12.719558 kernel: nicpf, ver 1.0 May 14 00:37:12.719568 kernel: nicvf, ver 1.0 May 14 00:37:12.719643 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 14 00:37:12.719702 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T00:37:12 UTC (1747183032) May 14 00:37:12.719712 kernel: hid: raw HID events driver (C) Jiri Kosina May 14 00:37:12.719718 kernel: NET: Registered PF_INET6 protocol family May 14 00:37:12.719725 kernel: Segment Routing with IPv6 May 14 00:37:12.719732 kernel: In-situ OAM (IOAM) with IPv6 May 14 00:37:12.719738 kernel: NET: Registered PF_PACKET protocol family
May 14 00:37:12.719747 kernel: Key type dns_resolver registered May 14 00:37:12.719753 kernel: registered taskstats version 1 May 14 00:37:12.719760 kernel: Loading compiled-in X.509 certificates May 14 00:37:12.719767 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 7727f4e7680a5b8534f3d5e7bb84b1f695e8c34b' May 14 00:37:12.719774 kernel: Key type .fscrypt registered May 14 00:37:12.719780 kernel: Key type fscrypt-provisioning registered May 14 00:37:12.719787 kernel: ima: No TPM chip found, activating TPM-bypass! May 14 00:37:12.719794 kernel: ima: Allocated hash algorithm: sha1 May 14 00:37:12.719800 kernel: ima: No architecture policies found May 14 00:37:12.719808 kernel: clk: Disabling unused clocks May 14 00:37:12.719815 kernel: Freeing unused kernel memory: 36480K May 14 00:37:12.719821 kernel: Run /init as init process May 14 00:37:12.719828 kernel: with arguments: May 14 00:37:12.719834 kernel: /init May 14 00:37:12.719840 kernel: with environment: May 14 00:37:12.719847 kernel: HOME=/ May 14 00:37:12.719853 kernel: TERM=linux May 14 00:37:12.719860 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 14 00:37:12.719869 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:37:12.719878 systemd[1]: Detected virtualization kvm. May 14 00:37:12.719885 systemd[1]: Detected architecture arm64. May 14 00:37:12.719892 systemd[1]: Running in initrd. May 14 00:37:12.719899 systemd[1]: No hostname configured, using default hostname. May 14 00:37:12.719906 systemd[1]: Hostname set to . May 14 00:37:12.719913 systemd[1]: Initializing machine ID from VM UUID. May 14 00:37:12.719921 systemd[1]: Queued start job for default target initrd.target. May 14 00:37:12.719928 systemd[1]: Started systemd-ask-password-console.path. May 14 00:37:12.719935 systemd[1]: Reached target cryptsetup.target. May 14 00:37:12.719942 systemd[1]: Reached target paths.target. May 14 00:37:12.719949 systemd[1]: Reached target slices.target. May 14 00:37:12.719956 systemd[1]: Reached target swap.target. May 14 00:37:12.719962 systemd[1]: Reached target timers.target. May 14 00:37:12.719970 systemd[1]: Listening on iscsid.socket. May 14 00:37:12.719981 systemd[1]: Listening on iscsiuio.socket. May 14 00:37:12.719988 systemd[1]: Listening on systemd-journald-audit.socket. May 14 00:37:12.719995 systemd[1]: Listening on systemd-journald-dev-log.socket. May 14 00:37:12.720002 systemd[1]: Listening on systemd-journald.socket. May 14 00:37:12.720009 systemd[1]: Listening on systemd-networkd.socket. May 14 00:37:12.720017 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:37:12.720024 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:37:12.720031 systemd[1]: Reached target sockets.target. May 14 00:37:12.720039 systemd[1]: Starting kmod-static-nodes.service... May 14 00:37:12.720046 systemd[1]: Finished network-cleanup.service. May 14 00:37:12.720053 systemd[1]: Starting systemd-fsck-usr.service... May 14 00:37:12.720059 systemd[1]: Starting systemd-journald.service... May 14 00:37:12.720066 systemd[1]: Starting systemd-modules-load.service... May 14 00:37:12.720073 systemd[1]: Starting systemd-resolved.service... May 14 00:37:12.720080 systemd[1]: Starting systemd-vconsole-setup.service...
May 14 00:37:12.720087 systemd[1]: Finished kmod-static-nodes.service. May 14 00:37:12.720094 systemd[1]: Finished systemd-fsck-usr.service. May 14 00:37:12.720103 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 14 00:37:12.720116 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 14 00:37:12.720124 kernel: audit: type=1130 audit(1747183032.717:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.720131 systemd[1]: Finished systemd-vconsole-setup.service. May 14 00:37:12.720141 systemd-journald[289]: Journal started May 14 00:37:12.720214 systemd-journald[289]: Runtime Journal (/run/log/journal/6f492c24a6724cd5b42006a7fede9122) is 6.0M, max 48.7M, 42.6M free. May 14 00:37:12.717000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.710641 systemd-modules-load[290]: Inserted module 'overlay' May 14 00:37:12.723607 systemd[1]: Started systemd-journald.service. May 14 00:37:12.723624 kernel: audit: type=1130 audit(1747183032.720:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.724898 systemd[1]: Starting dracut-cmdline-ask.service... May 14 00:37:12.727446 kernel: audit: type=1130 audit(1747183032.724:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.734187 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 14 00:37:12.738910 systemd-resolved[291]: Positive Trust Anchors: May 14 00:37:12.738925 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:37:12.738953 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:37:12.744811 kernel: Bridge firewalling registered May 14 00:37:12.741020 systemd-modules-load[290]: Inserted module 'br_netfilter' May 14 00:37:12.743466 systemd-resolved[291]: Defaulting to hostname 'linux'. May 14 00:37:12.745266 systemd[1]: Started systemd-resolved.service. 
May 14 00:37:12.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.746592 systemd[1]: Reached target nss-lookup.target. May 14 00:37:12.750453 kernel: audit: type=1130 audit(1747183032.746:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.751714 systemd[1]: Finished dracut-cmdline-ask.service. May 14 00:37:12.753358 systemd[1]: Starting dracut-cmdline.service... May 14 00:37:12.756675 kernel: SCSI subsystem initialized May 14 00:37:12.756691 kernel: audit: type=1130 audit(1747183032.752:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.752000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.761487 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 14 00:37:12.761523 kernel: device-mapper: uevent: version 1.0.3 May 14 00:37:12.761763 dracut-cmdline[307]: dracut-dracut-053 May 14 00:37:12.762787 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 14 00:37:12.764045 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=412b3b42de04d7d5abb18ecf506be3ad2c72d6425f1b2391aa97d359e8bd9923 May 14 00:37:12.767464 systemd-modules-load[290]: Inserted module 'dm_multipath' May 14 00:37:12.768588 systemd[1]: Finished systemd-modules-load.service. May 14 00:37:12.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.770476 systemd[1]: Starting systemd-sysctl.service... May 14 00:37:12.776182 kernel: audit: type=1130 audit(1747183032.769:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.778429 systemd[1]: Finished systemd-sysctl.service. May 14 00:37:12.778000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.782177 kernel: audit: type=1130 audit(1747183032.778:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.825172 kernel: Loading iSCSI transport class v2.0-870. May 14 00:37:12.837188 kernel: iscsi: registered transport (tcp) May 14 00:37:12.852180 kernel: iscsi: registered transport (qla4xxx) May 14 00:37:12.852211 kernel: QLogic iSCSI HBA Driver May 14 00:37:12.885311 systemd[1]: Finished dracut-cmdline.service. 
May 14 00:37:12.885000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.888171 kernel: audit: type=1130 audit(1747183032.885:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:12.886734 systemd[1]: Starting dracut-pre-udev.service... May 14 00:37:12.932171 kernel: raid6: neonx8 gen() 13760 MB/s May 14 00:37:12.949165 kernel: raid6: neonx8 xor() 10757 MB/s May 14 00:37:12.966163 kernel: raid6: neonx4 gen() 13464 MB/s May 14 00:37:12.983170 kernel: raid6: neonx4 xor() 11195 MB/s May 14 00:37:13.000168 kernel: raid6: neonx2 gen() 12941 MB/s May 14 00:37:13.017164 kernel: raid6: neonx2 xor() 10349 MB/s May 14 00:37:13.034168 kernel: raid6: neonx1 gen() 10602 MB/s May 14 00:37:13.051170 kernel: raid6: neonx1 xor() 8772 MB/s May 14 00:37:13.068170 kernel: raid6: int64x8 gen() 6225 MB/s May 14 00:37:13.085162 kernel: raid6: int64x8 xor() 3528 MB/s May 14 00:37:13.102167 kernel: raid6: int64x4 gen() 7167 MB/s May 14 00:37:13.119178 kernel: raid6: int64x4 xor() 3839 MB/s May 14 00:37:13.136175 kernel: raid6: int64x2 gen() 6133 MB/s May 14 00:37:13.153168 kernel: raid6: int64x2 xor() 3306 MB/s May 14 00:37:13.170187 kernel: raid6: int64x1 gen() 5036 MB/s May 14 00:37:13.187492 kernel: raid6: int64x1 xor() 2639 MB/s May 14 00:37:13.187516 kernel: raid6: using algorithm neonx8 gen() 13760 MB/s May 14 00:37:13.187525 kernel: raid6: .... xor() 10757 MB/s, rmw enabled May 14 00:37:13.187534 kernel: raid6: using neon recovery algorithm May 14 00:37:13.198176 kernel: xor: measuring software checksum speed May 14 00:37:13.198209 kernel: 8regs : 17206 MB/sec May 14 00:37:13.199572 kernel: 32regs : 18916 MB/sec May 14 00:37:13.199593 kernel: arm64_neon : 27889 MB/sec May 14 00:37:13.199609 kernel: xor: using function: arm64_neon (27889 MB/sec) May 14 00:37:13.253177 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 14 00:37:13.263769 systemd[1]: Finished dracut-pre-udev.service. May 14 00:37:13.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:13.266000 audit: BPF prog-id=7 op=LOAD May 14 00:37:13.266000 audit: BPF prog-id=8 op=LOAD May 14 00:37:13.267046 systemd[1]: Starting systemd-udevd.service... May 14 00:37:13.268315 kernel: audit: type=1130 audit(1747183033.264:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:13.279584 systemd-udevd[491]: Using default interface naming scheme 'v252'. May 14 00:37:13.283036 systemd[1]: Started systemd-udevd.service. May 14 00:37:13.283000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:13.284836 systemd[1]: Starting dracut-pre-trigger.service... May 14 00:37:13.295367 dracut-pre-trigger[498]: rd.md=0: removing MD RAID activation May 14 00:37:13.322296 systemd[1]: Finished dracut-pre-trigger.service. 
May 14 00:37:13.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:13.323652 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:37:13.356825 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:37:13.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:13.383772 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 14 00:37:13.389497 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 14 00:37:13.389513 kernel: GPT:9289727 != 19775487 May 14 00:37:13.389522 kernel: GPT:Alternate GPT header not at the end of the disk. May 14 00:37:13.389531 kernel: GPT:9289727 != 19775487 May 14 00:37:13.389539 kernel: GPT: Use GNU Parted to correct GPT errors. May 14 00:37:13.389554 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:37:13.403299 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 14 00:37:13.404140 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 14 00:37:13.408132 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 14 00:37:13.411186 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (539) May 14 00:37:13.411308 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 14 00:37:13.414717 systemd[1]: Starting disk-uuid.service... May 14 00:37:13.418983 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:37:13.421055 disk-uuid[562]: Primary Header is updated. May 14 00:37:13.421055 disk-uuid[562]: Secondary Entries is updated. May 14 00:37:13.421055 disk-uuid[562]: Secondary Header is updated. May 14 00:37:13.424183 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:37:13.434175 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:37:14.437181 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 14 00:37:14.437230 disk-uuid[563]: The operation has completed successfully. May 14 00:37:14.462088 systemd[1]: disk-uuid.service: Deactivated successfully. May 14 00:37:14.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.462204 systemd[1]: Finished disk-uuid.service. May 14 00:37:14.465661 systemd[1]: Starting verity-setup.service... May 14 00:37:14.482176 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 14 00:37:14.504390 systemd[1]: Found device dev-mapper-usr.device. May 14 00:37:14.506475 systemd[1]: Mounting sysusr-usr.mount... May 14 00:37:14.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.508515 systemd[1]: Finished verity-setup.service. May 14 00:37:14.556552 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
May 14 00:37:14.555973 systemd[1]: Mounted sysusr-usr.mount. May 14 00:37:14.557244 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 14 00:37:14.559064 systemd[1]: Starting ignition-setup.service... May 14 00:37:14.560997 systemd[1]: Starting parse-ip-for-networkd.service... May 14 00:37:14.567528 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:37:14.567661 kernel: BTRFS info (device vda6): using free space tree May 14 00:37:14.567741 kernel: BTRFS info (device vda6): has skinny extents May 14 00:37:14.577633 systemd[1]: mnt-oem.mount: Deactivated successfully. May 14 00:37:14.584721 systemd[1]: Finished ignition-setup.service. May 14 00:37:14.585000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.586231 systemd[1]: Starting ignition-fetch-offline.service... May 14 00:37:14.642893 systemd[1]: Finished parse-ip-for-networkd.service. May 14 00:37:14.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.644000 audit: BPF prog-id=9 op=LOAD May 14 00:37:14.644887 systemd[1]: Starting systemd-networkd.service... May 14 00:37:14.660177 ignition[650]: Ignition 2.14.0 May 14 00:37:14.660189 ignition[650]: Stage: fetch-offline May 14 00:37:14.660230 ignition[650]: no configs at "/usr/lib/ignition/base.d" May 14 00:37:14.660240 ignition[650]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:37:14.660402 ignition[650]: parsed url from cmdline: "" May 14 00:37:14.660405 ignition[650]: no config URL provided May 14 00:37:14.660410 ignition[650]: reading system config file "/usr/lib/ignition/user.ign" May 14 00:37:14.660417 ignition[650]: no config at "/usr/lib/ignition/user.ign" May 14 00:37:14.660437 ignition[650]: op(1): [started] loading QEMU firmware config module May 14 00:37:14.660442 ignition[650]: op(1): executing: "modprobe" "qemu_fw_cfg" May 14 00:37:14.665261 ignition[650]: op(1): [finished] loading QEMU firmware config module May 14 00:37:14.669608 systemd-networkd[740]: lo: Link UP May 14 00:37:14.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.669618 systemd-networkd[740]: lo: Gained carrier May 14 00:37:14.669972 systemd-networkd[740]: Enumeration completed May 14 00:37:14.670177 systemd-networkd[740]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:37:14.674390 ignition[650]: parsing config with SHA512: 7bfcf0167eebe85a7e908f94b098f7dea2aab8deaaac3d94f6a27fe064691416b4bc4b0d8239d45a61b6e7433e1dce9e25470b88f1c964e50fe0c96894689903 May 14 00:37:14.670495 systemd[1]: Started systemd-networkd.service. May 14 00:37:14.671302 systemd-networkd[740]: eth0: Link UP May 14 00:37:14.671306 systemd-networkd[740]: eth0: Gained carrier May 14 00:37:14.671822 systemd[1]: Reached target network.target. May 14 00:37:14.673241 systemd[1]: Starting iscsiuio.service... 
May 14 00:37:14.681787 unknown[650]: fetched base config from "system" May 14 00:37:14.681804 unknown[650]: fetched user config from "qemu" May 14 00:37:14.683000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.682199 ignition[650]: fetch-offline: fetch-offline passed May 14 00:37:14.682807 systemd[1]: Started iscsiuio.service. May 14 00:37:14.682269 ignition[650]: Ignition finished successfully May 14 00:37:14.684837 systemd[1]: Starting iscsid.service... May 14 00:37:14.686006 systemd[1]: Finished ignition-fetch-offline.service. May 14 00:37:14.687172 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 14 00:37:14.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.689916 iscsid[747]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 14 00:37:14.689916 iscsid[747]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 14 00:37:14.689916 iscsid[747]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 14 00:37:14.689916 iscsid[747]: If using hardware iscsi like qla4xxx this message can be ignored. May 14 00:37:14.689916 iscsid[747]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 14 00:37:14.689916 iscsid[747]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 14 00:37:14.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.687856 systemd[1]: Starting ignition-kargs.service... May 14 00:37:14.697276 ignition[748]: Ignition 2.14.0 May 14 00:37:14.691257 systemd-networkd[740]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:37:14.697282 ignition[748]: Stage: kargs May 14 00:37:14.691268 systemd[1]: Started iscsid.service. May 14 00:37:14.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.697373 ignition[748]: no configs at "/usr/lib/ignition/base.d" May 14 00:37:14.695926 systemd[1]: Starting dracut-initqueue.service... May 14 00:37:14.697383 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:37:14.701896 systemd[1]: Finished ignition-kargs.service. May 14 00:37:14.697983 ignition[748]: kargs: kargs passed May 14 00:37:14.703648 systemd[1]: Starting ignition-disks.service... May 14 00:37:14.698022 ignition[748]: Ignition finished successfully May 14 00:37:14.709422 systemd[1]: Finished dracut-initqueue.service. 
May 14 00:37:14.709000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.710209 systemd[1]: Reached target remote-fs-pre.target. May 14 00:37:14.711715 ignition[760]: Ignition 2.14.0 May 14 00:37:14.711724 ignition[760]: Stage: disks May 14 00:37:14.711814 ignition[760]: no configs at "/usr/lib/ignition/base.d" May 14 00:37:14.711823 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:37:14.713543 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:37:14.712645 ignition[760]: disks: disks passed May 14 00:37:14.714773 systemd[1]: Reached target remote-fs.target. May 14 00:37:14.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.712685 ignition[760]: Ignition finished successfully May 14 00:37:14.716900 systemd[1]: Starting dracut-pre-mount.service... May 14 00:37:14.717643 systemd[1]: Finished ignition-disks.service. May 14 00:37:14.718734 systemd[1]: Reached target initrd-root-device.target. May 14 00:37:14.719853 systemd[1]: Reached target local-fs-pre.target. May 14 00:37:14.720993 systemd[1]: Reached target local-fs.target. May 14 00:37:14.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.722220 systemd[1]: Reached target sysinit.target. May 14 00:37:14.723329 systemd[1]: Reached target basic.target. May 14 00:37:14.724834 systemd[1]: Finished dracut-pre-mount.service. May 14 00:37:14.726580 systemd[1]: Starting systemd-fsck-root.service... May 14 00:37:14.736630 systemd-fsck[775]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 14 00:37:14.740223 systemd[1]: Finished systemd-fsck-root.service. May 14 00:37:14.740000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.741899 systemd[1]: Mounting sysroot.mount... May 14 00:37:14.750183 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 14 00:37:14.750689 systemd[1]: Mounted sysroot.mount. May 14 00:37:14.751327 systemd[1]: Reached target initrd-root-fs.target. May 14 00:37:14.753865 systemd[1]: Mounting sysroot-usr.mount... May 14 00:37:14.754595 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 14 00:37:14.754632 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 14 00:37:14.754657 systemd[1]: Reached target ignition-diskful.target. May 14 00:37:14.756412 systemd[1]: Mounted sysroot-usr.mount. May 14 00:37:14.757967 systemd[1]: Starting initrd-setup-root.service... 
May 14 00:37:14.761971 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory May 14 00:37:14.765634 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory May 14 00:37:14.768729 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory May 14 00:37:14.772665 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory May 14 00:37:14.797601 systemd[1]: Finished initrd-setup-root.service. May 14 00:37:14.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.798989 systemd[1]: Starting ignition-mount.service... May 14 00:37:14.800183 systemd[1]: Starting sysroot-boot.service... May 14 00:37:14.804831 bash[826]: umount: /sysroot/usr/share/oem: not mounted. May 14 00:37:14.813374 ignition[828]: INFO : Ignition 2.14.0 May 14 00:37:14.814374 ignition[828]: INFO : Stage: mount May 14 00:37:14.814374 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:37:14.814374 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:37:14.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.815474 systemd[1]: Finished sysroot-boot.service. May 14 00:37:14.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:14.818862 ignition[828]: INFO : mount: mount passed May 14 00:37:14.818862 ignition[828]: INFO : Ignition finished successfully May 14 00:37:14.816747 systemd[1]: Finished ignition-mount.service. May 14 00:37:15.516941 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 14 00:37:15.523539 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) May 14 00:37:15.523577 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 14 00:37:15.523588 kernel: BTRFS info (device vda6): using free space tree May 14 00:37:15.524494 kernel: BTRFS info (device vda6): has skinny extents May 14 00:37:15.527274 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 14 00:37:15.528676 systemd[1]: Starting ignition-files.service... 
May 14 00:37:15.542283 ignition[856]: INFO : Ignition 2.14.0 May 14 00:37:15.542283 ignition[856]: INFO : Stage: files May 14 00:37:15.543964 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:37:15.543964 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:37:15.543964 ignition[856]: DEBUG : files: compiled without relabeling support, skipping May 14 00:37:15.549249 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 14 00:37:15.549249 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 14 00:37:15.553047 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 14 00:37:15.554394 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 14 00:37:15.555878 unknown[856]: wrote ssh authorized keys file for user: core May 14 00:37:15.557072 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 14 00:37:15.557072 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 14 00:37:15.557072 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 14 00:37:15.557072 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:37:15.557072 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 14 00:37:15.557072 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:37:15.567677 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:37:15.567677 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:37:15.567677 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-arm64.raw: attempt #1 May 14 00:37:15.906597 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 14 00:37:16.265911 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw" May 14 00:37:16.265911 ignition[856]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 14 00:37:16.269366 ignition[856]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:37:16.269366 ignition[856]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 14 00:37:16.269366 ignition[856]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 14 00:37:16.269366 ignition[856]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
May 14 00:37:16.269366 ignition[856]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:37:16.306636 ignition[856]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 14 00:37:16.308942 ignition[856]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 14 00:37:16.308942 ignition[856]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 14 00:37:16.308942 ignition[856]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 14 00:37:16.308942 ignition[856]: INFO : files: files passed May 14 00:37:16.308942 ignition[856]: INFO : Ignition finished successfully May 14 00:37:16.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.309118 systemd[1]: Finished ignition-files.service. May 14 00:37:16.311795 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 14 00:37:16.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.313071 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 14 00:37:16.320947 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 14 00:37:16.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.313762 systemd[1]: Starting ignition-quench.service... May 14 00:37:16.324130 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 14 00:37:16.317495 systemd[1]: ignition-quench.service: Deactivated successfully. May 14 00:37:16.317575 systemd[1]: Finished ignition-quench.service. May 14 00:37:16.320072 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 14 00:37:16.321624 systemd[1]: Reached target ignition-complete.target. May 14 00:37:16.324051 systemd[1]: Starting initrd-parse-etc.service... May 14 00:37:16.335698 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 14 00:37:16.335785 systemd[1]: Finished initrd-parse-etc.service. May 14 00:37:16.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.337245 systemd[1]: Reached target initrd-fs.target. May 14 00:37:16.338453 systemd[1]: Reached target initrd.target.
May 14 00:37:16.339595 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 14 00:37:16.340315 systemd[1]: Starting dracut-pre-pivot.service... May 14 00:37:16.350198 systemd[1]: Finished dracut-pre-pivot.service. May 14 00:37:16.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.351518 systemd[1]: Starting initrd-cleanup.service... May 14 00:37:16.358989 systemd[1]: Stopped target nss-lookup.target. May 14 00:37:16.359683 systemd[1]: Stopped target remote-cryptsetup.target. May 14 00:37:16.360889 systemd[1]: Stopped target timers.target. May 14 00:37:16.362026 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 14 00:37:16.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.362136 systemd[1]: Stopped dracut-pre-pivot.service. May 14 00:37:16.363233 systemd[1]: Stopped target initrd.target. May 14 00:37:16.364480 systemd[1]: Stopped target basic.target. May 14 00:37:16.365591 systemd[1]: Stopped target ignition-complete.target. May 14 00:37:16.366727 systemd[1]: Stopped target ignition-diskful.target. May 14 00:37:16.367821 systemd[1]: Stopped target initrd-root-device.target. May 14 00:37:16.369096 systemd[1]: Stopped target remote-fs.target. May 14 00:37:16.370475 systemd[1]: Stopped target remote-fs-pre.target. May 14 00:37:16.371674 systemd[1]: Stopped target sysinit.target. May 14 00:37:16.372735 systemd[1]: Stopped target local-fs.target. May 14 00:37:16.373948 systemd[1]: Stopped target local-fs-pre.target. May 14 00:37:16.375085 systemd[1]: Stopped target swap.target. May 14 00:37:16.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.376114 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 14 00:37:16.376235 systemd[1]: Stopped dracut-pre-mount.service. May 14 00:37:16.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.377383 systemd[1]: Stopped target cryptsetup.target. May 14 00:37:16.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.378422 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 14 00:37:16.378517 systemd[1]: Stopped dracut-initqueue.service. May 14 00:37:16.379820 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 14 00:37:16.379911 systemd[1]: Stopped ignition-fetch-offline.service. May 14 00:37:16.381050 systemd[1]: Stopped target paths.target. May 14 00:37:16.382095 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 14 00:37:16.387214 systemd[1]: Stopped systemd-ask-password-console.path. May 14 00:37:16.387943 systemd[1]: Stopped target slices.target. May 14 00:37:16.389124 systemd[1]: Stopped target sockets.target. May 14 00:37:16.390331 systemd[1]: iscsid.socket: Deactivated successfully. 
May 14 00:37:16.390400 systemd[1]: Closed iscsid.socket. May 14 00:37:16.391358 systemd[1]: iscsiuio.socket: Deactivated successfully. May 14 00:37:16.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.391421 systemd[1]: Closed iscsiuio.socket. May 14 00:37:16.394000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.392477 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 14 00:37:16.392576 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 14 00:37:16.393619 systemd[1]: ignition-files.service: Deactivated successfully. May 14 00:37:16.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.393707 systemd[1]: Stopped ignition-files.service. May 14 00:37:16.395695 systemd[1]: Stopping ignition-mount.service... May 14 00:37:16.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.396865 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 14 00:37:16.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.396982 systemd[1]: Stopped kmod-static-nodes.service. May 14 00:37:16.403525 ignition[897]: INFO : Ignition 2.14.0 May 14 00:37:16.403525 ignition[897]: INFO : Stage: umount May 14 00:37:16.403525 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" May 14 00:37:16.403525 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 14 00:37:16.403525 ignition[897]: INFO : umount: umount passed May 14 00:37:16.403525 ignition[897]: INFO : Ignition finished successfully May 14 00:37:16.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.410000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.398946 systemd[1]: Stopping sysroot-boot.service... May 14 00:37:16.411000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.399499 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
May 14 00:37:16.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.399630 systemd[1]: Stopped systemd-udev-trigger.service. May 14 00:37:16.400771 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 14 00:37:16.400869 systemd[1]: Stopped dracut-pre-trigger.service. May 14 00:37:16.403643 systemd[1]: ignition-mount.service: Deactivated successfully. May 14 00:37:16.403733 systemd[1]: Stopped ignition-mount.service. May 14 00:37:16.405325 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 14 00:37:16.405391 systemd[1]: Finished initrd-cleanup.service. May 14 00:37:16.407905 systemd[1]: Stopped target network.target. May 14 00:37:16.408905 systemd[1]: ignition-disks.service: Deactivated successfully. May 14 00:37:16.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.408955 systemd[1]: Stopped ignition-disks.service. May 14 00:37:16.410406 systemd[1]: ignition-kargs.service: Deactivated successfully. May 14 00:37:16.429000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.410446 systemd[1]: Stopped ignition-kargs.service. May 14 00:37:16.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.412030 systemd[1]: ignition-setup.service: Deactivated successfully. May 14 00:37:16.433000 audit: BPF prog-id=6 op=UNLOAD May 14 00:37:16.433000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.412069 systemd[1]: Stopped ignition-setup.service. May 14 00:37:16.413523 systemd[1]: Stopping systemd-networkd.service... May 14 00:37:16.415216 systemd[1]: Stopping systemd-resolved.service... May 14 00:37:16.417480 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 14 00:37:16.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.419187 systemd-networkd[740]: eth0: DHCPv6 lease lost May 14 00:37:16.440000 audit: BPF prog-id=9 op=UNLOAD May 14 00:37:16.420278 systemd[1]: systemd-networkd.service: Deactivated successfully. May 14 00:37:16.420377 systemd[1]: Stopped systemd-networkd.service. May 14 00:37:16.424983 systemd[1]: systemd-resolved.service: Deactivated successfully. May 14 00:37:16.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.425076 systemd[1]: Stopped systemd-resolved.service. 
May 14 00:37:16.445000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.426274 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 14 00:37:16.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.426303 systemd[1]: Closed systemd-networkd.socket. May 14 00:37:16.428259 systemd[1]: Stopping network-cleanup.service... May 14 00:37:16.429017 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 14 00:37:16.451000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.429078 systemd[1]: Stopped parse-ip-for-networkd.service. May 14 00:37:16.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.430028 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:37:16.454000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.430071 systemd[1]: Stopped systemd-sysctl.service. May 14 00:37:16.432233 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 14 00:37:16.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.432275 systemd[1]: Stopped systemd-modules-load.service. May 14 00:37:16.434707 systemd[1]: Stopping systemd-udevd.service... May 14 00:37:16.436229 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 14 00:37:16.438917 systemd[1]: network-cleanup.service: Deactivated successfully. May 14 00:37:16.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:16.439022 systemd[1]: Stopped network-cleanup.service. May 14 00:37:16.443135 systemd[1]: sysroot-boot.service: Deactivated successfully. May 14 00:37:16.443226 systemd[1]: Stopped sysroot-boot.service. May 14 00:37:16.444770 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 14 00:37:16.444808 systemd[1]: Stopped initrd-setup-root.service. May 14 00:37:16.446352 systemd[1]: systemd-udevd.service: Deactivated successfully. May 14 00:37:16.446463 systemd[1]: Stopped systemd-udevd.service. May 14 00:37:16.447695 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 14 00:37:16.447727 systemd[1]: Closed systemd-udevd-control.socket. May 14 00:37:16.448978 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
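Note: the initrd teardown above is dominated by audit SERVICE_START/SERVICE_STOP records of the form msg='unit=<name> ... res=<result>'. The following is a small illustrative parser (not part of the system being logged) for pulling the unit name and result out of such lines.

import re

# Matches the audit SERVICE_START / SERVICE_STOP records seen in this log, e.g.:
#   audit[1]: SERVICE_STOP ... msg='unit=ignition-mount comm="systemd" ... res=success'
AUDIT_RE = re.compile(
    r"audit\[\d+\]: (?P<type>SERVICE_START|SERVICE_STOP)\b.*?"
    r"msg='unit=(?P<unit>\S+).*?res=(?P<res>\w+)'"
)

def parse_audit(line: str):
    """Return (record_type, unit, result) or None for non-matching lines."""
    m = AUDIT_RE.search(line)
    return (m.group("type"), m.group("unit"), m.group("res")) if m else None

sample = ("May 14 00:37:16.394000 audit[1]: SERVICE_STOP pid=1 uid=0 "
          "msg='unit=ignition-files comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" "
          "hostname=? addr=? terminal=? res=success'")
print(parse_audit(sample))   # ('SERVICE_STOP', 'ignition-files', 'success')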
May 14 00:37:16.449009 systemd[1]: Closed systemd-udevd-kernel.socket. May 14 00:37:16.450311 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 14 00:37:16.450358 systemd[1]: Stopped dracut-pre-udev.service. May 14 00:37:16.451629 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 14 00:37:16.451667 systemd[1]: Stopped dracut-cmdline.service. May 14 00:37:16.453162 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 14 00:37:16.453203 systemd[1]: Stopped dracut-cmdline-ask.service. May 14 00:37:16.455211 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 14 00:37:16.455947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 14 00:37:16.455999 systemd[1]: Stopped systemd-vconsole-setup.service. May 14 00:37:16.460123 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 14 00:37:16.460210 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 14 00:37:16.461831 systemd[1]: Reached target initrd-switch-root.target. May 14 00:37:16.485967 systemd-journald[289]: Received SIGTERM from PID 1 (systemd). May 14 00:37:16.486022 iscsid[747]: iscsid shutting down. May 14 00:37:16.463946 systemd[1]: Starting initrd-switch-root.service... May 14 00:37:16.470111 systemd[1]: Switching root. May 14 00:37:16.488001 systemd-journald[289]: Journal stopped May 14 00:37:18.452195 kernel: SELinux: Class mctp_socket not defined in policy. May 14 00:37:18.452249 kernel: SELinux: Class anon_inode not defined in policy. May 14 00:37:18.452262 kernel: SELinux: the above unknown classes and permissions will be allowed May 14 00:37:18.452272 kernel: SELinux: policy capability network_peer_controls=1 May 14 00:37:18.452285 kernel: SELinux: policy capability open_perms=1 May 14 00:37:18.452296 kernel: SELinux: policy capability extended_socket_class=1 May 14 00:37:18.452311 kernel: SELinux: policy capability always_check_network=0 May 14 00:37:18.452321 kernel: SELinux: policy capability cgroup_seclabel=1 May 14 00:37:18.452331 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 14 00:37:18.452341 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 14 00:37:18.452351 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 14 00:37:18.452362 systemd[1]: Successfully loaded SELinux policy in 37.004ms. May 14 00:37:18.452382 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.195ms. May 14 00:37:18.452394 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 14 00:37:18.452405 systemd[1]: Detected virtualization kvm. May 14 00:37:18.452417 systemd[1]: Detected architecture arm64. May 14 00:37:18.452428 systemd[1]: Detected first boot. May 14 00:37:18.452439 systemd[1]: Initializing machine ID from VM UUID. May 14 00:37:18.452449 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 14 00:37:18.452460 systemd[1]: Populated /etc with preset unit settings. May 14 00:37:18.452471 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
May 14 00:37:18.452483 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:37:18.452496 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:37:18.452508 kernel: kauditd_printk_skb: 78 callbacks suppressed May 14 00:37:18.452518 kernel: audit: type=1334 audit(1747183038.310:82): prog-id=12 op=LOAD May 14 00:37:18.452529 kernel: audit: type=1334 audit(1747183038.310:83): prog-id=3 op=UNLOAD May 14 00:37:18.452538 kernel: audit: type=1334 audit(1747183038.312:84): prog-id=13 op=LOAD May 14 00:37:18.452548 kernel: audit: type=1334 audit(1747183038.313:85): prog-id=14 op=LOAD May 14 00:37:18.452558 kernel: audit: type=1334 audit(1747183038.313:86): prog-id=4 op=UNLOAD May 14 00:37:18.452568 kernel: audit: type=1334 audit(1747183038.313:87): prog-id=5 op=UNLOAD May 14 00:37:18.452577 kernel: audit: type=1334 audit(1747183038.314:88): prog-id=15 op=LOAD May 14 00:37:18.452588 kernel: audit: type=1334 audit(1747183038.314:89): prog-id=12 op=UNLOAD May 14 00:37:18.452600 kernel: audit: type=1334 audit(1747183038.316:90): prog-id=16 op=LOAD May 14 00:37:18.452612 kernel: audit: type=1334 audit(1747183038.316:91): prog-id=17 op=LOAD May 14 00:37:18.452622 systemd[1]: iscsiuio.service: Deactivated successfully. May 14 00:37:18.452633 systemd[1]: Stopped iscsiuio.service. May 14 00:37:18.452643 systemd[1]: iscsid.service: Deactivated successfully. May 14 00:37:18.452653 systemd[1]: Stopped iscsid.service. May 14 00:37:18.452665 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 14 00:37:18.452675 systemd[1]: Stopped initrd-switch-root.service. May 14 00:37:18.452687 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 14 00:37:18.452698 systemd[1]: Created slice system-addon\x2dconfig.slice. May 14 00:37:18.452708 systemd[1]: Created slice system-addon\x2drun.slice. May 14 00:37:18.452723 systemd[1]: Created slice system-getty.slice. May 14 00:37:18.452733 systemd[1]: Created slice system-modprobe.slice. May 14 00:37:18.452744 systemd[1]: Created slice system-serial\x2dgetty.slice. May 14 00:37:18.452755 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 14 00:37:18.452769 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 14 00:37:18.452780 systemd[1]: Created slice user.slice. May 14 00:37:18.452791 systemd[1]: Started systemd-ask-password-console.path. May 14 00:37:18.452802 systemd[1]: Started systemd-ask-password-wall.path. May 14 00:37:18.452812 systemd[1]: Set up automount boot.automount. May 14 00:37:18.452823 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 14 00:37:18.452833 systemd[1]: Stopped target initrd-switch-root.target. May 14 00:37:18.452845 systemd[1]: Stopped target initrd-fs.target. May 14 00:37:18.452856 systemd[1]: Stopped target initrd-root-fs.target. May 14 00:37:18.452866 systemd[1]: Reached target integritysetup.target. May 14 00:37:18.452879 systemd[1]: Reached target remote-cryptsetup.target. May 14 00:37:18.452890 systemd[1]: Reached target remote-fs.target. May 14 00:37:18.452901 systemd[1]: Reached target slices.target. May 14 00:37:18.452911 systemd[1]: Reached target swap.target. May 14 00:37:18.452922 systemd[1]: Reached target torcx.target. 
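Note: a few entries above, systemd flags locksmithd.service for the deprecated CPUShares= and MemoryLimit= directives and docker.socket for a /var/run/ path. A throwaway script along the following lines (purely illustrative, not something the OS runs) could list unit files that would trigger the same warnings.

from pathlib import Path

# Deprecated directives and legacy paths called out in this log, with the
# replacements the warnings themselves suggest.
DEPRECATED = {
    "CPUShares=": "CPUWeight=",
    "MemoryLimit=": "MemoryMax=",
    "/var/run/": "/run/",
}

def scan_units(root: str = "/usr/lib/systemd/system") -> None:
    for pattern in ("*.service", "*.socket"):
        for unit in sorted(Path(root).glob(pattern)):
            try:
                lines = unit.read_text(errors="replace").splitlines()
            except OSError:
                continue
            for lineno, line in enumerate(lines, start=1):
                for old, new in DEPRECATED.items():
                    if old in line:
                        print(f"{unit}:{lineno}: uses {old!r}, consider {new!r}")

if __name__ == "__main__":
    scan_units()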
May 14 00:37:18.452933 systemd[1]: Reached target veritysetup.target. May 14 00:37:18.452944 systemd[1]: Listening on systemd-coredump.socket. May 14 00:37:18.452954 systemd[1]: Listening on systemd-initctl.socket. May 14 00:37:18.452965 systemd[1]: Listening on systemd-networkd.socket. May 14 00:37:18.452975 systemd[1]: Listening on systemd-udevd-control.socket. May 14 00:37:18.452986 systemd[1]: Listening on systemd-udevd-kernel.socket. May 14 00:37:18.452997 systemd[1]: Listening on systemd-userdbd.socket. May 14 00:37:18.453007 systemd[1]: Mounting dev-hugepages.mount... May 14 00:37:18.453018 systemd[1]: Mounting dev-mqueue.mount... May 14 00:37:18.453029 systemd[1]: Mounting media.mount... May 14 00:37:18.453040 systemd[1]: Mounting sys-kernel-debug.mount... May 14 00:37:18.453052 systemd[1]: Mounting sys-kernel-tracing.mount... May 14 00:37:18.453062 systemd[1]: Mounting tmp.mount... May 14 00:37:18.453073 systemd[1]: Starting flatcar-tmpfiles.service... May 14 00:37:18.453089 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:37:18.453100 systemd[1]: Starting kmod-static-nodes.service... May 14 00:37:18.453111 systemd[1]: Starting modprobe@configfs.service... May 14 00:37:18.453122 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:37:18.453132 systemd[1]: Starting modprobe@drm.service... May 14 00:37:18.453143 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:37:18.453161 systemd[1]: Starting modprobe@fuse.service... May 14 00:37:18.453172 systemd[1]: Starting modprobe@loop.service... May 14 00:37:18.453183 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 14 00:37:18.453194 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 14 00:37:18.453204 systemd[1]: Stopped systemd-fsck-root.service. May 14 00:37:18.453215 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 14 00:37:18.453225 systemd[1]: Stopped systemd-fsck-usr.service. May 14 00:37:18.453236 systemd[1]: Stopped systemd-journald.service. May 14 00:37:18.453247 kernel: loop: module loaded May 14 00:37:18.453257 kernel: fuse: init (API version 7.34) May 14 00:37:18.453267 systemd[1]: Starting systemd-journald.service... May 14 00:37:18.453277 systemd[1]: Starting systemd-modules-load.service... May 14 00:37:18.453288 systemd[1]: Starting systemd-network-generator.service... May 14 00:37:18.453299 systemd[1]: Starting systemd-remount-fs.service... May 14 00:37:18.453309 systemd[1]: Starting systemd-udev-trigger.service... May 14 00:37:18.453320 systemd[1]: verity-setup.service: Deactivated successfully. May 14 00:37:18.453331 systemd[1]: Stopped verity-setup.service. May 14 00:37:18.453341 systemd[1]: Mounted dev-hugepages.mount. May 14 00:37:18.453353 systemd[1]: Mounted dev-mqueue.mount. May 14 00:37:18.453363 systemd[1]: Mounted media.mount. May 14 00:37:18.453378 systemd[1]: Mounted sys-kernel-debug.mount. May 14 00:37:18.453389 systemd[1]: Mounted sys-kernel-tracing.mount. May 14 00:37:18.453399 systemd[1]: Mounted tmp.mount. May 14 00:37:18.453410 systemd[1]: Finished kmod-static-nodes.service. May 14 00:37:18.453420 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 14 00:37:18.453431 systemd[1]: Finished modprobe@configfs.service. May 14 00:37:18.453442 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:37:18.453452 systemd[1]: Finished modprobe@dm_mod.service. 
May 14 00:37:18.453463 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:37:18.453473 systemd[1]: Finished modprobe@drm.service. May 14 00:37:18.453483 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:37:18.453494 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:37:18.453506 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 14 00:37:18.453516 systemd[1]: Finished modprobe@fuse.service. May 14 00:37:18.453527 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:37:18.453537 systemd[1]: Finished modprobe@loop.service. May 14 00:37:18.453549 systemd[1]: Finished systemd-modules-load.service. May 14 00:37:18.453571 systemd[1]: Finished systemd-network-generator.service. May 14 00:37:18.453584 systemd-journald[994]: Journal started May 14 00:37:18.453624 systemd-journald[994]: Runtime Journal (/run/log/journal/6f492c24a6724cd5b42006a7fede9122) is 6.0M, max 48.7M, 42.6M free. May 14 00:37:16.551000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 14 00:37:16.622000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:37:16.622000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 14 00:37:16.622000 audit: BPF prog-id=10 op=LOAD May 14 00:37:16.622000 audit: BPF prog-id=10 op=UNLOAD May 14 00:37:16.622000 audit: BPF prog-id=11 op=LOAD May 14 00:37:16.622000 audit: BPF prog-id=11 op=UNLOAD May 14 00:37:16.660000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 14 00:37:16.660000 audit[932]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=400014d8a2 a1=40000d0de0 a2=40000d70c0 a3=32 items=0 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:37:16.660000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:37:16.660000 audit[932]: AVC avc: denied { associate } for pid=932 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 14 00:37:16.660000 audit[932]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=400014d979 a2=1ed a3=0 items=2 ppid=915 pid=932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:37:16.660000 audit: CWD cwd="/" May 14 00:37:16.660000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
May 14 00:37:16.660000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 14 00:37:16.660000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 14 00:37:18.310000 audit: BPF prog-id=12 op=LOAD May 14 00:37:18.310000 audit: BPF prog-id=3 op=UNLOAD May 14 00:37:18.312000 audit: BPF prog-id=13 op=LOAD May 14 00:37:18.313000 audit: BPF prog-id=14 op=LOAD May 14 00:37:18.313000 audit: BPF prog-id=4 op=UNLOAD May 14 00:37:18.313000 audit: BPF prog-id=5 op=UNLOAD May 14 00:37:18.314000 audit: BPF prog-id=15 op=LOAD May 14 00:37:18.314000 audit: BPF prog-id=12 op=UNLOAD May 14 00:37:18.316000 audit: BPF prog-id=16 op=LOAD May 14 00:37:18.316000 audit: BPF prog-id=17 op=LOAD May 14 00:37:18.316000 audit: BPF prog-id=13 op=UNLOAD May 14 00:37:18.316000 audit: BPF prog-id=14 op=UNLOAD May 14 00:37:18.317000 audit: BPF prog-id=18 op=LOAD May 14 00:37:18.317000 audit: BPF prog-id=15 op=UNLOAD May 14 00:37:18.317000 audit: BPF prog-id=19 op=LOAD May 14 00:37:18.317000 audit: BPF prog-id=20 op=LOAD May 14 00:37:18.317000 audit: BPF prog-id=16 op=UNLOAD May 14 00:37:18.317000 audit: BPF prog-id=17 op=UNLOAD May 14 00:37:18.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.320000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.324000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.330000 audit: BPF prog-id=18 op=UNLOAD May 14 00:37:18.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:37:18.407000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.408000 audit: BPF prog-id=21 op=LOAD May 14 00:37:18.408000 audit: BPF prog-id=22 op=LOAD May 14 00:37:18.409000 audit: BPF prog-id=23 op=LOAD May 14 00:37:18.409000 audit: BPF prog-id=19 op=UNLOAD May 14 00:37:18.409000 audit: BPF prog-id=20 op=UNLOAD May 14 00:37:18.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:37:18.448000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 14 00:37:18.448000 audit[994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc8e4f710 a2=4000 a3=1 items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:37:18.448000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 14 00:37:18.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.309623 systemd[1]: Queued start job for default target multi-user.target. May 14 00:37:16.658762 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:37:18.309636 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 14 00:37:16.658991 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 14 00:37:18.318436 systemd[1]: systemd-journald.service: Deactivated successfully. 
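Note: the hex PROCTITLE values in the torcx-generator audit records above are the process's argv, hex-encoded with NUL separators and truncated by the audit subsystem. Decoding the logged value recovers the generator invocation (the last argument is cut short in the record itself).

# Hex-encoded, NUL-separated argv copied from the PROCTITLE records above.
hexdata = ("2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F"
           "72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F"
           "67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72"
           "2E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61")

argv = bytes.fromhex(hexdata).split(b"\x00")
print([a.decode() for a in argv])
# ['/usr/lib/systemd/system-generators/torcx-generator',
#  '/run/systemd/generator', '/run/systemd/generator.early',
#  '/run/systemd/generator.la']   <- truncated form of the last path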
May 14 00:37:16.659008 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 14 00:37:16.659035 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 14 00:37:16.659045 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="skipped missing lower profile" missing profile=oem May 14 00:37:16.659085 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 14 00:37:16.659105 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 14 00:37:16.659301 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 14 00:37:16.659334 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 14 00:37:16.659345 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 14 00:37:16.659723 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 14 00:37:16.659754 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 14 00:37:16.659771 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 14 00:37:18.456333 systemd[1]: Started systemd-journald.service. 
May 14 00:37:16.659784 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 14 00:37:16.659800 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 14 00:37:16.659813 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 14 00:37:18.073115 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:18Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:37:18.073404 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:18Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:37:18.073515 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:18Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:37:18.073679 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:18Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 14 00:37:18.073729 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:18Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 14 00:37:18.073790 /usr/lib/systemd/system-generators/torcx-generator[932]: time="2025-05-14T00:37:18Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 14 00:37:18.457064 systemd[1]: Finished systemd-remount-fs.service. May 14 00:37:18.458355 systemd[1]: Reached target network-pre.target. May 14 00:37:18.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.457000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:37:18.460276 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 14 00:37:18.462037 systemd[1]: Mounting sys-kernel-config.mount... May 14 00:37:18.462935 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 14 00:37:18.464816 systemd[1]: Starting systemd-hwdb-update.service... May 14 00:37:18.467072 systemd[1]: Starting systemd-journal-flush.service... May 14 00:37:18.468445 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:37:18.469445 systemd[1]: Starting systemd-random-seed.service... May 14 00:37:18.470385 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:37:18.472043 systemd-journald[994]: Time spent on flushing to /var/log/journal/6f492c24a6724cd5b42006a7fede9122 is 12.772ms for 978 entries. May 14 00:37:18.472043 systemd-journald[994]: System Journal (/var/log/journal/6f492c24a6724cd5b42006a7fede9122) is 8.0M, max 195.6M, 187.6M free. May 14 00:37:18.495394 systemd-journald[994]: Received client request to flush runtime journal. May 14 00:37:18.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.478000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.491000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.497000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.471430 systemd[1]: Starting systemd-sysctl.service... May 14 00:37:18.475125 systemd[1]: Finished flatcar-tmpfiles.service. May 14 00:37:18.476278 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 14 00:37:18.497888 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 14 00:37:18.477044 systemd[1]: Mounted sys-kernel-config.mount. May 14 00:37:18.477960 systemd[1]: Finished systemd-random-seed.service. May 14 00:37:18.478888 systemd[1]: Reached target first-boot-complete.target. May 14 00:37:18.480797 systemd[1]: Starting systemd-sysusers.service... May 14 00:37:18.485808 systemd[1]: Finished systemd-udev-trigger.service. May 14 00:37:18.487622 systemd[1]: Starting systemd-udev-settle.service... May 14 00:37:18.490467 systemd[1]: Finished systemd-sysctl.service. May 14 00:37:18.496367 systemd[1]: Finished systemd-journal-flush.service. May 14 00:37:18.508697 systemd[1]: Finished systemd-sysusers.service. 
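Note: the torcx-generator messages above walk a fixed list of store paths, report "store skipped" for the ones that do not exist, and cache archives such as docker:20.10.torcx.tgz and docker:com.coreos.cl.torcx.tgz from /usr/share/torcx/store. The sketch below is a rough emulation of that lookup for illustration only (it is not torcx's actual code); the store paths and archive naming are taken verbatim from the log.

from pathlib import Path

# Store paths exactly as reported by the generator's "common configuration parsed" line.
STORE_PATHS = [
    "/usr/share/torcx/store",
    "/usr/share/oem/torcx/store/3510.3.7",
    "/usr/share/oem/torcx/store",
    "/var/lib/torcx/store/3510.3.7",
    "/var/lib/torcx/store",
]

def discover_archives():
    """Collect <name>:<reference>.torcx.tgz archives, mirroring the
    'store skipped' / 'new archive/reference added to cache' messages."""
    archives = {}
    for store in STORE_PATHS:
        p = Path(store)
        if not p.is_dir():
            print(f"store skipped: {store}")
            continue
        for tgz in sorted(p.glob("*.torcx.tgz")):
            name, _, reference = tgz.name[: -len(".torcx.tgz")].partition(":")
            archives.setdefault((name, reference), tgz)
            print(f"archive cached: name={name} reference={reference} path={tgz}")
    return archives

if __name__ == "__main__":
    discover_archives()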
May 14 00:37:18.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.846502 systemd[1]: Finished systemd-hwdb-update.service. May 14 00:37:18.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.847000 audit: BPF prog-id=24 op=LOAD May 14 00:37:18.847000 audit: BPF prog-id=25 op=LOAD May 14 00:37:18.847000 audit: BPF prog-id=7 op=UNLOAD May 14 00:37:18.847000 audit: BPF prog-id=8 op=UNLOAD May 14 00:37:18.848723 systemd[1]: Starting systemd-udevd.service... May 14 00:37:18.865185 systemd-udevd[1035]: Using default interface naming scheme 'v252'. May 14 00:37:18.876544 systemd[1]: Started systemd-udevd.service. May 14 00:37:18.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.878000 audit: BPF prog-id=26 op=LOAD May 14 00:37:18.879314 systemd[1]: Starting systemd-networkd.service... May 14 00:37:18.884000 audit: BPF prog-id=27 op=LOAD May 14 00:37:18.884000 audit: BPF prog-id=28 op=LOAD May 14 00:37:18.884000 audit: BPF prog-id=29 op=LOAD May 14 00:37:18.885712 systemd[1]: Starting systemd-userdbd.service... May 14 00:37:18.899287 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 14 00:37:18.910911 systemd[1]: Started systemd-userdbd.service. May 14 00:37:18.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.944864 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 14 00:37:18.979867 systemd-networkd[1044]: lo: Link UP May 14 00:37:18.980259 systemd-networkd[1044]: lo: Gained carrier May 14 00:37:18.981527 systemd-networkd[1044]: Enumeration completed May 14 00:37:18.981713 systemd[1]: Started systemd-networkd.service. May 14 00:37:18.981823 systemd-networkd[1044]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 14 00:37:18.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.983497 systemd-networkd[1044]: eth0: Link UP May 14 00:37:18.983571 systemd-networkd[1044]: eth0: Gained carrier May 14 00:37:18.984531 systemd[1]: Finished systemd-udev-settle.service. May 14 00:37:18.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:18.986644 systemd[1]: Starting lvm2-activation-early.service... May 14 00:37:18.994880 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:37:19.003296 systemd-networkd[1044]: eth0: DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 14 00:37:19.020923 systemd[1]: Finished lvm2-activation-early.service. 
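Note: systemd-networkd above brings up lo and eth0 and records a DHCPv4 lease of 10.0.0.50/16 with gateway 10.0.0.1. As a quick sanity check of what that lease implies, Python's ipaddress module can derive the subnet and confirm the gateway sits inside it; the values are taken from the log line, nothing else is assumed.

import ipaddress

# Values taken from the "DHCPv4 address 10.0.0.50/16, gateway 10.0.0.1" line above.
iface = ipaddress.ip_interface("10.0.0.50/16")
gateway = ipaddress.ip_address("10.0.0.1")

print(iface.network)                     # 10.0.0.0/16
print(iface.network.broadcast_address)   # 10.0.255.255
print(gateway in iface.network)          # True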
May 14 00:37:19.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.021939 systemd[1]: Reached target cryptsetup.target. May 14 00:37:19.023838 systemd[1]: Starting lvm2-activation.service... May 14 00:37:19.027483 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 14 00:37:19.057012 systemd[1]: Finished lvm2-activation.service. May 14 00:37:19.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.057962 systemd[1]: Reached target local-fs-pre.target. May 14 00:37:19.058837 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 14 00:37:19.058865 systemd[1]: Reached target local-fs.target. May 14 00:37:19.059676 systemd[1]: Reached target machines.target. May 14 00:37:19.061549 systemd[1]: Starting ldconfig.service... May 14 00:37:19.062567 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:37:19.062646 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:37:19.063767 systemd[1]: Starting systemd-boot-update.service... May 14 00:37:19.065866 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 14 00:37:19.068062 systemd[1]: Starting systemd-machine-id-commit.service... May 14 00:37:19.071129 systemd[1]: Starting systemd-sysext.service... May 14 00:37:19.072900 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) May 14 00:37:19.076998 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 14 00:37:19.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.083860 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 14 00:37:19.086728 systemd[1]: Unmounting usr-share-oem.mount... May 14 00:37:19.091139 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 14 00:37:19.091388 systemd[1]: Unmounted usr-share-oem.mount. May 14 00:37:19.104186 kernel: loop0: detected capacity change from 0 to 194096 May 14 00:37:19.142855 systemd[1]: Finished systemd-machine-id-commit.service. May 14 00:37:19.143000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.146228 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) May 14 00:37:19.146228 systemd-fsck[1079]: /dev/vda1: 236 files, 117310/258078 clusters May 14 00:37:19.148512 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
May 14 00:37:19.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.154180 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 14 00:37:19.179182 kernel: loop1: detected capacity change from 0 to 194096 May 14 00:37:19.183033 (sd-sysext)[1085]: Using extensions 'kubernetes'. May 14 00:37:19.183684 (sd-sysext)[1085]: Merged extensions into '/usr'. May 14 00:37:19.202427 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:37:19.203931 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:37:19.205933 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:37:19.207971 systemd[1]: Starting modprobe@loop.service... May 14 00:37:19.208688 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:37:19.208810 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:37:19.209671 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:37:19.209832 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:37:19.210000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.210000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.210930 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:37:19.211050 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:37:19.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.212232 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:37:19.212384 systemd[1]: Finished modprobe@loop.service. May 14 00:37:19.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.213357 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:37:19.213489 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
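Note: the (sd-sysext) messages above report "Using extensions 'kubernetes'" and "Merged extensions into '/usr'", picking up the /etc/extensions/kubernetes.raw symlink written during the Ignition files stage. The helper below is only an illustration of that directory layout as the log describes it (it is not a systemd-sysext API): it lists which extension images such a setup would offer and where their symlinks point.

from pathlib import Path

def list_sysext_images(extensions_dir: str = "/etc/extensions"):
    """Resolve *.raw entries the way this boot's layout suggests, e.g.
    /etc/extensions/kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.30.1-arm64.raw"""
    results = []
    for entry in sorted(Path(extensions_dir).glob("*.raw")):
        target = entry.resolve() if entry.is_symlink() else entry
        results.append((entry.stem, entry, target))
        print(f"extension {entry.stem}: {entry} -> {target}")
    return results

if __name__ == "__main__":
    list_sysext_images()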
May 14 00:37:19.243309 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 14 00:37:19.247092 systemd[1]: Finished ldconfig.service. May 14 00:37:19.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.424684 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 14 00:37:19.426595 systemd[1]: Mounting boot.mount... May 14 00:37:19.428298 systemd[1]: Mounting usr-share-oem.mount... May 14 00:37:19.434049 systemd[1]: Mounted boot.mount. May 14 00:37:19.434824 systemd[1]: Mounted usr-share-oem.mount. May 14 00:37:19.436557 systemd[1]: Finished systemd-sysext.service. May 14 00:37:19.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.438413 systemd[1]: Starting ensure-sysext.service... May 14 00:37:19.440355 systemd[1]: Starting systemd-tmpfiles-setup.service... May 14 00:37:19.443599 systemd[1]: Finished systemd-boot-update.service. May 14 00:37:19.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.445901 systemd[1]: Reloading. May 14 00:37:19.452778 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 14 00:37:19.454523 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 14 00:37:19.457054 systemd-tmpfiles[1093]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 14 00:37:19.492438 /usr/lib/systemd/system-generators/torcx-generator[1114]: time="2025-05-14T00:37:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:37:19.492464 /usr/lib/systemd/system-generators/torcx-generator[1114]: time="2025-05-14T00:37:19Z" level=info msg="torcx already run" May 14 00:37:19.546584 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:37:19.546602 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:37:19.562228 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 14 00:37:19.606000 audit: BPF prog-id=30 op=LOAD May 14 00:37:19.606000 audit: BPF prog-id=21 op=UNLOAD May 14 00:37:19.606000 audit: BPF prog-id=31 op=LOAD May 14 00:37:19.606000 audit: BPF prog-id=32 op=LOAD May 14 00:37:19.606000 audit: BPF prog-id=22 op=UNLOAD May 14 00:37:19.606000 audit: BPF prog-id=23 op=UNLOAD May 14 00:37:19.607000 audit: BPF prog-id=33 op=LOAD May 14 00:37:19.607000 audit: BPF prog-id=34 op=LOAD May 14 00:37:19.607000 audit: BPF prog-id=24 op=UNLOAD May 14 00:37:19.607000 audit: BPF prog-id=25 op=UNLOAD May 14 00:37:19.608000 audit: BPF prog-id=35 op=LOAD May 14 00:37:19.608000 audit: BPF prog-id=27 op=UNLOAD May 14 00:37:19.608000 audit: BPF prog-id=36 op=LOAD May 14 00:37:19.608000 audit: BPF prog-id=37 op=LOAD May 14 00:37:19.608000 audit: BPF prog-id=28 op=UNLOAD May 14 00:37:19.608000 audit: BPF prog-id=29 op=UNLOAD May 14 00:37:19.608000 audit: BPF prog-id=38 op=LOAD May 14 00:37:19.608000 audit: BPF prog-id=26 op=UNLOAD May 14 00:37:19.610723 systemd[1]: Finished systemd-tmpfiles-setup.service. May 14 00:37:19.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.614966 systemd[1]: Starting audit-rules.service... May 14 00:37:19.618810 systemd[1]: Starting clean-ca-certificates.service... May 14 00:37:19.620547 systemd[1]: Starting systemd-journal-catalog-update.service... May 14 00:37:19.622000 audit: BPF prog-id=39 op=LOAD May 14 00:37:19.627000 audit: BPF prog-id=40 op=LOAD May 14 00:37:19.623486 systemd[1]: Starting systemd-resolved.service... May 14 00:37:19.628184 systemd[1]: Starting systemd-timesyncd.service... May 14 00:37:19.630061 systemd[1]: Starting systemd-update-utmp.service... May 14 00:37:19.633206 systemd[1]: Finished clean-ca-certificates.service. May 14 00:37:19.633000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.634389 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:37:19.637340 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:37:19.638737 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:37:19.639000 audit[1164]: SYSTEM_BOOT pid=1164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 14 00:37:19.640888 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:37:19.642825 systemd[1]: Starting modprobe@loop.service... May 14 00:37:19.643547 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:37:19.643673 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:37:19.643779 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:37:19.644582 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 14 00:37:19.644842 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:37:19.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.645000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.645988 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:37:19.646140 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:37:19.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.646000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.647416 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:37:19.647519 systemd[1]: Finished modprobe@loop.service. May 14 00:37:19.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.650513 systemd[1]: Finished systemd-journal-catalog-update.service. May 14 00:37:19.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.653276 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:37:19.655023 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:37:19.656715 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:37:19.661413 systemd[1]: Starting modprobe@loop.service... May 14 00:37:19.661991 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:37:19.662128 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:37:19.663450 systemd[1]: Starting systemd-update-done.service... May 14 00:37:19.664168 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:37:19.665314 systemd[1]: Finished systemd-update-utmp.service. May 14 00:37:19.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.666364 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 14 00:37:19.666481 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:37:19.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.667536 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:37:19.667653 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:37:19.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.668814 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:37:19.668932 systemd[1]: Finished modprobe@loop.service. May 14 00:37:19.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.669000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.670056 systemd[1]: Finished systemd-update-done.service. May 14 00:37:19.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.674474 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 14 00:37:19.676465 systemd[1]: Starting modprobe@dm_mod.service... May 14 00:37:19.678655 systemd[1]: Starting modprobe@drm.service... May 14 00:37:19.680553 systemd[1]: Starting modprobe@efi_pstore.service... May 14 00:37:19.682503 systemd[1]: Starting modprobe@loop.service... May 14 00:37:19.683228 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 14 00:37:19.683383 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:37:19.684859 systemd[1]: Starting systemd-networkd-wait-online.service... May 14 00:37:19.685806 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 14 00:37:19.687012 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 14 00:37:19.687174 systemd[1]: Finished modprobe@dm_mod.service. May 14 00:37:19.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 14 00:37:19.690000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 14 00:37:19.690000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 14 00:37:19.690972 systemd[1]: modprobe@drm.service: Deactivated successfully. May 14 00:37:19.691107 systemd[1]: Finished modprobe@drm.service. May 14 00:37:19.690000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc1a54890 a2=420 a3=0 items=0 ppid=1153 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 14 00:37:19.690000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 14 00:37:19.691617 augenrules[1181]: No rules May 14 00:37:19.691899 systemd-resolved[1157]: Positive Trust Anchors: May 14 00:37:19.691909 systemd-resolved[1157]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 14 00:37:19.691937 systemd-resolved[1157]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 14 00:37:19.692306 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 14 00:37:19.692486 systemd[1]: Finished modprobe@efi_pstore.service. May 14 00:37:19.693808 systemd[1]: Finished audit-rules.service. May 14 00:37:19.695145 systemd[1]: modprobe@loop.service: Deactivated successfully. May 14 00:37:19.695287 systemd[1]: Finished modprobe@loop.service. May 14 00:37:19.696253 systemd[1]: Started systemd-timesyncd.service. May 14 00:37:20.190022 systemd-timesyncd[1161]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 14 00:37:20.190082 systemd-timesyncd[1161]: Initial clock synchronization to Wed 2025-05-14 00:37:20.189930 UTC. May 14 00:37:20.191064 systemd[1]: Reached target time-set.target. May 14 00:37:20.191980 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 14 00:37:20.192032 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 14 00:37:20.192353 systemd[1]: Finished ensure-sysext.service. May 14 00:37:20.202404 systemd-resolved[1157]: Defaulting to hostname 'linux'. May 14 00:37:20.204004 systemd[1]: Started systemd-resolved.service. May 14 00:37:20.204748 systemd[1]: Reached target network.target. May 14 00:37:20.205379 systemd[1]: Reached target nss-lookup.target. May 14 00:37:20.205969 systemd[1]: Reached target sysinit.target. May 14 00:37:20.206601 systemd[1]: Started motdgen.path. May 14 00:37:20.207226 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 14 00:37:20.208189 systemd[1]: Started logrotate.timer. May 14 00:37:20.208851 systemd[1]: Started mdadm.timer. May 14 00:37:20.209355 systemd[1]: Started systemd-tmpfiles-clean.timer. 
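Note the jump in journal timestamps around the timesyncd entries above: the last record before the sync is stamped 00:37:19.696253, the first one after it 00:37:20.190022, and timesyncd reports an initial clock synchronization to 00:37:20.189930 UTC. A small sketch, using only those two timestamps from the log, to estimate the step that was applied:

```python
from datetime import datetime

# Adjacent journal timestamps from the entries above: the last record
# written before the first NTP sync and the first record written after it.
before = datetime.fromisoformat("2025-05-14 00:37:19.696253")
after = datetime.fromisoformat("2025-05-14 00:37:20.190022")

step = (after - before).total_seconds()
# The difference also contains a little genuinely elapsed time, so this is
# only an upper bound on the clock adjustment.
print(f"apparent clock step: ~{step:.3f} s")   # ~0.494 s
```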
May 14 00:37:20.209981 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 00:37:20.210012 systemd[1]: Reached target paths.target. May 14 00:37:20.210546 systemd[1]: Reached target timers.target. May 14 00:37:20.211489 systemd[1]: Listening on dbus.socket. May 14 00:37:20.213131 systemd[1]: Starting docker.socket... May 14 00:37:20.216211 systemd[1]: Listening on sshd.socket. May 14 00:37:20.216897 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:37:20.217352 systemd[1]: Listening on docker.socket. May 14 00:37:20.218015 systemd[1]: Reached target sockets.target. May 14 00:37:20.218570 systemd[1]: Reached target basic.target. May 14 00:37:20.219187 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:37:20.219217 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 14 00:37:20.220292 systemd[1]: Starting containerd.service... May 14 00:37:20.221976 systemd[1]: Starting dbus.service... May 14 00:37:20.223716 systemd[1]: Starting enable-oem-cloudinit.service... May 14 00:37:20.225649 systemd[1]: Starting extend-filesystems.service... May 14 00:37:20.226491 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 14 00:37:20.228672 systemd[1]: Starting motdgen.service... May 14 00:37:20.230670 systemd[1]: Starting ssh-key-proc-cmdline.service... May 14 00:37:20.232782 systemd[1]: Starting sshd-keygen.service... May 14 00:37:20.239857 jq[1195]: false May 14 00:37:20.239881 systemd[1]: Starting systemd-logind.service... May 14 00:37:20.240541 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 14 00:37:20.240625 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 00:37:20.241248 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 00:37:20.244371 systemd[1]: Starting update-engine.service... May 14 00:37:20.246699 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 14 00:37:20.248564 extend-filesystems[1196]: Found loop1 May 14 00:37:20.248564 extend-filesystems[1196]: Found vda May 14 00:37:20.248564 extend-filesystems[1196]: Found vda1 May 14 00:37:20.248564 extend-filesystems[1196]: Found vda2 May 14 00:37:20.248564 extend-filesystems[1196]: Found vda3 May 14 00:37:20.248564 extend-filesystems[1196]: Found usr May 14 00:37:20.248564 extend-filesystems[1196]: Found vda4 May 14 00:37:20.248564 extend-filesystems[1196]: Found vda6 May 14 00:37:20.248564 extend-filesystems[1196]: Found vda7 May 14 00:37:20.248564 extend-filesystems[1196]: Found vda9 May 14 00:37:20.248564 extend-filesystems[1196]: Checking size of /dev/vda9 May 14 00:37:20.249375 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 00:37:20.265467 jq[1210]: true May 14 00:37:20.249602 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
May 14 00:37:20.250001 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 00:37:20.250218 systemd[1]: Finished ssh-key-proc-cmdline.service. May 14 00:37:20.268090 jq[1215]: true May 14 00:37:20.272818 systemd[1]: motdgen.service: Deactivated successfully. May 14 00:37:20.273002 systemd[1]: Finished motdgen.service. May 14 00:37:20.280955 extend-filesystems[1196]: Resized partition /dev/vda9 May 14 00:37:20.309830 extend-filesystems[1231]: resize2fs 1.46.5 (30-Dec-2021) May 14 00:37:20.322501 systemd-logind[1203]: Watching system buttons on /dev/input/event0 (Power Button) May 14 00:37:20.324115 dbus-daemon[1194]: [system] SELinux support is enabled May 14 00:37:20.324308 systemd[1]: Started dbus.service. May 14 00:37:20.324489 systemd-logind[1203]: New seat seat0. May 14 00:37:20.326918 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 00:37:20.326945 systemd[1]: Reached target system-config.target. May 14 00:37:20.329226 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 00:37:20.329255 systemd[1]: Reached target user-config.target. May 14 00:37:20.331137 systemd[1]: Started systemd-logind.service. May 14 00:37:20.335841 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 00:37:20.346143 update_engine[1207]: I0514 00:37:20.345887 1207 main.cc:92] Flatcar Update Engine starting May 14 00:37:20.353706 systemd[1]: Started update-engine.service. May 14 00:37:20.353832 update_engine[1207]: I0514 00:37:20.353734 1207 update_check_scheduler.cc:74] Next update check in 4m41s May 14 00:37:20.358258 systemd[1]: Started locksmithd.service. May 14 00:37:20.360857 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 00:37:20.376920 extend-filesystems[1231]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 00:37:20.376920 extend-filesystems[1231]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 00:37:20.376920 extend-filesystems[1231]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 00:37:20.380476 extend-filesystems[1196]: Resized filesystem in /dev/vda9 May 14 00:37:20.381258 env[1214]: time="2025-05-14T00:37:20.377944505Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 14 00:37:20.380840 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 00:37:20.381010 systemd[1]: Finished extend-filesystems.service. May 14 00:37:20.388842 bash[1243]: Updated "/home/core/.ssh/authorized_keys" May 14 00:37:20.389527 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 14 00:37:20.403294 env[1214]: time="2025-05-14T00:37:20.403254905Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 14 00:37:20.403656 env[1214]: time="2025-05-14T00:37:20.403633185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 14 00:37:20.405064 env[1214]: time="2025-05-14T00:37:20.405031545Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 14 00:37:20.405152 env[1214]: time="2025-05-14T00:37:20.405136585Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 14 00:37:20.405430 env[1214]: time="2025-05-14T00:37:20.405406785Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:37:20.405510 env[1214]: time="2025-05-14T00:37:20.405494305Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 14 00:37:20.405626 env[1214]: time="2025-05-14T00:37:20.405608145Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 14 00:37:20.405687 env[1214]: time="2025-05-14T00:37:20.405674505Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 14 00:37:20.405853 env[1214]: time="2025-05-14T00:37:20.405831905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 14 00:37:20.406385 env[1214]: time="2025-05-14T00:37:20.406285385Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 14 00:37:20.406767 env[1214]: time="2025-05-14T00:37:20.406741905Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 14 00:37:20.406863 env[1214]: time="2025-05-14T00:37:20.406846305Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 14 00:37:20.407006 env[1214]: time="2025-05-14T00:37:20.406984865Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 14 00:37:20.407073 env[1214]: time="2025-05-14T00:37:20.407058905Z" level=info msg="metadata content store policy set" policy=shared May 14 00:37:20.409797 locksmithd[1244]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 00:37:20.410405 env[1214]: time="2025-05-14T00:37:20.410380425Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 14 00:37:20.410491 env[1214]: time="2025-05-14T00:37:20.410475665Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 14 00:37:20.410546 env[1214]: time="2025-05-14T00:37:20.410533225Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 14 00:37:20.410628 env[1214]: time="2025-05-14T00:37:20.410614625Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 14 00:37:20.410763 env[1214]: time="2025-05-14T00:37:20.410746505Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 May 14 00:37:20.410878 env[1214]: time="2025-05-14T00:37:20.410861945Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 14 00:37:20.410939 env[1214]: time="2025-05-14T00:37:20.410926305Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 14 00:37:20.411316 env[1214]: time="2025-05-14T00:37:20.411287665Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 14 00:37:20.411399 env[1214]: time="2025-05-14T00:37:20.411383745Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 14 00:37:20.411477 env[1214]: time="2025-05-14T00:37:20.411462265Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 14 00:37:20.411535 env[1214]: time="2025-05-14T00:37:20.411521945Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 14 00:37:20.411594 env[1214]: time="2025-05-14T00:37:20.411581185Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 14 00:37:20.411762 env[1214]: time="2025-05-14T00:37:20.411738825Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 14 00:37:20.411933 env[1214]: time="2025-05-14T00:37:20.411913265Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 14 00:37:20.412249 env[1214]: time="2025-05-14T00:37:20.412227625Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 14 00:37:20.412348 env[1214]: time="2025-05-14T00:37:20.412332145Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 14 00:37:20.412410 env[1214]: time="2025-05-14T00:37:20.412395985Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 14 00:37:20.412569 env[1214]: time="2025-05-14T00:37:20.412552065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 14 00:37:20.412644 env[1214]: time="2025-05-14T00:37:20.412630825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 14 00:37:20.412702 env[1214]: time="2025-05-14T00:37:20.412689305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 14 00:37:20.412770 env[1214]: time="2025-05-14T00:37:20.412755985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 14 00:37:20.412843 env[1214]: time="2025-05-14T00:37:20.412829065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 14 00:37:20.412918 env[1214]: time="2025-05-14T00:37:20.412902545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 14 00:37:20.412978 env[1214]: time="2025-05-14T00:37:20.412965065Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 14 00:37:20.413032 env[1214]: time="2025-05-14T00:37:20.413020025Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 May 14 00:37:20.413103 env[1214]: time="2025-05-14T00:37:20.413083825Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 14 00:37:20.413305 env[1214]: time="2025-05-14T00:37:20.413267425Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 14 00:37:20.413390 env[1214]: time="2025-05-14T00:37:20.413374825Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 14 00:37:20.413451 env[1214]: time="2025-05-14T00:37:20.413436785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 14 00:37:20.413506 env[1214]: time="2025-05-14T00:37:20.413493385Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 14 00:37:20.413565 env[1214]: time="2025-05-14T00:37:20.413550385Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 14 00:37:20.413628 env[1214]: time="2025-05-14T00:37:20.413614465Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 14 00:37:20.413696 env[1214]: time="2025-05-14T00:37:20.413681185Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 14 00:37:20.413788 env[1214]: time="2025-05-14T00:37:20.413772265Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 14 00:37:20.414126 env[1214]: time="2025-05-14T00:37:20.414068985Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 14 00:37:20.414815 env[1214]: time="2025-05-14T00:37:20.414537305Z" level=info msg="Connect containerd service" May 14 00:37:20.414862 env[1214]: time="2025-05-14T00:37:20.414578985Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 14 00:37:20.415536 env[1214]: time="2025-05-14T00:37:20.415507585Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:37:20.415697 env[1214]: time="2025-05-14T00:37:20.415664185Z" level=info msg="Start subscribing containerd event" May 14 00:37:20.415828 env[1214]: time="2025-05-14T00:37:20.415811705Z" level=info msg="Start recovering state" May 14 00:37:20.415959 env[1214]: time="2025-05-14T00:37:20.415833265Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 00:37:20.416025 env[1214]: time="2025-05-14T00:37:20.415947665Z" level=info msg="Start event monitor" May 14 00:37:20.416095 env[1214]: time="2025-05-14T00:37:20.416081065Z" level=info msg="Start snapshots syncer" May 14 00:37:20.416165 env[1214]: time="2025-05-14T00:37:20.416023145Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 00:37:20.416165 env[1214]: time="2025-05-14T00:37:20.416142345Z" level=info msg="Start cni network conf syncer for default" May 14 00:37:20.416221 env[1214]: time="2025-05-14T00:37:20.416176745Z" level=info msg="Start streaming server" May 14 00:37:20.416270 systemd[1]: Started containerd.service. May 14 00:37:20.417214 env[1214]: time="2025-05-14T00:37:20.417183745Z" level=info msg="containerd successfully booted in 0.048622s" May 14 00:37:20.736980 systemd-networkd[1044]: eth0: Gained IPv6LL May 14 00:37:20.738983 systemd[1]: Finished systemd-networkd-wait-online.service. May 14 00:37:20.740227 systemd[1]: Reached target network-online.target. May 14 00:37:20.742333 systemd[1]: Starting kubelet.service... May 14 00:37:21.245518 systemd[1]: Started kubelet.service. May 14 00:37:21.745532 kubelet[1258]: E0514 00:37:21.745424 1258 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 00:37:21.747361 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 00:37:21.747489 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 00:37:22.361156 sshd_keygen[1209]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 00:37:22.379128 systemd[1]: Finished sshd-keygen.service. May 14 00:37:22.381302 systemd[1]: Starting issuegen.service... May 14 00:37:22.385844 systemd[1]: issuegen.service: Deactivated successfully. May 14 00:37:22.385981 systemd[1]: Finished issuegen.service. May 14 00:37:22.387976 systemd[1]: Starting systemd-user-sessions.service... May 14 00:37:22.393974 systemd[1]: Finished systemd-user-sessions.service. May 14 00:37:22.395887 systemd[1]: Started getty@tty1.service. 
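Two startup blockers are visible in this stretch of the log: containerd's CRI plugin found no network config in /etc/cni/net.d, and the kubelet exits because /var/lib/kubelet/config.yaml does not exist yet. Both paths are quoted from the messages above; a quick, purely illustrative prerequisite check could look like this:

```python
import glob
import os

# Paths taken from the containerd and kubelet messages above.
KUBELET_CONFIG = "/var/lib/kubelet/config.yaml"
CNI_CONF_DIR = "/etc/cni/net.d"

def check_prerequisites() -> None:
    if os.path.isfile(KUBELET_CONFIG):
        print(f"kubelet config present: {KUBELET_CONFIG}")
    else:
        print(f"kubelet config missing: {KUBELET_CONFIG} "
              "(kubelet keeps exiting until provisioning writes it)")

    # The CRI plugin loads *.conf / *.conflist files from this directory;
    # exact extensions accepted depend on the CNI config loader.
    cni_configs = sorted(glob.glob(os.path.join(CNI_CONF_DIR, "*.conf*")))
    if cni_configs:
        print("CNI configs:", ", ".join(cni_configs))
    else:
        print(f"no CNI config in {CNI_CONF_DIR} yet "
              "(expected until a CNI plugin installs one)")

if __name__ == "__main__":
    check_prerequisites()
```

Further down in the log the kubelet does come up on a later start, and the cilium-vn68s pod admitted there is the component expected to eventually drop a CNI config.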
May 14 00:37:22.397630 systemd[1]: Started serial-getty@ttyAMA0.service. May 14 00:37:22.398498 systemd[1]: Reached target getty.target. May 14 00:37:22.399193 systemd[1]: Reached target multi-user.target. May 14 00:37:22.400984 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 14 00:37:22.407424 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 14 00:37:22.407582 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 14 00:37:22.408428 systemd[1]: Startup finished in 548ms (kernel) + 3.943s (initrd) + 5.403s (userspace) = 9.895s. May 14 00:37:25.716523 systemd[1]: Created slice system-sshd.slice. May 14 00:37:25.717726 systemd[1]: Started sshd@0-10.0.0.50:22-10.0.0.1:51282.service. May 14 00:37:25.772466 sshd[1281]: Accepted publickey for core from 10.0.0.1 port 51282 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:25.774531 sshd[1281]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:25.782409 systemd[1]: Created slice user-500.slice. May 14 00:37:25.783522 systemd[1]: Starting user-runtime-dir@500.service... May 14 00:37:25.785463 systemd-logind[1203]: New session 1 of user core. May 14 00:37:25.791530 systemd[1]: Finished user-runtime-dir@500.service. May 14 00:37:25.792891 systemd[1]: Starting user@500.service... May 14 00:37:25.796064 (systemd)[1284]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:25.858369 systemd[1284]: Queued start job for default target default.target. May 14 00:37:25.858872 systemd[1284]: Reached target paths.target. May 14 00:37:25.858904 systemd[1284]: Reached target sockets.target. May 14 00:37:25.858914 systemd[1284]: Reached target timers.target. May 14 00:37:25.858924 systemd[1284]: Reached target basic.target. May 14 00:37:25.858963 systemd[1284]: Reached target default.target. May 14 00:37:25.858987 systemd[1284]: Startup finished in 56ms. May 14 00:37:25.859432 systemd[1]: Started user@500.service. May 14 00:37:25.860407 systemd[1]: Started session-1.scope. May 14 00:37:25.916751 systemd[1]: Started sshd@1-10.0.0.50:22-10.0.0.1:51284.service. May 14 00:37:25.961975 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 51284 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:25.963547 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:25.967186 systemd-logind[1203]: New session 2 of user core. May 14 00:37:25.968939 systemd[1]: Started session-2.scope. May 14 00:37:26.023218 sshd[1293]: pam_unix(sshd:session): session closed for user core May 14 00:37:26.026114 systemd[1]: Started sshd@2-10.0.0.50:22-10.0.0.1:51290.service. May 14 00:37:26.026561 systemd[1]: sshd@1-10.0.0.50:22-10.0.0.1:51284.service: Deactivated successfully. May 14 00:37:26.027386 systemd[1]: session-2.scope: Deactivated successfully. May 14 00:37:26.027914 systemd-logind[1203]: Session 2 logged out. Waiting for processes to exit. May 14 00:37:26.028983 systemd-logind[1203]: Removed session 2. May 14 00:37:26.070171 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 51290 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:26.071384 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:26.074644 systemd-logind[1203]: New session 3 of user core. May 14 00:37:26.075449 systemd[1]: Started session-3.scope. 
May 14 00:37:26.124206 sshd[1298]: pam_unix(sshd:session): session closed for user core May 14 00:37:26.127886 systemd[1]: Started sshd@3-10.0.0.50:22-10.0.0.1:51300.service. May 14 00:37:26.128372 systemd[1]: sshd@2-10.0.0.50:22-10.0.0.1:51290.service: Deactivated successfully. May 14 00:37:26.129058 systemd[1]: session-3.scope: Deactivated successfully. May 14 00:37:26.129588 systemd-logind[1203]: Session 3 logged out. Waiting for processes to exit. May 14 00:37:26.130406 systemd-logind[1203]: Removed session 3. May 14 00:37:26.169536 sshd[1304]: Accepted publickey for core from 10.0.0.1 port 51300 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:26.171037 sshd[1304]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:26.174268 systemd-logind[1203]: New session 4 of user core. May 14 00:37:26.175099 systemd[1]: Started session-4.scope. May 14 00:37:26.228254 sshd[1304]: pam_unix(sshd:session): session closed for user core May 14 00:37:26.230997 systemd[1]: sshd@3-10.0.0.50:22-10.0.0.1:51300.service: Deactivated successfully. May 14 00:37:26.231614 systemd[1]: session-4.scope: Deactivated successfully. May 14 00:37:26.232146 systemd-logind[1203]: Session 4 logged out. Waiting for processes to exit. May 14 00:37:26.233122 systemd[1]: Started sshd@4-10.0.0.50:22-10.0.0.1:51302.service. May 14 00:37:26.233757 systemd-logind[1203]: Removed session 4. May 14 00:37:26.274902 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 51302 ssh2: RSA SHA256:Ft5GW8W8jN9tGS/uukCO+uGXWTzIC0GL6a4nCPNTNlk May 14 00:37:26.276325 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 14 00:37:26.279595 systemd-logind[1203]: New session 5 of user core. May 14 00:37:26.280392 systemd[1]: Started session-5.scope. May 14 00:37:26.341395 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 00:37:26.341618 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 14 00:37:26.353276 systemd[1]: Starting coreos-metadata.service... May 14 00:37:26.360019 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 00:37:26.360193 systemd[1]: Finished coreos-metadata.service. May 14 00:37:26.856531 systemd[1]: Stopped kubelet.service. May 14 00:37:26.858859 systemd[1]: Starting kubelet.service... May 14 00:37:26.875944 systemd[1]: Reloading. May 14 00:37:26.942354 /usr/lib/systemd/system-generators/torcx-generator[1378]: time="2025-05-14T00:37:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 14 00:37:26.942386 /usr/lib/systemd/system-generators/torcx-generator[1378]: time="2025-05-14T00:37:26Z" level=info msg="torcx already run" May 14 00:37:27.084075 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 14 00:37:27.084096 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 14 00:37:27.099076 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
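The reload pass repeats the compatibility warnings from the earlier one: locksmithd.service still sets CPUShares= and MemoryLimit=, and docker.socket still listens on a path under /var/run/. A hedged sketch that only reports those specific patterns in unit files (directive names and the legacy path prefix are taken from the warnings above; nothing is rewritten):

```python
from pathlib import Path

# Directives and path prefix named by the systemd warnings above.
DEPRECATED_DIRECTIVES = ("CPUShares=", "MemoryLimit=")
LEGACY_RUN_PREFIX = "/var/run/"
UNIT_DIRS = ("/usr/lib/systemd/system", "/run/systemd/system", "/etc/systemd/system")

def scan_unit(path: Path) -> None:
    try:
        lines = path.read_text(errors="replace").splitlines()
    except OSError:
        return
    for lineno, raw in enumerate(lines, start=1):
        line = raw.strip()
        if line.startswith(DEPRECATED_DIRECTIVES):
            print(f"{path}:{lineno}: deprecated directive: {line}")
        if line.startswith("ListenStream=") and LEGACY_RUN_PREFIX in line:
            print(f"{path}:{lineno}: legacy /var/run path: {line}")

for unit_dir in UNIT_DIRS:
    for unit in sorted(Path(unit_dir).glob("*")):
        if unit.is_file():
            scan_unit(unit)
```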
May 14 00:37:27.163258 systemd[1]: Started kubelet.service. May 14 00:37:27.164746 systemd[1]: Stopping kubelet.service... May 14 00:37:27.165123 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:37:27.165373 systemd[1]: Stopped kubelet.service. May 14 00:37:27.166905 systemd[1]: Starting kubelet.service... May 14 00:37:27.249434 systemd[1]: Started kubelet.service. May 14 00:37:27.285509 kubelet[1424]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:37:27.285509 kubelet[1424]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:37:27.285509 kubelet[1424]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:37:27.286506 kubelet[1424]: I0514 00:37:27.286456 1424 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:37:27.738999 kubelet[1424]: I0514 00:37:27.738952 1424 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:37:27.738999 kubelet[1424]: I0514 00:37:27.738987 1424 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:37:27.739219 kubelet[1424]: I0514 00:37:27.739195 1424 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:37:27.769437 kubelet[1424]: I0514 00:37:27.769397 1424 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:37:27.778977 kubelet[1424]: I0514 00:37:27.778951 1424 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:37:27.779409 kubelet[1424]: I0514 00:37:27.779369 1424 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:37:27.780625 kubelet[1424]: I0514 00:37:27.779399 1424 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.50","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:37:27.780889 kubelet[1424]: I0514 00:37:27.780872 1424 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:37:27.780952 kubelet[1424]: I0514 00:37:27.780943 1424 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:37:27.781823 kubelet[1424]: I0514 00:37:27.781782 1424 state_mem.go:36] "Initialized new in-memory state store" May 14 00:37:27.784761 kubelet[1424]: I0514 00:37:27.784730 1424 kubelet.go:400] "Attempting to sync node with API server" May 14 00:37:27.784761 kubelet[1424]: I0514 00:37:27.784758 1424 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:37:27.785359 kubelet[1424]: I0514 00:37:27.785341 1424 kubelet.go:312] "Adding apiserver pod source" May 14 00:37:27.785441 kubelet[1424]: I0514 00:37:27.785429 1424 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:37:27.785728 kubelet[1424]: E0514 00:37:27.785690 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:27.785770 kubelet[1424]: E0514 00:37:27.785743 1424 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:27.786790 kubelet[1424]: I0514 00:37:27.786769 1424 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 14 00:37:27.787187 kubelet[1424]: I0514 00:37:27.787175 1424 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:37:27.787288 kubelet[1424]: W0514 00:37:27.787275 1424 probe.go:272] Flexvolume plugin 
directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:37:27.788825 kubelet[1424]: I0514 00:37:27.788526 1424 server.go:1264] "Started kubelet" May 14 00:37:27.790897 kubelet[1424]: I0514 00:37:27.790834 1424 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:37:27.793174 kubelet[1424]: I0514 00:37:27.793099 1424 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:37:27.794824 kubelet[1424]: I0514 00:37:27.794772 1424 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:37:27.796151 kubelet[1424]: I0514 00:37:27.796131 1424 server.go:455] "Adding debug handlers to kubelet server" May 14 00:37:27.797234 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 14 00:37:27.797665 kubelet[1424]: I0514 00:37:27.797650 1424 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:37:27.798103 kubelet[1424]: I0514 00:37:27.797909 1424 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:37:27.798103 kubelet[1424]: I0514 00:37:27.798084 1424 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:37:27.799044 kubelet[1424]: I0514 00:37:27.798999 1424 reconciler.go:26] "Reconciler: start to sync state" May 14 00:37:27.799528 kubelet[1424]: E0514 00:37:27.799290 1424 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.50.183f3db998fdcb31 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.50,UID:10.0.0.50,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.50,},FirstTimestamp:2025-05-14 00:37:27.788055345 +0000 UTC m=+0.535055801,LastTimestamp:2025-05-14 00:37:27.788055345 +0000 UTC m=+0.535055801,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.50,}" May 14 00:37:27.800311 kubelet[1424]: E0514 00:37:27.800286 1424 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:37:27.800952 kubelet[1424]: I0514 00:37:27.800928 1424 factory.go:221] Registration of the systemd container factory successfully May 14 00:37:27.801126 kubelet[1424]: I0514 00:37:27.801099 1424 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:37:27.805413 kubelet[1424]: I0514 00:37:27.805010 1424 factory.go:221] Registration of the containerd container factory successfully May 14 00:37:27.809350 kubelet[1424]: E0514 00:37:27.809311 1424 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.50\" not found" node="10.0.0.50" May 14 00:37:27.816534 kubelet[1424]: I0514 00:37:27.816509 1424 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:37:27.816534 kubelet[1424]: I0514 00:37:27.816525 1424 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:37:27.816534 kubelet[1424]: I0514 00:37:27.816545 1424 state_mem.go:36] "Initialized new in-memory state store" May 14 00:37:27.879640 kubelet[1424]: I0514 00:37:27.879608 1424 policy_none.go:49] "None policy: Start" May 14 00:37:27.880319 kubelet[1424]: I0514 00:37:27.880302 1424 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:37:27.880367 kubelet[1424]: I0514 00:37:27.880329 1424 state_mem.go:35] "Initializing new in-memory state store" May 14 00:37:27.886107 systemd[1]: Created slice kubepods.slice. May 14 00:37:27.890142 systemd[1]: Created slice kubepods-burstable.slice. May 14 00:37:27.892700 systemd[1]: Created slice kubepods-besteffort.slice. May 14 00:37:27.898711 kubelet[1424]: I0514 00:37:27.898677 1424 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.50" May 14 00:37:27.903637 kubelet[1424]: I0514 00:37:27.903587 1424 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:37:27.903836 kubelet[1424]: I0514 00:37:27.903773 1424 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:37:27.904013 kubelet[1424]: I0514 00:37:27.903995 1424 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:37:27.908225 kubelet[1424]: I0514 00:37:27.908184 1424 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.50" May 14 00:37:27.922136 kubelet[1424]: I0514 00:37:27.922101 1424 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 14 00:37:27.922435 env[1214]: time="2025-05-14T00:37:27.922394465Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:37:27.922670 kubelet[1424]: I0514 00:37:27.922573 1424 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 14 00:37:27.987515 kubelet[1424]: I0514 00:37:27.987464 1424 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:37:27.988919 kubelet[1424]: I0514 00:37:27.988886 1424 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:37:27.989047 kubelet[1424]: I0514 00:37:27.988987 1424 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:37:27.989047 kubelet[1424]: I0514 00:37:27.989010 1424 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:37:27.989098 kubelet[1424]: E0514 00:37:27.989067 1424 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 14 00:37:28.268321 sudo[1314]: pam_unix(sudo:session): session closed for user root May 14 00:37:28.270040 sshd[1311]: pam_unix(sshd:session): session closed for user core May 14 00:37:28.272435 systemd[1]: session-5.scope: Deactivated successfully. May 14 00:37:28.273005 systemd-logind[1203]: Session 5 logged out. Waiting for processes to exit. May 14 00:37:28.273116 systemd[1]: sshd@4-10.0.0.50:22-10.0.0.1:51302.service: Deactivated successfully. May 14 00:37:28.274070 systemd-logind[1203]: Removed session 5. May 14 00:37:28.745055 kubelet[1424]: I0514 00:37:28.744739 1424 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 14 00:37:28.745685 kubelet[1424]: W0514 00:37:28.745651 1424 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 14 00:37:28.745771 kubelet[1424]: W0514 00:37:28.745652 1424 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 14 00:37:28.745962 kubelet[1424]: W0514 00:37:28.745922 1424 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 14 00:37:28.786488 kubelet[1424]: I0514 00:37:28.786419 1424 apiserver.go:52] "Watching apiserver" May 14 00:37:28.786695 kubelet[1424]: E0514 00:37:28.786662 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:28.793181 kubelet[1424]: I0514 00:37:28.793137 1424 topology_manager.go:215] "Topology Admit Handler" podUID="516e491b-7fe2-4aab-b37a-2c53dab4bc62" podNamespace="kube-system" podName="cilium-vn68s" May 14 00:37:28.793304 kubelet[1424]: I0514 00:37:28.793290 1424 topology_manager.go:215] "Topology Admit Handler" podUID="036956d8-5a14-448f-a8e2-00d8c277504d" podNamespace="kube-system" podName="kube-proxy-scrl5" May 14 00:37:28.797904 systemd[1]: Created slice kubepods-besteffort-pod036956d8_5a14_448f_a8e2_00d8c277504d.slice. 
May 14 00:37:28.798871 kubelet[1424]: I0514 00:37:28.798852 1424 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:37:28.804885 kubelet[1424]: I0514 00:37:28.804840 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-cgroup\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.804885 kubelet[1424]: I0514 00:37:28.804882 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-host-proc-sys-kernel\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.804972 kubelet[1424]: I0514 00:37:28.804900 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/516e491b-7fe2-4aab-b37a-2c53dab4bc62-hubble-tls\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.804972 kubelet[1424]: I0514 00:37:28.804916 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-br7sd\" (UniqueName: \"kubernetes.io/projected/516e491b-7fe2-4aab-b37a-2c53dab4bc62-kube-api-access-br7sd\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.804972 kubelet[1424]: I0514 00:37:28.804932 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/036956d8-5a14-448f-a8e2-00d8c277504d-kube-proxy\") pod \"kube-proxy-scrl5\" (UID: \"036956d8-5a14-448f-a8e2-00d8c277504d\") " pod="kube-system/kube-proxy-scrl5" May 14 00:37:28.804972 kubelet[1424]: I0514 00:37:28.804948 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gr5d\" (UniqueName: \"kubernetes.io/projected/036956d8-5a14-448f-a8e2-00d8c277504d-kube-api-access-9gr5d\") pod \"kube-proxy-scrl5\" (UID: \"036956d8-5a14-448f-a8e2-00d8c277504d\") " pod="kube-system/kube-proxy-scrl5" May 14 00:37:28.804972 kubelet[1424]: I0514 00:37:28.804963 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-run\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.805089 kubelet[1424]: I0514 00:37:28.804976 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-hostproc\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.805089 kubelet[1424]: I0514 00:37:28.804992 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cni-path\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 
00:37:28.805089 kubelet[1424]: I0514 00:37:28.805005 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-etc-cni-netd\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.805089 kubelet[1424]: I0514 00:37:28.805029 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/516e491b-7fe2-4aab-b37a-2c53dab4bc62-clustermesh-secrets\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.805089 kubelet[1424]: I0514 00:37:28.805043 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/036956d8-5a14-448f-a8e2-00d8c277504d-lib-modules\") pod \"kube-proxy-scrl5\" (UID: \"036956d8-5a14-448f-a8e2-00d8c277504d\") " pod="kube-system/kube-proxy-scrl5" May 14 00:37:28.805089 kubelet[1424]: I0514 00:37:28.805057 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-config-path\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.805203 kubelet[1424]: I0514 00:37:28.805071 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/036956d8-5a14-448f-a8e2-00d8c277504d-xtables-lock\") pod \"kube-proxy-scrl5\" (UID: \"036956d8-5a14-448f-a8e2-00d8c277504d\") " pod="kube-system/kube-proxy-scrl5" May 14 00:37:28.805203 kubelet[1424]: I0514 00:37:28.805085 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-bpf-maps\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.805203 kubelet[1424]: I0514 00:37:28.805104 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-lib-modules\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.805203 kubelet[1424]: I0514 00:37:28.805120 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-xtables-lock\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.805203 kubelet[1424]: I0514 00:37:28.805135 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-host-proc-sys-net\") pod \"cilium-vn68s\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " pod="kube-system/cilium-vn68s" May 14 00:37:28.808367 systemd[1]: Created slice kubepods-burstable-pod516e491b_7fe2_4aab_b37a_2c53dab4bc62.slice. 
May 14 00:37:29.107991 kubelet[1424]: E0514 00:37:29.107114 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:29.108100 env[1214]: time="2025-05-14T00:37:29.108065625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scrl5,Uid:036956d8-5a14-448f-a8e2-00d8c277504d,Namespace:kube-system,Attempt:0,}" May 14 00:37:29.120261 kubelet[1424]: E0514 00:37:29.120220 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:29.121000 env[1214]: time="2025-05-14T00:37:29.120958745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vn68s,Uid:516e491b-7fe2-4aab-b37a-2c53dab4bc62,Namespace:kube-system,Attempt:0,}" May 14 00:37:29.676164 env[1214]: time="2025-05-14T00:37:29.676115865Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:29.677201 env[1214]: time="2025-05-14T00:37:29.677169025Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:29.679283 env[1214]: time="2025-05-14T00:37:29.679252385Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:29.680960 env[1214]: time="2025-05-14T00:37:29.680932185Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:29.681656 env[1214]: time="2025-05-14T00:37:29.681630985Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:29.683565 env[1214]: time="2025-05-14T00:37:29.683538425Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:29.685845 env[1214]: time="2025-05-14T00:37:29.685789225Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:29.687400 env[1214]: time="2025-05-14T00:37:29.687364905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:29.711319 env[1214]: time="2025-05-14T00:37:29.711162425Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:37:29.711319 env[1214]: time="2025-05-14T00:37:29.711202865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:37:29.711319 env[1214]: time="2025-05-14T00:37:29.711213185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:37:29.711470 env[1214]: time="2025-05-14T00:37:29.711410585Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8a28b58f6870c3ff26daf74543fed49cd4ab6bb8d046f7e80a4cae0a9f27b3a6 pid=1488 runtime=io.containerd.runc.v2 May 14 00:37:29.712141 env[1214]: time="2025-05-14T00:37:29.712082825Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:37:29.712208 env[1214]: time="2025-05-14T00:37:29.712155865Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:37:29.712208 env[1214]: time="2025-05-14T00:37:29.712183185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:37:29.712411 env[1214]: time="2025-05-14T00:37:29.712368745Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682 pid=1487 runtime=io.containerd.runc.v2 May 14 00:37:29.730251 systemd[1]: Started cri-containerd-8a28b58f6870c3ff26daf74543fed49cd4ab6bb8d046f7e80a4cae0a9f27b3a6.scope. May 14 00:37:29.734799 systemd[1]: Started cri-containerd-be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682.scope. May 14 00:37:29.774499 env[1214]: time="2025-05-14T00:37:29.774453665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-scrl5,Uid:036956d8-5a14-448f-a8e2-00d8c277504d,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a28b58f6870c3ff26daf74543fed49cd4ab6bb8d046f7e80a4cae0a9f27b3a6\"" May 14 00:37:29.776010 kubelet[1424]: E0514 00:37:29.775508 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:29.776953 env[1214]: time="2025-05-14T00:37:29.776622785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vn68s,Uid:516e491b-7fe2-4aab-b37a-2c53dab4bc62,Namespace:kube-system,Attempt:0,} returns sandbox id \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\"" May 14 00:37:29.777051 env[1214]: time="2025-05-14T00:37:29.777017825Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 14 00:37:29.777489 kubelet[1424]: E0514 00:37:29.777291 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:29.787464 kubelet[1424]: E0514 00:37:29.787377 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:29.912926 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount533004282.mount: Deactivated successfully. May 14 00:37:30.787991 kubelet[1424]: E0514 00:37:30.787945 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:30.822382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3687325253.mount: Deactivated successfully. 
May 14 00:37:31.273386 env[1214]: time="2025-05-14T00:37:31.273267105Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:31.274629 env[1214]: time="2025-05-14T00:37:31.274597025Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:31.276270 env[1214]: time="2025-05-14T00:37:31.276233145Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:31.277651 env[1214]: time="2025-05-14T00:37:31.277622465Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:31.278185 env[1214]: time="2025-05-14T00:37:31.278152905Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:b4250a9efcae16f8d20358e204a159844e2b7e854edad08aee8791774acbdaed\"" May 14 00:37:31.280044 env[1214]: time="2025-05-14T00:37:31.280014545Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 00:37:31.281162 env[1214]: time="2025-05-14T00:37:31.281121785Z" level=info msg="CreateContainer within sandbox \"8a28b58f6870c3ff26daf74543fed49cd4ab6bb8d046f7e80a4cae0a9f27b3a6\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:37:31.295736 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3420278913.mount: Deactivated successfully. May 14 00:37:31.299990 env[1214]: time="2025-05-14T00:37:31.299940465Z" level=info msg="CreateContainer within sandbox \"8a28b58f6870c3ff26daf74543fed49cd4ab6bb8d046f7e80a4cae0a9f27b3a6\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6f2438069137187a2f395266798aee0577c979bda04006f9d17ce5187c99960b\"" May 14 00:37:31.300582 env[1214]: time="2025-05-14T00:37:31.300555185Z" level=info msg="StartContainer for \"6f2438069137187a2f395266798aee0577c979bda04006f9d17ce5187c99960b\"" May 14 00:37:31.315741 systemd[1]: Started cri-containerd-6f2438069137187a2f395266798aee0577c979bda04006f9d17ce5187c99960b.scope. 
May 14 00:37:31.355826 env[1214]: time="2025-05-14T00:37:31.353649745Z" level=info msg="StartContainer for \"6f2438069137187a2f395266798aee0577c979bda04006f9d17ce5187c99960b\" returns successfully" May 14 00:37:31.788868 kubelet[1424]: E0514 00:37:31.788776 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:31.998733 kubelet[1424]: E0514 00:37:31.998696 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:32.789387 kubelet[1424]: E0514 00:37:32.789347 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:33.000474 kubelet[1424]: E0514 00:37:33.000443 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:33.789856 kubelet[1424]: E0514 00:37:33.789813 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:34.790904 kubelet[1424]: E0514 00:37:34.790848 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:35.156461 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount507927344.mount: Deactivated successfully. May 14 00:37:35.791282 kubelet[1424]: E0514 00:37:35.791230 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:36.792255 kubelet[1424]: E0514 00:37:36.792211 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:37.218964 env[1214]: time="2025-05-14T00:37:37.218850865Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:37.220533 env[1214]: time="2025-05-14T00:37:37.220493745Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:37.222078 env[1214]: time="2025-05-14T00:37:37.222045905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:37.223376 env[1214]: time="2025-05-14T00:37:37.223341185Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 14 00:37:37.225550 env[1214]: time="2025-05-14T00:37:37.225516785Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:37:37.234415 env[1214]: time="2025-05-14T00:37:37.234367265Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\"" May 14 00:37:37.234883 env[1214]: time="2025-05-14T00:37:37.234856105Z" level=info msg="StartContainer for \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\"" May 14 00:37:37.251258 systemd[1]: Started cri-containerd-91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb.scope. May 14 00:37:37.284992 env[1214]: time="2025-05-14T00:37:37.284945905Z" level=info msg="StartContainer for \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\" returns successfully" May 14 00:37:37.352473 systemd[1]: cri-containerd-91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb.scope: Deactivated successfully. May 14 00:37:37.479668 env[1214]: time="2025-05-14T00:37:37.479544385Z" level=info msg="shim disconnected" id=91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb May 14 00:37:37.479906 env[1214]: time="2025-05-14T00:37:37.479884065Z" level=warning msg="cleaning up after shim disconnected" id=91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb namespace=k8s.io May 14 00:37:37.479971 env[1214]: time="2025-05-14T00:37:37.479957305Z" level=info msg="cleaning up dead shim" May 14 00:37:37.486746 env[1214]: time="2025-05-14T00:37:37.486714025Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:37:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1767 runtime=io.containerd.runc.v2\n" May 14 00:37:37.792681 kubelet[1424]: E0514 00:37:37.792569 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:38.008268 kubelet[1424]: E0514 00:37:38.008184 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:38.010000 env[1214]: time="2025-05-14T00:37:38.009956505Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:37:38.020823 env[1214]: time="2025-05-14T00:37:38.020768585Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\"" May 14 00:37:38.021350 env[1214]: time="2025-05-14T00:37:38.021315905Z" level=info msg="StartContainer for \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\"" May 14 00:37:38.021636 kubelet[1424]: I0514 00:37:38.021580 1424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-scrl5" podStartSLOduration=9.518933105 podStartE2EDuration="11.021565865s" podCreationTimestamp="2025-05-14 00:37:27 +0000 UTC" firstStartedPulling="2025-05-14 00:37:29.776616545 +0000 UTC m=+2.523617041" lastFinishedPulling="2025-05-14 00:37:31.279249345 +0000 UTC m=+4.026249801" observedRunningTime="2025-05-14 00:37:32.006371905 +0000 UTC m=+4.753372401" watchObservedRunningTime="2025-05-14 00:37:38.021565865 +0000 UTC m=+10.768566361" May 14 00:37:38.034104 systemd[1]: Started cri-containerd-b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e.scope. 
May 14 00:37:38.061000 env[1214]: time="2025-05-14T00:37:38.060898145Z" level=info msg="StartContainer for \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\" returns successfully" May 14 00:37:38.074203 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 14 00:37:38.074392 systemd[1]: Stopped systemd-sysctl.service. May 14 00:37:38.074557 systemd[1]: Stopping systemd-sysctl.service... May 14 00:37:38.076076 systemd[1]: Starting systemd-sysctl.service... May 14 00:37:38.076309 systemd[1]: cri-containerd-b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e.scope: Deactivated successfully. May 14 00:37:38.083603 systemd[1]: Finished systemd-sysctl.service. May 14 00:37:38.096644 env[1214]: time="2025-05-14T00:37:38.096594585Z" level=info msg="shim disconnected" id=b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e May 14 00:37:38.096644 env[1214]: time="2025-05-14T00:37:38.096643185Z" level=warning msg="cleaning up after shim disconnected" id=b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e namespace=k8s.io May 14 00:37:38.096854 env[1214]: time="2025-05-14T00:37:38.096654985Z" level=info msg="cleaning up dead shim" May 14 00:37:38.103608 env[1214]: time="2025-05-14T00:37:38.103556305Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:37:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1841 runtime=io.containerd.runc.v2\n" May 14 00:37:38.231736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb-rootfs.mount: Deactivated successfully. May 14 00:37:38.793581 kubelet[1424]: E0514 00:37:38.793521 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:39.011549 kubelet[1424]: E0514 00:37:39.011518 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:39.013370 env[1214]: time="2025-05-14T00:37:39.013329265Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:37:39.025156 env[1214]: time="2025-05-14T00:37:39.025106185Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\"" May 14 00:37:39.025901 env[1214]: time="2025-05-14T00:37:39.025867905Z" level=info msg="StartContainer for \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\"" May 14 00:37:39.041900 systemd[1]: Started cri-containerd-a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493.scope. May 14 00:37:39.075190 env[1214]: time="2025-05-14T00:37:39.074918905Z" level=info msg="StartContainer for \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\" returns successfully" May 14 00:37:39.086693 systemd[1]: cri-containerd-a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493.scope: Deactivated successfully. 
May 14 00:37:39.105360 env[1214]: time="2025-05-14T00:37:39.105318025Z" level=info msg="shim disconnected" id=a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493 May 14 00:37:39.105564 env[1214]: time="2025-05-14T00:37:39.105543505Z" level=warning msg="cleaning up after shim disconnected" id=a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493 namespace=k8s.io May 14 00:37:39.105648 env[1214]: time="2025-05-14T00:37:39.105631465Z" level=info msg="cleaning up dead shim" May 14 00:37:39.112353 env[1214]: time="2025-05-14T00:37:39.112318985Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:37:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1899 runtime=io.containerd.runc.v2\n" May 14 00:37:39.231394 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493-rootfs.mount: Deactivated successfully. May 14 00:37:39.794118 kubelet[1424]: E0514 00:37:39.794057 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:40.015304 kubelet[1424]: E0514 00:37:40.015255 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:40.017356 env[1214]: time="2025-05-14T00:37:40.017317505Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:37:40.026505 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258808075.mount: Deactivated successfully. May 14 00:37:40.031168 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634822444.mount: Deactivated successfully. May 14 00:37:40.033879 env[1214]: time="2025-05-14T00:37:40.033839225Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\"" May 14 00:37:40.034581 env[1214]: time="2025-05-14T00:37:40.034553585Z" level=info msg="StartContainer for \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\"" May 14 00:37:40.047715 systemd[1]: Started cri-containerd-b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d.scope. May 14 00:37:40.080432 systemd[1]: cri-containerd-b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d.scope: Deactivated successfully. 
May 14 00:37:40.081715 env[1214]: time="2025-05-14T00:37:40.081633465Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod516e491b_7fe2_4aab_b37a_2c53dab4bc62.slice/cri-containerd-b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d.scope/memory.events\": no such file or directory" May 14 00:37:40.083422 env[1214]: time="2025-05-14T00:37:40.083376705Z" level=info msg="StartContainer for \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\" returns successfully" May 14 00:37:40.100430 env[1214]: time="2025-05-14T00:37:40.100386025Z" level=info msg="shim disconnected" id=b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d May 14 00:37:40.100430 env[1214]: time="2025-05-14T00:37:40.100430665Z" level=warning msg="cleaning up after shim disconnected" id=b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d namespace=k8s.io May 14 00:37:40.100610 env[1214]: time="2025-05-14T00:37:40.100440225Z" level=info msg="cleaning up dead shim" May 14 00:37:40.106436 env[1214]: time="2025-05-14T00:37:40.106405905Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:37:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1954 runtime=io.containerd.runc.v2\n" May 14 00:37:40.795178 kubelet[1424]: E0514 00:37:40.795140 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:41.019068 kubelet[1424]: E0514 00:37:41.019035 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:41.021481 env[1214]: time="2025-05-14T00:37:41.021434905Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 14 00:37:41.032960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2712736718.mount: Deactivated successfully. May 14 00:37:41.035870 env[1214]: time="2025-05-14T00:37:41.035829785Z" level=info msg="CreateContainer within sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\"" May 14 00:37:41.036352 env[1214]: time="2025-05-14T00:37:41.036322625Z" level=info msg="StartContainer for \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\"" May 14 00:37:41.050624 systemd[1]: Started cri-containerd-7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea.scope. May 14 00:37:41.092368 env[1214]: time="2025-05-14T00:37:41.092313145Z" level=info msg="StartContainer for \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\" returns successfully" May 14 00:37:41.233088 kubelet[1424]: I0514 00:37:41.233055 1424 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 00:37:41.335867 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 14 00:37:41.571886 kernel: Initializing XFRM netlink socket May 14 00:37:41.573902 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 14 00:37:41.795996 kubelet[1424]: E0514 00:37:41.795952 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:42.023479 kubelet[1424]: E0514 00:37:42.022793 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:42.036298 kubelet[1424]: I0514 00:37:42.036248 1424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vn68s" podStartSLOduration=7.589982225 podStartE2EDuration="15.036232825s" podCreationTimestamp="2025-05-14 00:37:27 +0000 UTC" firstStartedPulling="2025-05-14 00:37:29.777911305 +0000 UTC m=+2.524911801" lastFinishedPulling="2025-05-14 00:37:37.224161945 +0000 UTC m=+9.971162401" observedRunningTime="2025-05-14 00:37:42.035661225 +0000 UTC m=+14.782661721" watchObservedRunningTime="2025-05-14 00:37:42.036232825 +0000 UTC m=+14.783233321" May 14 00:37:42.796437 kubelet[1424]: E0514 00:37:42.796380 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:43.024028 kubelet[1424]: E0514 00:37:43.023991 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:43.177545 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 14 00:37:43.177646 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 14 00:37:43.176182 systemd-networkd[1044]: cilium_host: Link UP May 14 00:37:43.176278 systemd-networkd[1044]: cilium_net: Link UP May 14 00:37:43.176860 systemd-networkd[1044]: cilium_net: Gained carrier May 14 00:37:43.177460 systemd-networkd[1044]: cilium_host: Gained carrier May 14 00:37:43.257265 systemd-networkd[1044]: cilium_vxlan: Link UP May 14 00:37:43.257271 systemd-networkd[1044]: cilium_vxlan: Gained carrier May 14 00:37:43.376951 systemd-networkd[1044]: cilium_host: Gained IPv6LL May 14 00:37:43.558849 kernel: NET: Registered PF_ALG protocol family May 14 00:37:43.664971 systemd-networkd[1044]: cilium_net: Gained IPv6LL May 14 00:37:43.796891 kubelet[1424]: E0514 00:37:43.796842 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:44.025461 kubelet[1424]: E0514 00:37:44.025376 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:44.116969 systemd-networkd[1044]: lxc_health: Link UP May 14 00:37:44.127717 systemd-networkd[1044]: lxc_health: Gained carrier May 14 00:37:44.127886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 14 00:37:44.163364 kubelet[1424]: I0514 00:37:44.163303 1424 topology_manager.go:215] "Topology Admit Handler" podUID="ca94285d-ed55-4bb6-b9e2-1d6b60919c64" podNamespace="default" podName="nginx-deployment-85f456d6dd-jkxbd" May 14 00:37:44.167955 systemd[1]: Created slice kubepods-besteffort-podca94285d_ed55_4bb6_b9e2_1d6b60919c64.slice. 
May 14 00:37:44.290456 kubelet[1424]: I0514 00:37:44.290311 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj8xl\" (UniqueName: \"kubernetes.io/projected/ca94285d-ed55-4bb6-b9e2-1d6b60919c64-kube-api-access-tj8xl\") pod \"nginx-deployment-85f456d6dd-jkxbd\" (UID: \"ca94285d-ed55-4bb6-b9e2-1d6b60919c64\") " pod="default/nginx-deployment-85f456d6dd-jkxbd" May 14 00:37:44.470258 env[1214]: time="2025-05-14T00:37:44.470204865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jkxbd,Uid:ca94285d-ed55-4bb6-b9e2-1d6b60919c64,Namespace:default,Attempt:0,}" May 14 00:37:44.505997 systemd-networkd[1044]: lxc30c05d0694e3: Link UP May 14 00:37:44.512839 kernel: eth0: renamed from tmp1b9c8 May 14 00:37:44.522648 systemd-networkd[1044]: lxc30c05d0694e3: Gained carrier May 14 00:37:44.522823 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:37:44.522863 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc30c05d0694e3: link becomes ready May 14 00:37:44.544901 systemd-networkd[1044]: cilium_vxlan: Gained IPv6LL May 14 00:37:44.797371 kubelet[1424]: E0514 00:37:44.797249 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:45.332691 kubelet[1424]: E0514 00:37:45.332656 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:45.632975 systemd-networkd[1044]: lxc_health: Gained IPv6LL May 14 00:37:45.797642 kubelet[1424]: E0514 00:37:45.797591 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:46.209035 systemd-networkd[1044]: lxc30c05d0694e3: Gained IPv6LL May 14 00:37:46.798383 kubelet[1424]: E0514 00:37:46.798320 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:47.785341 kubelet[1424]: E0514 00:37:47.785290 1424 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:47.798633 kubelet[1424]: E0514 00:37:47.798600 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:47.994744 env[1214]: time="2025-05-14T00:37:47.994673865Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:37:47.995122 env[1214]: time="2025-05-14T00:37:47.995091625Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:37:47.995222 env[1214]: time="2025-05-14T00:37:47.995189905Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:37:47.995505 env[1214]: time="2025-05-14T00:37:47.995457745Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1b9c82bb57279b88caad0507592fbbf445153d2655178a1c159b59d6904f9b06 pid=2509 runtime=io.containerd.runc.v2 May 14 00:37:48.007063 systemd[1]: Started cri-containerd-1b9c82bb57279b88caad0507592fbbf445153d2655178a1c159b59d6904f9b06.scope. 
May 14 00:37:48.061455 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:37:48.076365 env[1214]: time="2025-05-14T00:37:48.076325425Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-jkxbd,Uid:ca94285d-ed55-4bb6-b9e2-1d6b60919c64,Namespace:default,Attempt:0,} returns sandbox id \"1b9c82bb57279b88caad0507592fbbf445153d2655178a1c159b59d6904f9b06\"" May 14 00:37:48.077888 env[1214]: time="2025-05-14T00:37:48.077857745Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 14 00:37:48.799739 kubelet[1424]: E0514 00:37:48.799692 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:49.800445 kubelet[1424]: E0514 00:37:49.800393 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:50.557852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount90990183.mount: Deactivated successfully. May 14 00:37:50.800893 kubelet[1424]: E0514 00:37:50.800843 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:51.765767 env[1214]: time="2025-05-14T00:37:51.765720865Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:51.767220 env[1214]: time="2025-05-14T00:37:51.767183265Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:51.768964 env[1214]: time="2025-05-14T00:37:51.768935545Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:51.770598 env[1214]: time="2025-05-14T00:37:51.770572625Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:37:51.772147 env[1214]: time="2025-05-14T00:37:51.772109825Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 14 00:37:51.773899 env[1214]: time="2025-05-14T00:37:51.773869065Z" level=info msg="CreateContainer within sandbox \"1b9c82bb57279b88caad0507592fbbf445153d2655178a1c159b59d6904f9b06\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 14 00:37:51.782436 env[1214]: time="2025-05-14T00:37:51.782405665Z" level=info msg="CreateContainer within sandbox \"1b9c82bb57279b88caad0507592fbbf445153d2655178a1c159b59d6904f9b06\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"33064dadeb5c079d79770ff071ce8e03f4580dd8ecb6f6c045c8bf79af360d5c\"" May 14 00:37:51.783011 env[1214]: time="2025-05-14T00:37:51.782902585Z" level=info msg="StartContainer for \"33064dadeb5c079d79770ff071ce8e03f4580dd8ecb6f6c045c8bf79af360d5c\"" May 14 00:37:51.801517 kubelet[1424]: E0514 00:37:51.801484 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:51.802021 systemd[1]: 
run-containerd-runc-k8s.io-33064dadeb5c079d79770ff071ce8e03f4580dd8ecb6f6c045c8bf79af360d5c-runc.Tb9BKs.mount: Deactivated successfully. May 14 00:37:51.804314 systemd[1]: Started cri-containerd-33064dadeb5c079d79770ff071ce8e03f4580dd8ecb6f6c045c8bf79af360d5c.scope. May 14 00:37:51.836519 env[1214]: time="2025-05-14T00:37:51.836467505Z" level=info msg="StartContainer for \"33064dadeb5c079d79770ff071ce8e03f4580dd8ecb6f6c045c8bf79af360d5c\" returns successfully" May 14 00:37:52.047922 kubelet[1424]: I0514 00:37:52.047765 1424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-jkxbd" podStartSLOduration=4.352313065 podStartE2EDuration="8.047742665s" podCreationTimestamp="2025-05-14 00:37:44 +0000 UTC" firstStartedPulling="2025-05-14 00:37:48.077353385 +0000 UTC m=+20.824353881" lastFinishedPulling="2025-05-14 00:37:51.772782985 +0000 UTC m=+24.519783481" observedRunningTime="2025-05-14 00:37:52.047509385 +0000 UTC m=+24.794509881" watchObservedRunningTime="2025-05-14 00:37:52.047742665 +0000 UTC m=+24.794743161" May 14 00:37:52.802237 kubelet[1424]: E0514 00:37:52.802191 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:53.199454 kubelet[1424]: I0514 00:37:53.199313 1424 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 14 00:37:53.200169 kubelet[1424]: E0514 00:37:53.200116 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:53.803282 kubelet[1424]: E0514 00:37:53.803242 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:54.043086 kubelet[1424]: E0514 00:37:54.043053 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:37:54.803504 kubelet[1424]: E0514 00:37:54.803465 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:55.804510 kubelet[1424]: E0514 00:37:55.804464 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:56.174571 kubelet[1424]: I0514 00:37:56.174443 1424 topology_manager.go:215] "Topology Admit Handler" podUID="0936b493-1cc3-4f1a-9ca1-6e2bc9672b9c" podNamespace="default" podName="nfs-server-provisioner-0" May 14 00:37:56.179617 systemd[1]: Created slice kubepods-besteffort-pod0936b493_1cc3_4f1a_9ca1_6e2bc9672b9c.slice. 
May 14 00:37:56.356115 kubelet[1424]: I0514 00:37:56.356061 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0936b493-1cc3-4f1a-9ca1-6e2bc9672b9c-data\") pod \"nfs-server-provisioner-0\" (UID: \"0936b493-1cc3-4f1a-9ca1-6e2bc9672b9c\") " pod="default/nfs-server-provisioner-0" May 14 00:37:56.356115 kubelet[1424]: I0514 00:37:56.356107 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wj6n8\" (UniqueName: \"kubernetes.io/projected/0936b493-1cc3-4f1a-9ca1-6e2bc9672b9c-kube-api-access-wj6n8\") pod \"nfs-server-provisioner-0\" (UID: \"0936b493-1cc3-4f1a-9ca1-6e2bc9672b9c\") " pod="default/nfs-server-provisioner-0" May 14 00:37:56.482461 env[1214]: time="2025-05-14T00:37:56.482371261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0936b493-1cc3-4f1a-9ca1-6e2bc9672b9c,Namespace:default,Attempt:0,}" May 14 00:37:56.511328 systemd-networkd[1044]: lxc511347dcf091: Link UP May 14 00:37:56.519843 kernel: eth0: renamed from tmp90e3d May 14 00:37:56.529639 systemd-networkd[1044]: lxc511347dcf091: Gained carrier May 14 00:37:56.529848 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:37:56.529886 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc511347dcf091: link becomes ready May 14 00:37:56.702326 env[1214]: time="2025-05-14T00:37:56.702250417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:37:56.702326 env[1214]: time="2025-05-14T00:37:56.702292296Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:37:56.702326 env[1214]: time="2025-05-14T00:37:56.702303216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:37:56.702518 env[1214]: time="2025-05-14T00:37:56.702418136Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/90e3daae08f903be07ab9ae352c022b61be68112e57218161c8923143ed3e632 pid=2639 runtime=io.containerd.runc.v2 May 14 00:37:56.714526 systemd[1]: Started cri-containerd-90e3daae08f903be07ab9ae352c022b61be68112e57218161c8923143ed3e632.scope. 
May 14 00:37:56.739771 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:37:56.756332 env[1214]: time="2025-05-14T00:37:56.756288469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0936b493-1cc3-4f1a-9ca1-6e2bc9672b9c,Namespace:default,Attempt:0,} returns sandbox id \"90e3daae08f903be07ab9ae352c022b61be68112e57218161c8923143ed3e632\"" May 14 00:37:56.757860 env[1214]: time="2025-05-14T00:37:56.757822303Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 14 00:37:56.804900 kubelet[1424]: E0514 00:37:56.804862 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:57.805987 kubelet[1424]: E0514 00:37:57.805944 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:58.368948 systemd-networkd[1044]: lxc511347dcf091: Gained IPv6LL May 14 00:37:58.807016 kubelet[1424]: E0514 00:37:58.806910 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:37:59.053521 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2761994016.mount: Deactivated successfully. May 14 00:37:59.807194 kubelet[1424]: E0514 00:37:59.807149 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:00.759489 env[1214]: time="2025-05-14T00:38:00.759433008Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:00.761769 env[1214]: time="2025-05-14T00:38:00.761744881Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:00.763879 env[1214]: time="2025-05-14T00:38:00.763849636Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:00.765730 env[1214]: time="2025-05-14T00:38:00.765674351Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:00.766465 env[1214]: time="2025-05-14T00:38:00.766417589Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 14 00:38:00.769437 env[1214]: time="2025-05-14T00:38:00.769396781Z" level=info msg="CreateContainer within sandbox \"90e3daae08f903be07ab9ae352c022b61be68112e57218161c8923143ed3e632\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 14 00:38:00.779985 env[1214]: time="2025-05-14T00:38:00.779953112Z" level=info msg="CreateContainer within sandbox \"90e3daae08f903be07ab9ae352c022b61be68112e57218161c8923143ed3e632\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"e0f587142de1ca19caf53ed23443936664b3b3bf9cad0760801b623d94a49ca8\"" May 14 00:38:00.780571 env[1214]: 
time="2025-05-14T00:38:00.780543271Z" level=info msg="StartContainer for \"e0f587142de1ca19caf53ed23443936664b3b3bf9cad0760801b623d94a49ca8\"" May 14 00:38:00.801725 systemd[1]: Started cri-containerd-e0f587142de1ca19caf53ed23443936664b3b3bf9cad0760801b623d94a49ca8.scope. May 14 00:38:00.807666 kubelet[1424]: E0514 00:38:00.807488 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:00.854280 env[1214]: time="2025-05-14T00:38:00.854237153Z" level=info msg="StartContainer for \"e0f587142de1ca19caf53ed23443936664b3b3bf9cad0760801b623d94a49ca8\" returns successfully" May 14 00:38:01.065862 kubelet[1424]: I0514 00:38:01.065733 1424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.055557555 podStartE2EDuration="5.065717716s" podCreationTimestamp="2025-05-14 00:37:56 +0000 UTC" firstStartedPulling="2025-05-14 00:37:56.757548944 +0000 UTC m=+29.504549440" lastFinishedPulling="2025-05-14 00:38:00.767709105 +0000 UTC m=+33.514709601" observedRunningTime="2025-05-14 00:38:01.065417837 +0000 UTC m=+33.812418333" watchObservedRunningTime="2025-05-14 00:38:01.065717716 +0000 UTC m=+33.812718212" May 14 00:38:01.777693 systemd[1]: run-containerd-runc-k8s.io-e0f587142de1ca19caf53ed23443936664b3b3bf9cad0760801b623d94a49ca8-runc.nhJYpD.mount: Deactivated successfully. May 14 00:38:01.808106 kubelet[1424]: E0514 00:38:01.808061 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:02.808893 kubelet[1424]: E0514 00:38:02.808853 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:03.809940 kubelet[1424]: E0514 00:38:03.809899 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:04.810929 kubelet[1424]: E0514 00:38:04.810889 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:05.518230 update_engine[1207]: I0514 00:38:05.518168 1207 update_attempter.cc:509] Updating boot flags... 
May 14 00:38:05.811606 kubelet[1424]: E0514 00:38:05.811460 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:06.812229 kubelet[1424]: E0514 00:38:06.812189 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:07.785276 kubelet[1424]: E0514 00:38:07.785225 1424 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:07.812511 kubelet[1424]: E0514 00:38:07.812487 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:08.813827 kubelet[1424]: E0514 00:38:08.813769 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:09.814401 kubelet[1424]: E0514 00:38:09.814349 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:10.814595 kubelet[1424]: E0514 00:38:10.814557 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:11.107183 kubelet[1424]: I0514 00:38:11.107074 1424 topology_manager.go:215] "Topology Admit Handler" podUID="34ef8eff-ff9d-468c-b0ca-3a2af3965ad0" podNamespace="default" podName="test-pod-1" May 14 00:38:11.111569 systemd[1]: Created slice kubepods-besteffort-pod34ef8eff_ff9d_468c_b0ca_3a2af3965ad0.slice. May 14 00:38:11.234756 kubelet[1424]: I0514 00:38:11.234722 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-2537ef5b-ba02-43b0-b3f6-093abb83ba60\" (UniqueName: \"kubernetes.io/nfs/34ef8eff-ff9d-468c-b0ca-3a2af3965ad0-pvc-2537ef5b-ba02-43b0-b3f6-093abb83ba60\") pod \"test-pod-1\" (UID: \"34ef8eff-ff9d-468c-b0ca-3a2af3965ad0\") " pod="default/test-pod-1" May 14 00:38:11.234756 kubelet[1424]: I0514 00:38:11.234761 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgqbj\" (UniqueName: \"kubernetes.io/projected/34ef8eff-ff9d-468c-b0ca-3a2af3965ad0-kube-api-access-kgqbj\") pod \"test-pod-1\" (UID: \"34ef8eff-ff9d-468c-b0ca-3a2af3965ad0\") " pod="default/test-pod-1" May 14 00:38:11.356828 kernel: FS-Cache: Loaded May 14 00:38:11.385264 kernel: RPC: Registered named UNIX socket transport module. May 14 00:38:11.385439 kernel: RPC: Registered udp transport module. May 14 00:38:11.385505 kernel: RPC: Registered tcp transport module. May 14 00:38:11.385548 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 14 00:38:11.425842 kernel: FS-Cache: Netfs 'nfs' registered for caching May 14 00:38:11.554169 kernel: NFS: Registering the id_resolver key type May 14 00:38:11.554282 kernel: Key type id_resolver registered May 14 00:38:11.554334 kernel: Key type id_legacy registered May 14 00:38:11.580511 nfsidmap[2775]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 14 00:38:11.585831 nfsidmap[2778]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 14 00:38:11.714255 env[1214]: time="2025-05-14T00:38:11.714133100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:34ef8eff-ff9d-468c-b0ca-3a2af3965ad0,Namespace:default,Attempt:0,}" May 14 00:38:11.739984 systemd-networkd[1044]: lxc21fdd8cf3a77: Link UP May 14 00:38:11.750829 kernel: eth0: renamed from tmp5ee52 May 14 00:38:11.758683 systemd-networkd[1044]: lxc21fdd8cf3a77: Gained carrier May 14 00:38:11.758877 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 14 00:38:11.758912 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc21fdd8cf3a77: link becomes ready May 14 00:38:11.815407 kubelet[1424]: E0514 00:38:11.815354 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:11.928977 env[1214]: time="2025-05-14T00:38:11.928913417Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:38:11.928977 env[1214]: time="2025-05-14T00:38:11.928954417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:38:11.928977 env[1214]: time="2025-05-14T00:38:11.928964657Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:38:11.929265 env[1214]: time="2025-05-14T00:38:11.929230296Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5ee525c50778d8909ff31a6c8c260fbfe040d1957c559cc9c3134cb765cd56f7 pid=2815 runtime=io.containerd.runc.v2 May 14 00:38:11.939490 systemd[1]: Started cri-containerd-5ee525c50778d8909ff31a6c8c260fbfe040d1957c559cc9c3134cb765cd56f7.scope. 
May 14 00:38:11.968505 systemd-resolved[1157]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 14 00:38:11.985269 env[1214]: time="2025-05-14T00:38:11.985226222Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:34ef8eff-ff9d-468c-b0ca-3a2af3965ad0,Namespace:default,Attempt:0,} returns sandbox id \"5ee525c50778d8909ff31a6c8c260fbfe040d1957c559cc9c3134cb765cd56f7\"" May 14 00:38:11.987067 env[1214]: time="2025-05-14T00:38:11.987036940Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 14 00:38:12.238538 env[1214]: time="2025-05-14T00:38:12.238192188Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:12.240324 env[1214]: time="2025-05-14T00:38:12.240281905Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:12.242010 env[1214]: time="2025-05-14T00:38:12.241984263Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:12.244567 env[1214]: time="2025-05-14T00:38:12.244536100Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:12.245276 env[1214]: time="2025-05-14T00:38:12.245241699Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 14 00:38:12.247597 env[1214]: time="2025-05-14T00:38:12.247556416Z" level=info msg="CreateContainer within sandbox \"5ee525c50778d8909ff31a6c8c260fbfe040d1957c559cc9c3134cb765cd56f7\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 14 00:38:12.257310 env[1214]: time="2025-05-14T00:38:12.257268404Z" level=info msg="CreateContainer within sandbox \"5ee525c50778d8909ff31a6c8c260fbfe040d1957c559cc9c3134cb765cd56f7\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"9b5211fdeb3a3d020f6fc97f189dd53687665bee3c19b14c747fb6814f373372\"" May 14 00:38:12.257685 env[1214]: time="2025-05-14T00:38:12.257653924Z" level=info msg="StartContainer for \"9b5211fdeb3a3d020f6fc97f189dd53687665bee3c19b14c747fb6814f373372\"" May 14 00:38:12.271068 systemd[1]: Started cri-containerd-9b5211fdeb3a3d020f6fc97f189dd53687665bee3c19b14c747fb6814f373372.scope. 
May 14 00:38:12.308475 env[1214]: time="2025-05-14T00:38:12.308438581Z" level=info msg="StartContainer for \"9b5211fdeb3a3d020f6fc97f189dd53687665bee3c19b14c747fb6814f373372\" returns successfully" May 14 00:38:12.815920 kubelet[1424]: E0514 00:38:12.815873 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:13.084325 kubelet[1424]: I0514 00:38:13.084018 1424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.82437407 podStartE2EDuration="17.084000547s" podCreationTimestamp="2025-05-14 00:37:56 +0000 UTC" firstStartedPulling="2025-05-14 00:38:11.986642861 +0000 UTC m=+44.733643357" lastFinishedPulling="2025-05-14 00:38:12.246269338 +0000 UTC m=+44.993269834" observedRunningTime="2025-05-14 00:38:13.083974147 +0000 UTC m=+45.830974643" watchObservedRunningTime="2025-05-14 00:38:13.084000547 +0000 UTC m=+45.831001043" May 14 00:38:13.792971 systemd-networkd[1044]: lxc21fdd8cf3a77: Gained IPv6LL May 14 00:38:13.816040 kubelet[1424]: E0514 00:38:13.815983 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:14.816897 kubelet[1424]: E0514 00:38:14.816846 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:15.817816 kubelet[1424]: E0514 00:38:15.817766 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:16.818897 kubelet[1424]: E0514 00:38:16.818854 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:17.819668 kubelet[1424]: E0514 00:38:17.819624 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:18.820394 kubelet[1424]: E0514 00:38:18.820361 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:19.181503 systemd[1]: run-containerd-runc-k8s.io-7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea-runc.2PZuqo.mount: Deactivated successfully. May 14 00:38:19.216657 env[1214]: time="2025-05-14T00:38:19.216588750Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 00:38:19.221839 env[1214]: time="2025-05-14T00:38:19.221793305Z" level=info msg="StopContainer for \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\" with timeout 2 (s)" May 14 00:38:19.222073 env[1214]: time="2025-05-14T00:38:19.222044625Z" level=info msg="Stop container \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\" with signal terminated" May 14 00:38:19.227892 systemd-networkd[1044]: lxc_health: Link DOWN May 14 00:38:19.227897 systemd-networkd[1044]: lxc_health: Lost carrier May 14 00:38:19.271402 systemd[1]: cri-containerd-7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea.scope: Deactivated successfully. May 14 00:38:19.271730 systemd[1]: cri-containerd-7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea.scope: Consumed 6.314s CPU time. 
May 14 00:38:19.288038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea-rootfs.mount: Deactivated successfully. May 14 00:38:19.298933 env[1214]: time="2025-05-14T00:38:19.298887285Z" level=info msg="shim disconnected" id=7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea May 14 00:38:19.298933 env[1214]: time="2025-05-14T00:38:19.298929765Z" level=warning msg="cleaning up after shim disconnected" id=7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea namespace=k8s.io May 14 00:38:19.298933 env[1214]: time="2025-05-14T00:38:19.298939765Z" level=info msg="cleaning up dead shim" May 14 00:38:19.305720 env[1214]: time="2025-05-14T00:38:19.305675839Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2944 runtime=io.containerd.runc.v2\n" May 14 00:38:19.307871 env[1214]: time="2025-05-14T00:38:19.307829958Z" level=info msg="StopContainer for \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\" returns successfully" May 14 00:38:19.308476 env[1214]: time="2025-05-14T00:38:19.308449037Z" level=info msg="StopPodSandbox for \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\"" May 14 00:38:19.308533 env[1214]: time="2025-05-14T00:38:19.308514517Z" level=info msg="Container to stop \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.308567 env[1214]: time="2025-05-14T00:38:19.308533237Z" level=info msg="Container to stop \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.308567 env[1214]: time="2025-05-14T00:38:19.308548277Z" level=info msg="Container to stop \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.308567 env[1214]: time="2025-05-14T00:38:19.308560237Z" level=info msg="Container to stop \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.308663 env[1214]: time="2025-05-14T00:38:19.308571437Z" level=info msg="Container to stop \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 00:38:19.311204 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682-shm.mount: Deactivated successfully. May 14 00:38:19.316627 systemd[1]: cri-containerd-be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682.scope: Deactivated successfully. May 14 00:38:19.330534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682-rootfs.mount: Deactivated successfully. 
May 14 00:38:19.333892 env[1214]: time="2025-05-14T00:38:19.333844617Z" level=info msg="shim disconnected" id=be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682 May 14 00:38:19.333892 env[1214]: time="2025-05-14T00:38:19.333892977Z" level=warning msg="cleaning up after shim disconnected" id=be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682 namespace=k8s.io May 14 00:38:19.334032 env[1214]: time="2025-05-14T00:38:19.333901897Z" level=info msg="cleaning up dead shim" May 14 00:38:19.340873 env[1214]: time="2025-05-14T00:38:19.340828572Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2974 runtime=io.containerd.runc.v2\n" May 14 00:38:19.341178 env[1214]: time="2025-05-14T00:38:19.341151091Z" level=info msg="TearDown network for sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" successfully" May 14 00:38:19.341213 env[1214]: time="2025-05-14T00:38:19.341177171Z" level=info msg="StopPodSandbox for \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" returns successfully" May 14 00:38:19.480620 kubelet[1424]: I0514 00:38:19.479874 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-etc-cni-netd\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480620 kubelet[1424]: I0514 00:38:19.479933 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-config-path\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480620 kubelet[1424]: I0514 00:38:19.479954 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-lib-modules\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480620 kubelet[1424]: I0514 00:38:19.479972 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-xtables-lock\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480620 kubelet[1424]: I0514 00:38:19.479988 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-cgroup\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480620 kubelet[1424]: I0514 00:38:19.480005 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-host-proc-sys-kernel\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480921 kubelet[1424]: I0514 00:38:19.480021 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-bpf-maps\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: 
\"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480921 kubelet[1424]: I0514 00:38:19.480040 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/516e491b-7fe2-4aab-b37a-2c53dab4bc62-hubble-tls\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480921 kubelet[1424]: I0514 00:38:19.480053 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-run\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480921 kubelet[1424]: I0514 00:38:19.480066 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-hostproc\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480921 kubelet[1424]: I0514 00:38:19.480079 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cni-path\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.480921 kubelet[1424]: I0514 00:38:19.480099 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/516e491b-7fe2-4aab-b37a-2c53dab4bc62-clustermesh-secrets\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.481061 kubelet[1424]: I0514 00:38:19.480115 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-host-proc-sys-net\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.481061 kubelet[1424]: I0514 00:38:19.480130 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-br7sd\" (UniqueName: \"kubernetes.io/projected/516e491b-7fe2-4aab-b37a-2c53dab4bc62-kube-api-access-br7sd\") pod \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\" (UID: \"516e491b-7fe2-4aab-b37a-2c53dab4bc62\") " May 14 00:38:19.481061 kubelet[1424]: I0514 00:38:19.479855 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.481061 kubelet[1424]: I0514 00:38:19.480439 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.481061 kubelet[1424]: I0514 00:38:19.480491 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.481171 kubelet[1424]: I0514 00:38:19.480489 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.481171 kubelet[1424]: I0514 00:38:19.480512 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.481171 kubelet[1424]: I0514 00:38:19.480530 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.481171 kubelet[1424]: I0514 00:38:19.480546 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-hostproc" (OuterVolumeSpecName: "hostproc") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.481171 kubelet[1424]: I0514 00:38:19.481086 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.481285 kubelet[1424]: I0514 00:38:19.481118 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cni-path" (OuterVolumeSpecName: "cni-path") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.481285 kubelet[1424]: I0514 00:38:19.481146 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:19.482912 kubelet[1424]: I0514 00:38:19.482871 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:38:19.483696 kubelet[1424]: I0514 00:38:19.483661 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516e491b-7fe2-4aab-b37a-2c53dab4bc62-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:38:19.483818 kubelet[1424]: I0514 00:38:19.483781 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/516e491b-7fe2-4aab-b37a-2c53dab4bc62-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:38:19.483985 kubelet[1424]: I0514 00:38:19.483946 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/516e491b-7fe2-4aab-b37a-2c53dab4bc62-kube-api-access-br7sd" (OuterVolumeSpecName: "kube-api-access-br7sd") pod "516e491b-7fe2-4aab-b37a-2c53dab4bc62" (UID: "516e491b-7fe2-4aab-b37a-2c53dab4bc62"). InnerVolumeSpecName "kube-api-access-br7sd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:38:19.581335 kubelet[1424]: I0514 00:38:19.581282 1424 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/516e491b-7fe2-4aab-b37a-2c53dab4bc62-hubble-tls\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581335 kubelet[1424]: I0514 00:38:19.581335 1424 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-run\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581458 kubelet[1424]: I0514 00:38:19.581353 1424 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-bpf-maps\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581458 kubelet[1424]: I0514 00:38:19.581363 1424 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-host-proc-sys-net\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581458 kubelet[1424]: I0514 00:38:19.581372 1424 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-br7sd\" (UniqueName: \"kubernetes.io/projected/516e491b-7fe2-4aab-b37a-2c53dab4bc62-kube-api-access-br7sd\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581458 kubelet[1424]: I0514 00:38:19.581383 1424 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-hostproc\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581458 kubelet[1424]: I0514 00:38:19.581391 1424 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cni-path\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581458 kubelet[1424]: I0514 00:38:19.581398 1424 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/516e491b-7fe2-4aab-b37a-2c53dab4bc62-clustermesh-secrets\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581458 kubelet[1424]: I0514 00:38:19.581406 1424 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-xtables-lock\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581458 kubelet[1424]: I0514 00:38:19.581413 1424 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-cgroup\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581632 kubelet[1424]: I0514 00:38:19.581420 1424 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-etc-cni-netd\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581632 kubelet[1424]: I0514 00:38:19.581428 1424 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/516e491b-7fe2-4aab-b37a-2c53dab4bc62-cilium-config-path\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.581632 kubelet[1424]: I0514 00:38:19.581435 1424 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-lib-modules\") on node \"10.0.0.50\" DevicePath \"\"" May 14 
00:38:19.581632 kubelet[1424]: I0514 00:38:19.581442 1424 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/516e491b-7fe2-4aab-b37a-2c53dab4bc62-host-proc-sys-kernel\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:19.820486 kubelet[1424]: E0514 00:38:19.820448 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:19.994932 systemd[1]: Removed slice kubepods-burstable-pod516e491b_7fe2_4aab_b37a_2c53dab4bc62.slice. May 14 00:38:19.995016 systemd[1]: kubepods-burstable-pod516e491b_7fe2_4aab_b37a_2c53dab4bc62.slice: Consumed 6.527s CPU time. May 14 00:38:20.090573 kubelet[1424]: I0514 00:38:20.090467 1424 scope.go:117] "RemoveContainer" containerID="7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea" May 14 00:38:20.092797 env[1214]: time="2025-05-14T00:38:20.092746344Z" level=info msg="RemoveContainer for \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\"" May 14 00:38:20.096920 env[1214]: time="2025-05-14T00:38:20.096877061Z" level=info msg="RemoveContainer for \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\" returns successfully" May 14 00:38:20.097182 kubelet[1424]: I0514 00:38:20.097153 1424 scope.go:117] "RemoveContainer" containerID="b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d" May 14 00:38:20.098082 env[1214]: time="2025-05-14T00:38:20.098050740Z" level=info msg="RemoveContainer for \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\"" May 14 00:38:20.100224 env[1214]: time="2025-05-14T00:38:20.100186059Z" level=info msg="RemoveContainer for \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\" returns successfully" May 14 00:38:20.100421 kubelet[1424]: I0514 00:38:20.100400 1424 scope.go:117] "RemoveContainer" containerID="a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493" May 14 00:38:20.101980 env[1214]: time="2025-05-14T00:38:20.101948537Z" level=info msg="RemoveContainer for \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\"" May 14 00:38:20.104495 env[1214]: time="2025-05-14T00:38:20.104456135Z" level=info msg="RemoveContainer for \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\" returns successfully" May 14 00:38:20.104721 kubelet[1424]: I0514 00:38:20.104698 1424 scope.go:117] "RemoveContainer" containerID="b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e" May 14 00:38:20.106108 env[1214]: time="2025-05-14T00:38:20.105855894Z" level=info msg="RemoveContainer for \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\"" May 14 00:38:20.108116 env[1214]: time="2025-05-14T00:38:20.108029893Z" level=info msg="RemoveContainer for \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\" returns successfully" May 14 00:38:20.108212 kubelet[1424]: I0514 00:38:20.108189 1424 scope.go:117] "RemoveContainer" containerID="91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb" May 14 00:38:20.109196 env[1214]: time="2025-05-14T00:38:20.109167452Z" level=info msg="RemoveContainer for \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\"" May 14 00:38:20.111316 env[1214]: time="2025-05-14T00:38:20.111288690Z" level=info msg="RemoveContainer for \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\" returns successfully" May 14 00:38:20.111537 kubelet[1424]: I0514 00:38:20.111520 1424 scope.go:117] 
"RemoveContainer" containerID="7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea" May 14 00:38:20.111880 env[1214]: time="2025-05-14T00:38:20.111789810Z" level=error msg="ContainerStatus for \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\": not found" May 14 00:38:20.112075 kubelet[1424]: E0514 00:38:20.112035 1424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\": not found" containerID="7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea" May 14 00:38:20.112179 kubelet[1424]: I0514 00:38:20.112083 1424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea"} err="failed to get container status \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"7558f1227d561984b2106cd8ba066a30f734245c6015f6d45bd4b7382294f5ea\": not found" May 14 00:38:20.112214 kubelet[1424]: I0514 00:38:20.112179 1424 scope.go:117] "RemoveContainer" containerID="b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d" May 14 00:38:20.112407 env[1214]: time="2025-05-14T00:38:20.112354410Z" level=error msg="ContainerStatus for \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\": not found" May 14 00:38:20.112540 kubelet[1424]: E0514 00:38:20.112520 1424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\": not found" containerID="b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d" May 14 00:38:20.112643 kubelet[1424]: I0514 00:38:20.112621 1424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d"} err="failed to get container status \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\": rpc error: code = NotFound desc = an error occurred when try to find container \"b3a9f344f7a9aef1b4a7f084858c61dcee684c1986f486bd8c66c385cb3e122d\": not found" May 14 00:38:20.112707 kubelet[1424]: I0514 00:38:20.112696 1424 scope.go:117] "RemoveContainer" containerID="a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493" May 14 00:38:20.112983 env[1214]: time="2025-05-14T00:38:20.112933489Z" level=error msg="ContainerStatus for \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\": not found" May 14 00:38:20.113112 kubelet[1424]: E0514 00:38:20.113092 1424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\": not 
found" containerID="a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493" May 14 00:38:20.113154 kubelet[1424]: I0514 00:38:20.113124 1424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493"} err="failed to get container status \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6d7ddbd70d0277ec4d076940e051c78aa534ba188353898e42819b6e4794493\": not found" May 14 00:38:20.113154 kubelet[1424]: I0514 00:38:20.113139 1424 scope.go:117] "RemoveContainer" containerID="b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e" May 14 00:38:20.113358 env[1214]: time="2025-05-14T00:38:20.113308529Z" level=error msg="ContainerStatus for \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\": not found" May 14 00:38:20.113498 kubelet[1424]: E0514 00:38:20.113478 1424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\": not found" containerID="b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e" May 14 00:38:20.113583 kubelet[1424]: I0514 00:38:20.113564 1424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e"} err="failed to get container status \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\": rpc error: code = NotFound desc = an error occurred when try to find container \"b51bbeee810a4c820116c13319f57496b4ece0ec431828ccc44df811c5d1cb0e\": not found" May 14 00:38:20.113646 kubelet[1424]: I0514 00:38:20.113634 1424 scope.go:117] "RemoveContainer" containerID="91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb" May 14 00:38:20.113915 env[1214]: time="2025-05-14T00:38:20.113864928Z" level=error msg="ContainerStatus for \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\": not found" May 14 00:38:20.114062 kubelet[1424]: E0514 00:38:20.114042 1424 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\": not found" containerID="91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb" May 14 00:38:20.114150 kubelet[1424]: I0514 00:38:20.114132 1424 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb"} err="failed to get container status \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"91cce96748e80bd26c02059166cbaad67cf00643927234353b127a19214900cb\": not found" May 14 00:38:20.177575 systemd[1]: 
var-lib-kubelet-pods-516e491b\x2d7fe2\x2d4aab\x2db37a\x2d2c53dab4bc62-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbr7sd.mount: Deactivated successfully. May 14 00:38:20.177672 systemd[1]: var-lib-kubelet-pods-516e491b\x2d7fe2\x2d4aab\x2db37a\x2d2c53dab4bc62-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:38:20.177731 systemd[1]: var-lib-kubelet-pods-516e491b\x2d7fe2\x2d4aab\x2db37a\x2d2c53dab4bc62-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:38:20.820772 kubelet[1424]: E0514 00:38:20.820725 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:21.820958 kubelet[1424]: E0514 00:38:21.820891 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:21.972394 kubelet[1424]: I0514 00:38:21.972354 1424 topology_manager.go:215] "Topology Admit Handler" podUID="57bac420-5f5e-4ce7-a8db-fc83771ee87e" podNamespace="kube-system" podName="cilium-operator-599987898-b7bs2" May 14 00:38:21.972480 kubelet[1424]: E0514 00:38:21.972404 1424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="516e491b-7fe2-4aab-b37a-2c53dab4bc62" containerName="mount-cgroup" May 14 00:38:21.972480 kubelet[1424]: E0514 00:38:21.972414 1424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="516e491b-7fe2-4aab-b37a-2c53dab4bc62" containerName="clean-cilium-state" May 14 00:38:21.972480 kubelet[1424]: E0514 00:38:21.972429 1424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="516e491b-7fe2-4aab-b37a-2c53dab4bc62" containerName="apply-sysctl-overwrites" May 14 00:38:21.972480 kubelet[1424]: E0514 00:38:21.972438 1424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="516e491b-7fe2-4aab-b37a-2c53dab4bc62" containerName="mount-bpf-fs" May 14 00:38:21.972480 kubelet[1424]: E0514 00:38:21.972445 1424 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="516e491b-7fe2-4aab-b37a-2c53dab4bc62" containerName="cilium-agent" May 14 00:38:21.972480 kubelet[1424]: I0514 00:38:21.972466 1424 memory_manager.go:354] "RemoveStaleState removing state" podUID="516e491b-7fe2-4aab-b37a-2c53dab4bc62" containerName="cilium-agent" May 14 00:38:21.973514 kubelet[1424]: I0514 00:38:21.973477 1424 topology_manager.go:215] "Topology Admit Handler" podUID="a8e1dc39-9728-46f7-8233-ddc11acd77a0" podNamespace="kube-system" podName="cilium-6mqd9" May 14 00:38:21.977770 systemd[1]: Created slice kubepods-besteffort-pod57bac420_5f5e_4ce7_a8db_fc83771ee87e.slice. May 14 00:38:21.981535 systemd[1]: Created slice kubepods-burstable-poda8e1dc39_9728_46f7_8233_ddc11acd77a0.slice. 
May 14 00:38:21.992380 kubelet[1424]: I0514 00:38:21.992346 1424 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="516e491b-7fe2-4aab-b37a-2c53dab4bc62" path="/var/lib/kubelet/pods/516e491b-7fe2-4aab-b37a-2c53dab4bc62/volumes" May 14 00:38:22.093939 kubelet[1424]: I0514 00:38:22.093828 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8e1dc39-9728-46f7-8233-ddc11acd77a0-hubble-tls\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.093939 kubelet[1424]: I0514 00:38:22.093867 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9lkl\" (UniqueName: \"kubernetes.io/projected/a8e1dc39-9728-46f7-8233-ddc11acd77a0-kube-api-access-p9lkl\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.093939 kubelet[1424]: I0514 00:38:22.093904 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-bpf-maps\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094309 kubelet[1424]: I0514 00:38:22.093923 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-hostproc\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094363 kubelet[1424]: I0514 00:38:22.094317 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-etc-cni-netd\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094393 kubelet[1424]: I0514 00:38:22.094345 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-config-path\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094417 kubelet[1424]: I0514 00:38:22.094407 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-ipsec-secrets\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094444 kubelet[1424]: I0514 00:38:22.094427 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/57bac420-5f5e-4ce7-a8db-fc83771ee87e-cilium-config-path\") pod \"cilium-operator-599987898-b7bs2\" (UID: \"57bac420-5f5e-4ce7-a8db-fc83771ee87e\") " pod="kube-system/cilium-operator-599987898-b7bs2" May 14 00:38:22.094470 kubelet[1424]: I0514 00:38:22.094456 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww2tw\" (UniqueName: 
\"kubernetes.io/projected/57bac420-5f5e-4ce7-a8db-fc83771ee87e-kube-api-access-ww2tw\") pod \"cilium-operator-599987898-b7bs2\" (UID: \"57bac420-5f5e-4ce7-a8db-fc83771ee87e\") " pod="kube-system/cilium-operator-599987898-b7bs2" May 14 00:38:22.094498 kubelet[1424]: I0514 00:38:22.094473 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-host-proc-sys-net\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094498 kubelet[1424]: I0514 00:38:22.094491 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cni-path\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094539 kubelet[1424]: I0514 00:38:22.094508 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8e1dc39-9728-46f7-8233-ddc11acd77a0-clustermesh-secrets\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094539 kubelet[1424]: I0514 00:38:22.094534 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-xtables-lock\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094610 kubelet[1424]: I0514 00:38:22.094576 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-host-proc-sys-kernel\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094659 kubelet[1424]: I0514 00:38:22.094643 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-run\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094685 kubelet[1424]: I0514 00:38:22.094673 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-cgroup\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.094711 kubelet[1424]: I0514 00:38:22.094695 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-lib-modules\") pod \"cilium-6mqd9\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " pod="kube-system/cilium-6mqd9" May 14 00:38:22.120282 kubelet[1424]: E0514 00:38:22.120239 1424 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls 
kube-api-access-p9lkl lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-6mqd9" podUID="a8e1dc39-9728-46f7-8233-ddc11acd77a0" May 14 00:38:22.281084 kubelet[1424]: E0514 00:38:22.281012 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:22.281811 env[1214]: time="2025-05-14T00:38:22.281550919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b7bs2,Uid:57bac420-5f5e-4ce7-a8db-fc83771ee87e,Namespace:kube-system,Attempt:0,}" May 14 00:38:22.295556 env[1214]: time="2025-05-14T00:38:22.295494870Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:38:22.295556 env[1214]: time="2025-05-14T00:38:22.295534350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:38:22.295556 env[1214]: time="2025-05-14T00:38:22.295544550Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:38:22.295711 env[1214]: time="2025-05-14T00:38:22.295651950Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/829ba6255799cc3f6687b5e26b54357628bcbb4888a64a24b710263e70667eba pid=3005 runtime=io.containerd.runc.v2 May 14 00:38:22.305366 systemd[1]: Started cri-containerd-829ba6255799cc3f6687b5e26b54357628bcbb4888a64a24b710263e70667eba.scope. May 14 00:38:22.353908 env[1214]: time="2025-05-14T00:38:22.353797472Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-b7bs2,Uid:57bac420-5f5e-4ce7-a8db-fc83771ee87e,Namespace:kube-system,Attempt:0,} returns sandbox id \"829ba6255799cc3f6687b5e26b54357628bcbb4888a64a24b710263e70667eba\"" May 14 00:38:22.355368 kubelet[1424]: E0514 00:38:22.354818 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:22.355924 env[1214]: time="2025-05-14T00:38:22.355893111Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 00:38:22.821815 kubelet[1424]: E0514 00:38:22.821762 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:22.914747 kubelet[1424]: E0514 00:38:22.914686 1424 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:38:23.202481 kubelet[1424]: I0514 00:38:23.202312 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-xtables-lock\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.202481 kubelet[1424]: I0514 00:38:23.202363 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8e1dc39-9728-46f7-8233-ddc11acd77a0-clustermesh-secrets\") pod 
\"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.202481 kubelet[1424]: I0514 00:38:23.202385 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-bpf-maps\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.202481 kubelet[1424]: I0514 00:38:23.202385 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.203626 kubelet[1424]: I0514 00:38:23.202425 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.203626 kubelet[1424]: I0514 00:38:23.202440 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.203626 kubelet[1424]: I0514 00:38:23.202401 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-host-proc-sys-kernel\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.203626 kubelet[1424]: I0514 00:38:23.202740 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-lib-modules\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.203626 kubelet[1424]: I0514 00:38:23.202761 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-run\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.203790 kubelet[1424]: I0514 00:38:23.202785 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p9lkl\" (UniqueName: \"kubernetes.io/projected/a8e1dc39-9728-46f7-8233-ddc11acd77a0-kube-api-access-p9lkl\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.203790 kubelet[1424]: I0514 00:38:23.202845 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.203790 kubelet[1424]: I0514 00:38:23.202884 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.203790 kubelet[1424]: I0514 00:38:23.202957 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-etc-cni-netd\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.203790 kubelet[1424]: I0514 00:38:23.203008 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-config-path\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.205450 kubelet[1424]: I0514 00:38:23.203050 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-ipsec-secrets\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.205450 kubelet[1424]: I0514 00:38:23.203066 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-host-proc-sys-net\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.205450 kubelet[1424]: I0514 00:38:23.203081 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cni-path\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.205450 kubelet[1424]: I0514 00:38:23.203099 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-cgroup\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.205450 kubelet[1424]: I0514 00:38:23.203115 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8e1dc39-9728-46f7-8233-ddc11acd77a0-hubble-tls\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.205450 kubelet[1424]: I0514 00:38:23.203134 1424 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-hostproc\") pod \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\" (UID: \"a8e1dc39-9728-46f7-8233-ddc11acd77a0\") " May 14 00:38:23.205587 kubelet[1424]: I0514 00:38:23.203165 1424 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-xtables-lock\") on node \"10.0.0.50\" DevicePath \"\"" May 14 
00:38:23.205587 kubelet[1424]: I0514 00:38:23.203176 1424 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-bpf-maps\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.205587 kubelet[1424]: I0514 00:38:23.203185 1424 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-host-proc-sys-kernel\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.205587 kubelet[1424]: I0514 00:38:23.203193 1424 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-lib-modules\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.205587 kubelet[1424]: I0514 00:38:23.203193 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.205587 kubelet[1424]: I0514 00:38:23.203201 1424 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-run\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.205587 kubelet[1424]: I0514 00:38:23.203229 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-hostproc" (OuterVolumeSpecName: "hostproc") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.205729 kubelet[1424]: I0514 00:38:23.203230 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.205729 kubelet[1424]: I0514 00:38:23.203258 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cni-path" (OuterVolumeSpecName: "cni-path") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.205729 kubelet[1424]: I0514 00:38:23.203276 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 00:38:23.205729 kubelet[1424]: I0514 00:38:23.205095 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 00:38:23.205729 kubelet[1424]: I0514 00:38:23.205386 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e1dc39-9728-46f7-8233-ddc11acd77a0-kube-api-access-p9lkl" (OuterVolumeSpecName: "kube-api-access-p9lkl") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "kube-api-access-p9lkl". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:38:23.206262 systemd[1]: var-lib-kubelet-pods-a8e1dc39\x2d9728\x2d46f7\x2d8233\x2dddc11acd77a0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp9lkl.mount: Deactivated successfully. May 14 00:38:23.206358 systemd[1]: var-lib-kubelet-pods-a8e1dc39\x2d9728\x2d46f7\x2d8233\x2dddc11acd77a0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 00:38:23.208298 systemd[1]: var-lib-kubelet-pods-a8e1dc39\x2d9728\x2d46f7\x2d8233\x2dddc11acd77a0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 00:38:23.208409 kubelet[1424]: I0514 00:38:23.208376 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8e1dc39-9728-46f7-8233-ddc11acd77a0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 00:38:23.208409 kubelet[1424]: I0514 00:38:23.208403 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e1dc39-9728-46f7-8233-ddc11acd77a0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:38:23.209958 systemd[1]: var-lib-kubelet-pods-a8e1dc39\x2d9728\x2d46f7\x2d8233\x2dddc11acd77a0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 14 00:38:23.210299 kubelet[1424]: I0514 00:38:23.210276 1424 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "a8e1dc39-9728-46f7-8233-ddc11acd77a0" (UID: "a8e1dc39-9728-46f7-8233-ddc11acd77a0"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 00:38:23.303928 kubelet[1424]: I0514 00:38:23.303897 1424 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-etc-cni-netd\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.304088 kubelet[1424]: I0514 00:38:23.304072 1424 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-config-path\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.304144 kubelet[1424]: I0514 00:38:23.304135 1424 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-ipsec-secrets\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.304207 kubelet[1424]: I0514 00:38:23.304198 1424 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-host-proc-sys-net\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.304265 kubelet[1424]: I0514 00:38:23.304254 1424 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-p9lkl\" (UniqueName: \"kubernetes.io/projected/a8e1dc39-9728-46f7-8233-ddc11acd77a0-kube-api-access-p9lkl\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.304336 kubelet[1424]: I0514 00:38:23.304312 1424 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-hostproc\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.304392 kubelet[1424]: I0514 00:38:23.304381 1424 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cni-path\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.304448 kubelet[1424]: I0514 00:38:23.304438 1424 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a8e1dc39-9728-46f7-8233-ddc11acd77a0-cilium-cgroup\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.304508 kubelet[1424]: I0514 00:38:23.304499 1424 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a8e1dc39-9728-46f7-8233-ddc11acd77a0-hubble-tls\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.304561 kubelet[1424]: I0514 00:38:23.304551 1424 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a8e1dc39-9728-46f7-8233-ddc11acd77a0-clustermesh-secrets\") on node \"10.0.0.50\" DevicePath \"\"" May 14 00:38:23.822124 kubelet[1424]: E0514 00:38:23.822090 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:23.995935 systemd[1]: Removed slice kubepods-burstable-poda8e1dc39_9728_46f7_8233_ddc11acd77a0.slice. May 14 00:38:24.131604 kubelet[1424]: I0514 00:38:24.131488 1424 topology_manager.go:215] "Topology Admit Handler" podUID="c8e17503-e41e-4599-ab8b-f820440e3c01" podNamespace="kube-system" podName="cilium-vjxr4" May 14 00:38:24.137074 systemd[1]: Created slice kubepods-burstable-podc8e17503_e41e_4599_ab8b_f820440e3c01.slice. May 14 00:38:24.200336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2551774167.mount: Deactivated successfully. 
May 14 00:38:24.310646 kubelet[1424]: I0514 00:38:24.310610 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c8e17503-e41e-4599-ab8b-f820440e3c01-clustermesh-secrets\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310646 kubelet[1424]: I0514 00:38:24.310647 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c8e17503-e41e-4599-ab8b-f820440e3c01-cilium-ipsec-secrets\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310849 kubelet[1424]: I0514 00:38:24.310667 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c8e17503-e41e-4599-ab8b-f820440e3c01-cilium-config-path\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310849 kubelet[1424]: I0514 00:38:24.310721 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnj5b\" (UniqueName: \"kubernetes.io/projected/c8e17503-e41e-4599-ab8b-f820440e3c01-kube-api-access-qnj5b\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310849 kubelet[1424]: I0514 00:38:24.310741 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-cni-path\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310849 kubelet[1424]: I0514 00:38:24.310761 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-host-proc-sys-kernel\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310849 kubelet[1424]: I0514 00:38:24.310796 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-cilium-run\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310849 kubelet[1424]: I0514 00:38:24.310835 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-cilium-cgroup\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310991 kubelet[1424]: I0514 00:38:24.310851 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-lib-modules\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310991 kubelet[1424]: I0514 00:38:24.310868 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-xtables-lock\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310991 kubelet[1424]: I0514 00:38:24.310887 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-bpf-maps\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310991 kubelet[1424]: I0514 00:38:24.310902 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-hostproc\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310991 kubelet[1424]: I0514 00:38:24.310919 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-etc-cni-netd\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.310991 kubelet[1424]: I0514 00:38:24.310935 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c8e17503-e41e-4599-ab8b-f820440e3c01-host-proc-sys-net\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.311120 kubelet[1424]: I0514 00:38:24.310950 1424 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c8e17503-e41e-4599-ab8b-f820440e3c01-hubble-tls\") pod \"cilium-vjxr4\" (UID: \"c8e17503-e41e-4599-ab8b-f820440e3c01\") " pod="kube-system/cilium-vjxr4" May 14 00:38:24.446353 kubelet[1424]: E0514 00:38:24.446240 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:24.447219 env[1214]: time="2025-05-14T00:38:24.447169349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjxr4,Uid:c8e17503-e41e-4599-ab8b-f820440e3c01,Namespace:kube-system,Attempt:0,}" May 14 00:38:24.459285 env[1214]: time="2025-05-14T00:38:24.459208702Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 14 00:38:24.459285 env[1214]: time="2025-05-14T00:38:24.459248902Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 14 00:38:24.459285 env[1214]: time="2025-05-14T00:38:24.459259222Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 14 00:38:24.459698 env[1214]: time="2025-05-14T00:38:24.459664902Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d pid=3054 runtime=io.containerd.runc.v2 May 14 00:38:24.470898 systemd[1]: Started cri-containerd-adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d.scope. 
May 14 00:38:24.497107 env[1214]: time="2025-05-14T00:38:24.496996201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-vjxr4,Uid:c8e17503-e41e-4599-ab8b-f820440e3c01,Namespace:kube-system,Attempt:0,} returns sandbox id \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\"" May 14 00:38:24.497848 kubelet[1424]: E0514 00:38:24.497786 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:24.500491 env[1214]: time="2025-05-14T00:38:24.500431279Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 00:38:24.509402 env[1214]: time="2025-05-14T00:38:24.509347914Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"75dbe18b51aa46a45b7c659cf6558e8e7c2a6844a0473efd69dd45aa97e829d0\"" May 14 00:38:24.509817 env[1214]: time="2025-05-14T00:38:24.509780993Z" level=info msg="StartContainer for \"75dbe18b51aa46a45b7c659cf6558e8e7c2a6844a0473efd69dd45aa97e829d0\"" May 14 00:38:24.523246 systemd[1]: Started cri-containerd-75dbe18b51aa46a45b7c659cf6558e8e7c2a6844a0473efd69dd45aa97e829d0.scope. May 14 00:38:24.550845 env[1214]: time="2025-05-14T00:38:24.550124850Z" level=info msg="StartContainer for \"75dbe18b51aa46a45b7c659cf6558e8e7c2a6844a0473efd69dd45aa97e829d0\" returns successfully" May 14 00:38:24.559361 systemd[1]: cri-containerd-75dbe18b51aa46a45b7c659cf6558e8e7c2a6844a0473efd69dd45aa97e829d0.scope: Deactivated successfully. 
May 14 00:38:24.600281 env[1214]: time="2025-05-14T00:38:24.600214982Z" level=info msg="shim disconnected" id=75dbe18b51aa46a45b7c659cf6558e8e7c2a6844a0473efd69dd45aa97e829d0 May 14 00:38:24.600281 env[1214]: time="2025-05-14T00:38:24.600262702Z" level=warning msg="cleaning up after shim disconnected" id=75dbe18b51aa46a45b7c659cf6558e8e7c2a6844a0473efd69dd45aa97e829d0 namespace=k8s.io May 14 00:38:24.600281 env[1214]: time="2025-05-14T00:38:24.600272702Z" level=info msg="cleaning up dead shim" May 14 00:38:24.606666 env[1214]: time="2025-05-14T00:38:24.606618498Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3139 runtime=io.containerd.runc.v2\n" May 14 00:38:24.823465 kubelet[1424]: E0514 00:38:24.823428 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:25.031970 env[1214]: time="2025-05-14T00:38:25.031922577Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:25.033132 env[1214]: time="2025-05-14T00:38:25.033103256Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:25.035105 env[1214]: time="2025-05-14T00:38:25.035076295Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 14 00:38:25.036323 env[1214]: time="2025-05-14T00:38:25.035799135Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 14 00:38:25.038445 env[1214]: time="2025-05-14T00:38:25.038409893Z" level=info msg="CreateContainer within sandbox \"829ba6255799cc3f6687b5e26b54357628bcbb4888a64a24b710263e70667eba\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 14 00:38:25.048560 env[1214]: time="2025-05-14T00:38:25.048521248Z" level=info msg="CreateContainer within sandbox \"829ba6255799cc3f6687b5e26b54357628bcbb4888a64a24b710263e70667eba\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"513f3f12749d24dc168cd19779a9ca75e0ee237175f375a287141d5b8ca1e631\"" May 14 00:38:25.049079 env[1214]: time="2025-05-14T00:38:25.049050647Z" level=info msg="StartContainer for \"513f3f12749d24dc168cd19779a9ca75e0ee237175f375a287141d5b8ca1e631\"" May 14 00:38:25.063094 systemd[1]: Started cri-containerd-513f3f12749d24dc168cd19779a9ca75e0ee237175f375a287141d5b8ca1e631.scope. 
May 14 00:38:25.105217 kubelet[1424]: E0514 00:38:25.105118 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:25.107521 env[1214]: time="2025-05-14T00:38:25.107481736Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 14 00:38:25.159410 env[1214]: time="2025-05-14T00:38:25.159362028Z" level=info msg="StartContainer for \"513f3f12749d24dc168cd19779a9ca75e0ee237175f375a287141d5b8ca1e631\" returns successfully" May 14 00:38:25.168319 env[1214]: time="2025-05-14T00:38:25.168258384Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"53bca9fd9f6070d2700e86f27cfad3b897ad2859189b3213a58d33a8cdbaa4c9\"" May 14 00:38:25.168836 env[1214]: time="2025-05-14T00:38:25.168810623Z" level=info msg="StartContainer for \"53bca9fd9f6070d2700e86f27cfad3b897ad2859189b3213a58d33a8cdbaa4c9\"" May 14 00:38:25.184688 systemd[1]: Started cri-containerd-53bca9fd9f6070d2700e86f27cfad3b897ad2859189b3213a58d33a8cdbaa4c9.scope. May 14 00:38:25.228584 env[1214]: time="2025-05-14T00:38:25.228535951Z" level=info msg="StartContainer for \"53bca9fd9f6070d2700e86f27cfad3b897ad2859189b3213a58d33a8cdbaa4c9\" returns successfully" May 14 00:38:25.245203 systemd[1]: cri-containerd-53bca9fd9f6070d2700e86f27cfad3b897ad2859189b3213a58d33a8cdbaa4c9.scope: Deactivated successfully. May 14 00:38:25.265033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-53bca9fd9f6070d2700e86f27cfad3b897ad2859189b3213a58d33a8cdbaa4c9-rootfs.mount: Deactivated successfully. 
May 14 00:38:25.270197 env[1214]: time="2025-05-14T00:38:25.270146769Z" level=info msg="shim disconnected" id=53bca9fd9f6070d2700e86f27cfad3b897ad2859189b3213a58d33a8cdbaa4c9 May 14 00:38:25.270387 env[1214]: time="2025-05-14T00:38:25.270364489Z" level=warning msg="cleaning up after shim disconnected" id=53bca9fd9f6070d2700e86f27cfad3b897ad2859189b3213a58d33a8cdbaa4c9 namespace=k8s.io May 14 00:38:25.270451 env[1214]: time="2025-05-14T00:38:25.270438209Z" level=info msg="cleaning up dead shim" May 14 00:38:25.276794 env[1214]: time="2025-05-14T00:38:25.276751286Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:25Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3240 runtime=io.containerd.runc.v2\n" May 14 00:38:25.823843 kubelet[1424]: E0514 00:38:25.823768 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:25.992332 kubelet[1424]: I0514 00:38:25.992284 1424 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8e1dc39-9728-46f7-8233-ddc11acd77a0" path="/var/lib/kubelet/pods/a8e1dc39-9728-46f7-8233-ddc11acd77a0/volumes" May 14 00:38:26.107565 kubelet[1424]: E0514 00:38:26.107450 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:26.109694 kubelet[1424]: E0514 00:38:26.109644 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:26.111527 env[1214]: time="2025-05-14T00:38:26.111468403Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 14 00:38:26.116387 kubelet[1424]: I0514 00:38:26.116336 1424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-b7bs2" podStartSLOduration=2.434878257 podStartE2EDuration="5.11632268s" podCreationTimestamp="2025-05-14 00:38:21 +0000 UTC" firstStartedPulling="2025-05-14 00:38:22.355642671 +0000 UTC m=+55.102643167" lastFinishedPulling="2025-05-14 00:38:25.037087134 +0000 UTC m=+57.784087590" observedRunningTime="2025-05-14 00:38:26.11600884 +0000 UTC m=+58.863009336" watchObservedRunningTime="2025-05-14 00:38:26.11632268 +0000 UTC m=+58.863323176" May 14 00:38:26.123314 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3985852835.mount: Deactivated successfully. May 14 00:38:26.127503 env[1214]: time="2025-05-14T00:38:26.127441955Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a98920a6747dd0ddf85e9c58ca587f09ac797cdf2fc316b8b08777e797f55066\"" May 14 00:38:26.128120 env[1214]: time="2025-05-14T00:38:26.128072354Z" level=info msg="StartContainer for \"a98920a6747dd0ddf85e9c58ca587f09ac797cdf2fc316b8b08777e797f55066\"" May 14 00:38:26.142002 systemd[1]: Started cri-containerd-a98920a6747dd0ddf85e9c58ca587f09ac797cdf2fc316b8b08777e797f55066.scope. 
May 14 00:38:26.177463 env[1214]: time="2025-05-14T00:38:26.177413890Z" level=info msg="StartContainer for \"a98920a6747dd0ddf85e9c58ca587f09ac797cdf2fc316b8b08777e797f55066\" returns successfully" May 14 00:38:26.179131 systemd[1]: cri-containerd-a98920a6747dd0ddf85e9c58ca587f09ac797cdf2fc316b8b08777e797f55066.scope: Deactivated successfully. May 14 00:38:26.195622 env[1214]: time="2025-05-14T00:38:26.195579760Z" level=info msg="shim disconnected" id=a98920a6747dd0ddf85e9c58ca587f09ac797cdf2fc316b8b08777e797f55066 May 14 00:38:26.195759 env[1214]: time="2025-05-14T00:38:26.195623760Z" level=warning msg="cleaning up after shim disconnected" id=a98920a6747dd0ddf85e9c58ca587f09ac797cdf2fc316b8b08777e797f55066 namespace=k8s.io May 14 00:38:26.195759 env[1214]: time="2025-05-14T00:38:26.195633800Z" level=info msg="cleaning up dead shim" May 14 00:38:26.202796 env[1214]: time="2025-05-14T00:38:26.202756197Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3298 runtime=io.containerd.runc.v2\n" May 14 00:38:26.824716 kubelet[1424]: E0514 00:38:26.824668 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:27.113399 kubelet[1424]: E0514 00:38:27.113260 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:27.113591 kubelet[1424]: E0514 00:38:27.113572 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:27.115257 env[1214]: time="2025-05-14T00:38:27.115212702Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 14 00:38:27.126763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1091485479.mount: Deactivated successfully. May 14 00:38:27.131393 env[1214]: time="2025-05-14T00:38:27.131325295Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"cdca6a56b34000426b7e4ef147ef64624cbbdfb5ccef26f2d14371f82865a699\"" May 14 00:38:27.132044 env[1214]: time="2025-05-14T00:38:27.132015894Z" level=info msg="StartContainer for \"cdca6a56b34000426b7e4ef147ef64624cbbdfb5ccef26f2d14371f82865a699\"" May 14 00:38:27.147609 systemd[1]: Started cri-containerd-cdca6a56b34000426b7e4ef147ef64624cbbdfb5ccef26f2d14371f82865a699.scope. May 14 00:38:27.178468 systemd[1]: cri-containerd-cdca6a56b34000426b7e4ef147ef64624cbbdfb5ccef26f2d14371f82865a699.scope: Deactivated successfully. 
May 14 00:38:27.179274 env[1214]: time="2025-05-14T00:38:27.179229392Z" level=info msg="StartContainer for \"cdca6a56b34000426b7e4ef147ef64624cbbdfb5ccef26f2d14371f82865a699\" returns successfully" May 14 00:38:27.196710 env[1214]: time="2025-05-14T00:38:27.196664504Z" level=info msg="shim disconnected" id=cdca6a56b34000426b7e4ef147ef64624cbbdfb5ccef26f2d14371f82865a699 May 14 00:38:27.196710 env[1214]: time="2025-05-14T00:38:27.196710184Z" level=warning msg="cleaning up after shim disconnected" id=cdca6a56b34000426b7e4ef147ef64624cbbdfb5ccef26f2d14371f82865a699 namespace=k8s.io May 14 00:38:27.196896 env[1214]: time="2025-05-14T00:38:27.196720264Z" level=info msg="cleaning up dead shim" May 14 00:38:27.200468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdca6a56b34000426b7e4ef147ef64624cbbdfb5ccef26f2d14371f82865a699-rootfs.mount: Deactivated successfully. May 14 00:38:27.203035 env[1214]: time="2025-05-14T00:38:27.202996981Z" level=warning msg="cleanup warnings time=\"2025-05-14T00:38:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3352 runtime=io.containerd.runc.v2\n" May 14 00:38:27.786018 kubelet[1424]: E0514 00:38:27.785978 1424 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:27.801214 env[1214]: time="2025-05-14T00:38:27.801180140Z" level=info msg="StopPodSandbox for \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\"" May 14 00:38:27.801517 env[1214]: time="2025-05-14T00:38:27.801273660Z" level=info msg="TearDown network for sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" successfully" May 14 00:38:27.801618 env[1214]: time="2025-05-14T00:38:27.801509620Z" level=info msg="StopPodSandbox for \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" returns successfully" May 14 00:38:27.802936 env[1214]: time="2025-05-14T00:38:27.802884579Z" level=info msg="RemovePodSandbox for \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\"" May 14 00:38:27.803063 env[1214]: time="2025-05-14T00:38:27.803010539Z" level=info msg="Forcibly stopping sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\"" May 14 00:38:27.804121 env[1214]: time="2025-05-14T00:38:27.804084059Z" level=info msg="TearDown network for sandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" successfully" May 14 00:38:27.807864 env[1214]: time="2025-05-14T00:38:27.807831057Z" level=info msg="RemovePodSandbox \"be27fcced800d1ccb7d5f4b663fb22356b1b27b12ba9e46349745ba00f1c6682\" returns successfully" May 14 00:38:27.825391 kubelet[1424]: E0514 00:38:27.825370 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:27.915274 kubelet[1424]: E0514 00:38:27.915248 1424 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 00:38:28.117575 kubelet[1424]: E0514 00:38:28.117544 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:28.119891 env[1214]: time="2025-05-14T00:38:28.119725794Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" 
May 14 00:38:28.137344 env[1214]: time="2025-05-14T00:38:28.137288586Z" level=info msg="CreateContainer within sandbox \"adfb9d53eef6447023d8fe82119a43b6cc6e7c891533a77c3e6a0256c3e5313d\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8ea16fcc2f4e9b2fa01cf2bfd4a96dd61740dc7765f19edfb5bedc89f1490182\"" May 14 00:38:28.140587 env[1214]: time="2025-05-14T00:38:28.140549304Z" level=info msg="StartContainer for \"8ea16fcc2f4e9b2fa01cf2bfd4a96dd61740dc7765f19edfb5bedc89f1490182\"" May 14 00:38:28.164685 systemd[1]: Started cri-containerd-8ea16fcc2f4e9b2fa01cf2bfd4a96dd61740dc7765f19edfb5bedc89f1490182.scope. May 14 00:38:28.203843 systemd[1]: run-containerd-runc-k8s.io-8ea16fcc2f4e9b2fa01cf2bfd4a96dd61740dc7765f19edfb5bedc89f1490182-runc.IafawY.mount: Deactivated successfully. May 14 00:38:28.220627 env[1214]: time="2025-05-14T00:38:28.220580509Z" level=info msg="StartContainer for \"8ea16fcc2f4e9b2fa01cf2bfd4a96dd61740dc7765f19edfb5bedc89f1490182\" returns successfully" May 14 00:38:28.472834 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 14 00:38:28.825867 kubelet[1424]: E0514 00:38:28.825817 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:28.924656 kubelet[1424]: I0514 00:38:28.924591 1424 setters.go:580] "Node became not ready" node="10.0.0.50" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T00:38:28Z","lastTransitionTime":"2025-05-14T00:38:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 14 00:38:29.122152 kubelet[1424]: E0514 00:38:29.121768 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:29.135639 kubelet[1424]: I0514 00:38:29.135302 1424 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-vjxr4" podStartSLOduration=5.135281029 podStartE2EDuration="5.135281029s" podCreationTimestamp="2025-05-14 00:38:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:38:29.13476591 +0000 UTC m=+61.881766486" watchObservedRunningTime="2025-05-14 00:38:29.135281029 +0000 UTC m=+61.882281525" May 14 00:38:29.826280 kubelet[1424]: E0514 00:38:29.826237 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:30.424730 systemd[1]: run-containerd-runc-k8s.io-8ea16fcc2f4e9b2fa01cf2bfd4a96dd61740dc7765f19edfb5bedc89f1490182-runc.ra4vSY.mount: Deactivated successfully. 
May 14 00:38:30.449828 kubelet[1424]: E0514 00:38:30.448412 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:30.826928 kubelet[1424]: E0514 00:38:30.826893 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:31.231661 systemd-networkd[1044]: lxc_health: Link UP May 14 00:38:31.244919 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 14 00:38:31.245087 systemd-networkd[1044]: lxc_health: Gained carrier May 14 00:38:31.827031 kubelet[1424]: E0514 00:38:31.826992 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:32.417967 systemd-networkd[1044]: lxc_health: Gained IPv6LL May 14 00:38:32.449281 kubelet[1424]: E0514 00:38:32.448888 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:32.575410 systemd[1]: run-containerd-runc-k8s.io-8ea16fcc2f4e9b2fa01cf2bfd4a96dd61740dc7765f19edfb5bedc89f1490182-runc.dyJ2pJ.mount: Deactivated successfully. May 14 00:38:32.827563 kubelet[1424]: E0514 00:38:32.827523 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:33.128770 kubelet[1424]: E0514 00:38:33.128625 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:33.828637 kubelet[1424]: E0514 00:38:33.828600 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:34.130289 kubelet[1424]: E0514 00:38:34.130182 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:34.714175 systemd[1]: run-containerd-runc-k8s.io-8ea16fcc2f4e9b2fa01cf2bfd4a96dd61740dc7765f19edfb5bedc89f1490182-runc.PPVCQ4.mount: Deactivated successfully. May 14 00:38:34.829683 kubelet[1424]: E0514 00:38:34.829638 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:34.990058 kubelet[1424]: E0514 00:38:34.989969 1424 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 00:38:35.829892 kubelet[1424]: E0514 00:38:35.829790 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:36.831702 kubelet[1424]: E0514 00:38:36.831646 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 14 00:38:37.832288 kubelet[1424]: E0514 00:38:37.832245 1424 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"