May 16 00:44:05.711157 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 00:44:05.711175 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 23:21:39 -00 2025
May 16 00:44:05.711183 kernel: efi: EFI v2.70 by EDK II
May 16 00:44:05.711188 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 16 00:44:05.711193 kernel: random: crng init done
May 16 00:44:05.711198 kernel: ACPI: Early table checksum verification disabled
May 16 00:44:05.711205 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 16 00:44:05.711211 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 00:44:05.711217 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:44:05.711222 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:44:05.711228 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:44:05.711233 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:44:05.711239 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:44:05.711244 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:44:05.711252 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:44:05.711258 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:44:05.711264 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:44:05.711269 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 00:44:05.711275 kernel: NUMA: Failed to initialise from firmware
May 16 00:44:05.711281 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:44:05.711287 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 16 00:44:05.711292 kernel: Zone ranges:
May 16 00:44:05.711298 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:44:05.711305 kernel: DMA32 empty
May 16 00:44:05.711311 kernel: Normal empty
May 16 00:44:05.711316 kernel: Movable zone start for each node
May 16 00:44:05.711322 kernel: Early memory node ranges
May 16 00:44:05.711328 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 16 00:44:05.711333 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 16 00:44:05.711339 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 16 00:44:05.711345 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 16 00:44:05.711351 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 16 00:44:05.711356 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 16 00:44:05.711362 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 16 00:44:05.711368 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:44:05.711375 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 00:44:05.711381 kernel: psci: probing for conduit method from ACPI.
May 16 00:44:05.711387 kernel: psci: PSCIv1.1 detected in firmware.
May 16 00:44:05.711393 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 00:44:05.711399 kernel: psci: Trusted OS migration not required
May 16 00:44:05.711407 kernel: psci: SMC Calling Convention v1.1
May 16 00:44:05.711413 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 00:44:05.711421 kernel: ACPI: SRAT not present
May 16 00:44:05.711427 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 16 00:44:05.711434 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 16 00:44:05.711440 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 00:44:05.711446 kernel: Detected PIPT I-cache on CPU0
May 16 00:44:05.711452 kernel: CPU features: detected: GIC system register CPU interface
May 16 00:44:05.711458 kernel: CPU features: detected: Hardware dirty bit management
May 16 00:44:05.711464 kernel: CPU features: detected: Spectre-v4
May 16 00:44:05.711471 kernel: CPU features: detected: Spectre-BHB
May 16 00:44:05.711484 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 00:44:05.711491 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 00:44:05.711497 kernel: CPU features: detected: ARM erratum 1418040
May 16 00:44:05.711506 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 00:44:05.711513 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 16 00:44:05.711519 kernel: Policy zone: DMA
May 16 00:44:05.711526 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8
May 16 00:44:05.711533 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:44:05.711539 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:44:05.711545 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:44:05.711551 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:44:05.711559 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved)
May 16 00:44:05.711565 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:44:05.711571 kernel: trace event string verifier disabled
May 16 00:44:05.711577 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:44:05.711584 kernel: rcu: RCU event tracing is enabled.
May 16 00:44:05.711590 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:44:05.711596 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:44:05.711602 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:44:05.711609 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:44:05.711615 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:44:05.711621 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 00:44:05.711628 kernel: GICv3: 256 SPIs implemented
May 16 00:44:05.711634 kernel: GICv3: 0 Extended SPIs implemented
May 16 00:44:05.711640 kernel: GICv3: Distributor has no Range Selector support
May 16 00:44:05.711646 kernel: Root IRQ handler: gic_handle_irq
May 16 00:44:05.711652 kernel: GICv3: 16 PPIs implemented
May 16 00:44:05.711658 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 00:44:05.711664 kernel: ACPI: SRAT not present
May 16 00:44:05.711670 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 00:44:05.711684 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 16 00:44:05.711693 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 16 00:44:05.711713 kernel: GICv3: using LPI property table @0x00000000400d0000
May 16 00:44:05.711719 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 16 00:44:05.711727 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:44:05.711733 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 00:44:05.711740 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 00:44:05.711746 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 00:44:05.711752 kernel: arm-pv: using stolen time PV
May 16 00:44:05.711758 kernel: Console: colour dummy device 80x25
May 16 00:44:05.711764 kernel: ACPI: Core revision 20210730
May 16 00:44:05.711771 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 00:44:05.711777 kernel: pid_max: default: 32768 minimum: 301
May 16 00:44:05.711783 kernel: LSM: Security Framework initializing
May 16 00:44:05.711790 kernel: SELinux: Initializing.
May 16 00:44:05.711796 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:44:05.711803 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:44:05.711809 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 16 00:44:05.711815 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:44:05.711821 kernel: Platform MSI: ITS@0x8080000 domain created
May 16 00:44:05.711828 kernel: PCI/MSI: ITS@0x8080000 domain created
May 16 00:44:05.711834 kernel: Remapping and enabling EFI services.
May 16 00:44:05.711840 kernel: smp: Bringing up secondary CPUs ...
May 16 00:44:05.711847 kernel: Detected PIPT I-cache on CPU1
May 16 00:44:05.711854 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 00:44:05.711860 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 16 00:44:05.711866 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:44:05.711873 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 00:44:05.711879 kernel: Detected PIPT I-cache on CPU2
May 16 00:44:05.711885 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 00:44:05.711892 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 16 00:44:05.711906 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:44:05.711912 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 00:44:05.711920 kernel: Detected PIPT I-cache on CPU3
May 16 00:44:05.711926 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 00:44:05.711932 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 16 00:44:05.711939 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:44:05.711949 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 00:44:05.711957 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:44:05.711963 kernel: SMP: Total of 4 processors activated.
May 16 00:44:05.711970 kernel: CPU features: detected: 32-bit EL0 Support
May 16 00:44:05.711977 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 00:44:05.711983 kernel: CPU features: detected: Common not Private translations
May 16 00:44:05.711990 kernel: CPU features: detected: CRC32 instructions
May 16 00:44:05.711996 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 00:44:05.712004 kernel: CPU features: detected: LSE atomic instructions
May 16 00:44:05.712011 kernel: CPU features: detected: Privileged Access Never
May 16 00:44:05.712017 kernel: CPU features: detected: RAS Extension Support
May 16 00:44:05.712024 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 00:44:05.712030 kernel: CPU: All CPU(s) started at EL1
May 16 00:44:05.712038 kernel: alternatives: patching kernel code
May 16 00:44:05.712044 kernel: devtmpfs: initialized
May 16 00:44:05.712051 kernel: KASLR enabled
May 16 00:44:05.712058 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:44:05.712065 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:44:05.712071 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:44:05.712077 kernel: SMBIOS 3.0.0 present.
May 16 00:44:05.712084 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 16 00:44:05.712091 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:44:05.712099 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 00:44:05.712105 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 00:44:05.712112 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 00:44:05.712119 kernel: audit: initializing netlink subsys (disabled)
May 16 00:44:05.712125 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1
May 16 00:44:05.712132 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:44:05.712138 kernel: cpuidle: using governor menu
May 16 00:44:05.712145 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 00:44:05.712152 kernel: ASID allocator initialised with 32768 entries
May 16 00:44:05.712159 kernel: ACPI: bus type PCI registered
May 16 00:44:05.712166 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:44:05.712172 kernel: Serial: AMBA PL011 UART driver
May 16 00:44:05.712179 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:44:05.712185 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 16 00:44:05.712192 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:44:05.712198 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 16 00:44:05.712205 kernel: cryptd: max_cpu_qlen set to 1000
May 16 00:44:05.712211 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 00:44:05.712219 kernel: ACPI: Added _OSI(Module Device)
May 16 00:44:05.712226 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:44:05.712232 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:44:05.712239 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:44:05.712246 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 16 00:44:05.712252 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 16 00:44:05.712259 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 16 00:44:05.712266 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:44:05.712272 kernel: ACPI: Interpreter enabled
May 16 00:44:05.712280 kernel: ACPI: Using GIC for interrupt routing
May 16 00:44:05.712286 kernel: ACPI: MCFG table detected, 1 entries
May 16 00:44:05.712293 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 00:44:05.712299 kernel: printk: console [ttyAMA0] enabled
May 16 00:44:05.712306 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:44:05.712430 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:44:05.712495 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 00:44:05.712554 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 00:44:05.712612 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 00:44:05.712669 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 00:44:05.712695 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 00:44:05.712702 kernel: PCI host bridge to bus 0000:00
May 16 00:44:05.712776 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 00:44:05.712831 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 00:44:05.712885 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 00:44:05.712956 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:44:05.713045 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 16 00:44:05.713114 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:44:05.713173 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 16 00:44:05.713233 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 16 00:44:05.713292 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:44:05.713355 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:44:05.713414 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 16 00:44:05.713474 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 16 00:44:05.713526 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 00:44:05.713580 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 00:44:05.713634 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 00:44:05.713643 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 00:44:05.713651 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 00:44:05.713659 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 00:44:05.713666 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 00:44:05.713672 kernel: iommu: Default domain type: Translated
May 16 00:44:05.713689 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 00:44:05.713696 kernel: vgaarb: loaded
May 16 00:44:05.713703 kernel: pps_core: LinuxPPS API ver. 1 registered
May 16 00:44:05.713709 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 16 00:44:05.713716 kernel: PTP clock support registered
May 16 00:44:05.713723 kernel: Registered efivars operations
May 16 00:44:05.713731 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 00:44:05.713738 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:44:05.713744 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:44:05.713751 kernel: pnp: PnP ACPI init
May 16 00:44:05.713823 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 00:44:05.713833 kernel: pnp: PnP ACPI: found 1 devices
May 16 00:44:05.713840 kernel: NET: Registered PF_INET protocol family
May 16 00:44:05.713846 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:44:05.713855 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:44:05.713861 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:44:05.713868 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:44:05.713875 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 16 00:44:05.713881 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:44:05.713888 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:44:05.713903 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:44:05.713910 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:44:05.713916 kernel: PCI: CLS 0 bytes, default 64
May 16 00:44:05.713924 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 16 00:44:05.713931 kernel: kvm [1]: HYP mode not available
May 16 00:44:05.713937 kernel: Initialise system trusted keyrings
May 16 00:44:05.713944 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:44:05.713950 kernel: Key type asymmetric registered
May 16 00:44:05.713957 kernel: Asymmetric key parser 'x509' registered
May 16 00:44:05.713963 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 00:44:05.713970 kernel: io scheduler mq-deadline registered
May 16 00:44:05.713976 kernel: io scheduler kyber registered
May 16 00:44:05.713984 kernel: io scheduler bfq registered
May 16 00:44:05.713990 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 00:44:05.713997 kernel: ACPI: button: Power Button [PWRB]
May 16 00:44:05.714004 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 00:44:05.714067 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 00:44:05.714105 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:44:05.714112 kernel: thunder_xcv, ver 1.0
May 16 00:44:05.714119 kernel: thunder_bgx, ver 1.0
May 16 00:44:05.714125 kernel: nicpf, ver 1.0
May 16 00:44:05.714133 kernel: nicvf, ver 1.0
May 16 00:44:05.714200 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 00:44:05.714257 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:44:05 UTC (1747356245)
May 16 00:44:05.714266 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 00:44:05.714272 kernel: NET: Registered PF_INET6 protocol family
May 16 00:44:05.714280 kernel: Segment Routing with IPv6
May 16 00:44:05.714286 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:44:05.714293 kernel: NET: Registered PF_PACKET protocol family
May 16 00:44:05.714301 kernel: Key type dns_resolver registered
May 16 00:44:05.714307 kernel: registered taskstats version 1
May 16 00:44:05.714314 kernel: Loading compiled-in X.509 certificates
May 16 00:44:05.714321 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 2793d535c1de6f1789b22ef06bd5666144f4eeb2'
May 16 00:44:05.714328 kernel: Key type .fscrypt registered
May 16 00:44:05.714346 kernel: Key type fscrypt-provisioning registered
May 16 00:44:05.714353 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 00:44:05.714360 kernel: ima: Allocated hash algorithm: sha1
May 16 00:44:05.714366 kernel: ima: No architecture policies found
May 16 00:44:05.714375 kernel: clk: Disabling unused clocks
May 16 00:44:05.714381 kernel: Freeing unused kernel memory: 36480K
May 16 00:44:05.714388 kernel: Run /init as init process
May 16 00:44:05.714394 kernel: with arguments:
May 16 00:44:05.714400 kernel: /init
May 16 00:44:05.714407 kernel: with environment:
May 16 00:44:05.714413 kernel: HOME=/
May 16 00:44:05.714420 kernel: TERM=linux
May 16 00:44:05.714426 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 00:44:05.714436 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 16 00:44:05.714445 systemd[1]: Detected virtualization kvm.
May 16 00:44:05.714452 systemd[1]: Detected architecture arm64.
May 16 00:44:05.714459 systemd[1]: Running in initrd.
May 16 00:44:05.714466 systemd[1]: No hostname configured, using default hostname.
May 16 00:44:05.714473 systemd[1]: Hostname set to .
May 16 00:44:05.714481 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:44:05.714489 systemd[1]: Queued start job for default target initrd.target.
May 16 00:44:05.714496 systemd[1]: Started systemd-ask-password-console.path.
May 16 00:44:05.714503 systemd[1]: Reached target cryptsetup.target.
May 16 00:44:05.714510 systemd[1]: Reached target paths.target.
May 16 00:44:05.714517 systemd[1]: Reached target slices.target.
May 16 00:44:05.714524 systemd[1]: Reached target swap.target.
May 16 00:44:05.714531 systemd[1]: Reached target timers.target.
May 16 00:44:05.714539 systemd[1]: Listening on iscsid.socket.
May 16 00:44:05.714548 systemd[1]: Listening on iscsiuio.socket.
May 16 00:44:05.714556 systemd[1]: Listening on systemd-journald-audit.socket.
May 16 00:44:05.714563 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 16 00:44:05.714570 systemd[1]: Listening on systemd-journald.socket.
May 16 00:44:05.714577 systemd[1]: Listening on systemd-networkd.socket.
May 16 00:44:05.714584 systemd[1]: Listening on systemd-udevd-control.socket.
May 16 00:44:05.714591 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 16 00:44:05.714599 systemd[1]: Reached target sockets.target.
May 16 00:44:05.714607 systemd[1]: Starting kmod-static-nodes.service...
May 16 00:44:05.714614 systemd[1]: Finished network-cleanup.service.
May 16 00:44:05.714621 systemd[1]: Starting systemd-fsck-usr.service...
May 16 00:44:05.714628 systemd[1]: Starting systemd-journald.service...
May 16 00:44:05.714636 systemd[1]: Starting systemd-modules-load.service...
May 16 00:44:05.714643 systemd[1]: Starting systemd-resolved.service...
May 16 00:44:05.714650 systemd[1]: Starting systemd-vconsole-setup.service...
May 16 00:44:05.714657 systemd[1]: Finished kmod-static-nodes.service.
May 16 00:44:05.714664 systemd[1]: Finished systemd-fsck-usr.service.
May 16 00:44:05.714672 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 16 00:44:05.714687 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 16 00:44:05.714695 kernel: audit: type=1130 audit(1747356245.712:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.714706 systemd-journald[290]: Journal started
May 16 00:44:05.714746 systemd-journald[290]: Runtime Journal (/run/log/journal/8337541e062c4f72b24b025fc84dfce3) is 6.0M, max 48.7M, 42.6M free.
May 16 00:44:05.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.707667 systemd-modules-load[291]: Inserted module 'overlay'
May 16 00:44:05.717008 systemd[1]: Started systemd-journald.service.
May 16 00:44:05.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.719694 kernel: audit: type=1130 audit(1747356245.716:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.719851 systemd[1]: Finished systemd-vconsole-setup.service.
May 16 00:44:05.721281 systemd[1]: Starting dracut-cmdline-ask.service...
May 16 00:44:05.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.724720 kernel: audit: type=1130 audit(1747356245.719:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.729103 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 00:44:05.735037 kernel: Bridge firewalling registered
May 16 00:44:05.734786 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 16 00:44:05.738563 systemd[1]: Finished dracut-cmdline-ask.service.
May 16 00:44:05.739000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.739807 systemd-resolved[292]: Positive Trust Anchors:
May 16 00:44:05.742993 kernel: audit: type=1130 audit(1747356245.739:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.739815 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:44:05.739840 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 16 00:44:05.742319 systemd[1]: Starting dracut-cmdline.service...
May 16 00:44:05.751000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.755717 kernel: audit: type=1130 audit(1747356245.751:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.755738 kernel: SCSI subsystem initialized
May 16 00:44:05.743987 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 16 00:44:05.757339 dracut-cmdline[307]: dracut-dracut-053
May 16 00:44:05.745993 systemd[1]: Started systemd-resolved.service.
May 16 00:44:05.753638 systemd[1]: Reached target nss-lookup.target.
May 16 00:44:05.759814 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8
May 16 00:44:05.767403 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 00:44:05.767457 kernel: device-mapper: uevent: version 1.0.3
May 16 00:44:05.767468 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 16 00:44:05.769727 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 16 00:44:05.770514 systemd[1]: Finished systemd-modules-load.service.
May 16 00:44:05.771000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.773181 systemd[1]: Starting systemd-sysctl.service...
May 16 00:44:05.776463 kernel: audit: type=1130 audit(1747356245.771:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.781277 systemd[1]: Finished systemd-sysctl.service.
May 16 00:44:05.784709 kernel: audit: type=1130 audit(1747356245.781:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.829726 kernel: Loading iSCSI transport class v2.0-870.
May 16 00:44:05.847704 kernel: iscsi: registered transport (tcp)
May 16 00:44:05.864704 kernel: iscsi: registered transport (qla4xxx)
May 16 00:44:05.864721 kernel: QLogic iSCSI HBA Driver
May 16 00:44:05.919225 systemd[1]: Finished dracut-cmdline.service.
May 16 00:44:05.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.920944 systemd[1]: Starting dracut-pre-udev.service...
May 16 00:44:05.923883 kernel: audit: type=1130 audit(1747356245.919:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:05.971707 kernel: raid6: neonx8 gen() 11356 MB/s
May 16 00:44:05.988704 kernel: raid6: neonx8 xor() 9065 MB/s
May 16 00:44:06.005705 kernel: raid6: neonx4 gen() 12805 MB/s
May 16 00:44:06.022697 kernel: raid6: neonx4 xor() 11031 MB/s
May 16 00:44:06.039695 kernel: raid6: neonx2 gen() 12881 MB/s
May 16 00:44:06.056701 kernel: raid6: neonx2 xor() 10338 MB/s
May 16 00:44:06.073703 kernel: raid6: neonx1 gen() 10521 MB/s
May 16 00:44:06.090736 kernel: raid6: neonx1 xor() 8700 MB/s
May 16 00:44:06.107705 kernel: raid6: int64x8 gen() 6225 MB/s
May 16 00:44:06.124726 kernel: raid6: int64x8 xor() 3531 MB/s
May 16 00:44:06.141706 kernel: raid6: int64x4 gen() 7230 MB/s
May 16 00:44:06.158699 kernel: raid6: int64x4 xor() 3854 MB/s
May 16 00:44:06.175697 kernel: raid6: int64x2 gen() 6146 MB/s
May 16 00:44:06.192694 kernel: raid6: int64x2 xor() 3307 MB/s
May 16 00:44:06.209694 kernel: raid6: int64x1 gen() 5040 MB/s
May 16 00:44:06.227013 kernel: raid6: int64x1 xor() 2645 MB/s
May 16 00:44:06.227027 kernel: raid6: using algorithm neonx2 gen() 12881 MB/s
May 16 00:44:06.227036 kernel: raid6: .... xor() 10338 MB/s, rmw enabled
May 16 00:44:06.227044 kernel: raid6: using neon recovery algorithm
May 16 00:44:06.237693 kernel: xor: measuring software checksum speed
May 16 00:44:06.237709 kernel: 8regs : 17188 MB/sec
May 16 00:44:06.238739 kernel: 32regs : 19519 MB/sec
May 16 00:44:06.238754 kernel: arm64_neon : 26188 MB/sec
May 16 00:44:06.238762 kernel: xor: using function: arm64_neon (26188 MB/sec)
May 16 00:44:06.294700 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 16 00:44:06.305060 systemd[1]: Finished dracut-pre-udev.service.
May 16 00:44:06.304000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:06.305000 audit: BPF prog-id=7 op=LOAD
May 16 00:44:06.307000 audit: BPF prog-id=8 op=LOAD
May 16 00:44:06.308363 systemd[1]: Starting systemd-udevd.service...
May 16 00:44:06.309404 kernel: audit: type=1130 audit(1747356246.304:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:06.320920 systemd-udevd[489]: Using default interface naming scheme 'v252'.
May 16 00:44:06.324420 systemd[1]: Started systemd-udevd.service.
May 16 00:44:06.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:06.326311 systemd[1]: Starting dracut-pre-trigger.service...
May 16 00:44:06.337691 dracut-pre-trigger[497]: rd.md=0: removing MD RAID activation
May 16 00:44:06.367214 systemd[1]: Finished dracut-pre-trigger.service.
May 16 00:44:06.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:06.369018 systemd[1]: Starting systemd-udev-trigger.service...
May 16 00:44:06.403639 systemd[1]: Finished systemd-udev-trigger.service.
May 16 00:44:06.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:06.436971 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 16 00:44:06.441001 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 16 00:44:06.441017 kernel: GPT:9289727 != 19775487
May 16 00:44:06.441026 kernel: GPT:Alternate GPT header not at the end of the disk.
May 16 00:44:06.441043 kernel: GPT:9289727 != 19775487
May 16 00:44:06.441052 kernel: GPT: Use GNU Parted to correct GPT errors.
May 16 00:44:06.441061 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:44:06.456515 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 16 00:44:06.460362 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 16 00:44:06.461354 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 16 00:44:06.464651 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (551)
May 16 00:44:06.467993 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 16 00:44:06.475918 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 16 00:44:06.477700 systemd[1]: Starting disk-uuid.service...
May 16 00:44:06.483716 disk-uuid[560]: Primary Header is updated.
May 16 00:44:06.483716 disk-uuid[560]: Secondary Entries is updated.
May 16 00:44:06.483716 disk-uuid[560]: Secondary Header is updated.
May 16 00:44:06.495695 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:44:06.498699 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:44:06.501703 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:44:07.504609 disk-uuid[561]: The operation has completed successfully.
May 16 00:44:07.505444 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 16 00:44:07.533168 systemd[1]: disk-uuid.service: Deactivated successfully.
May 16 00:44:07.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.533264 systemd[1]: Finished disk-uuid.service.
May 16 00:44:07.534955 systemd[1]: Starting verity-setup.service...
May 16 00:44:07.550713 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 16 00:44:07.573358 systemd[1]: Found device dev-mapper-usr.device.
May 16 00:44:07.575966 systemd[1]: Mounting sysusr-usr.mount...
May 16 00:44:07.577882 systemd[1]: Finished verity-setup.service.
May 16 00:44:07.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.636723 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 16 00:44:07.637123 systemd[1]: Mounted sysusr-usr.mount.
May 16 00:44:07.637994 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 16 00:44:07.638793 systemd[1]: Starting ignition-setup.service...
May 16 00:44:07.640751 systemd[1]: Starting parse-ip-for-networkd.service...
May 16 00:44:07.648321 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 00:44:07.648443 kernel: BTRFS info (device vda6): using free space tree
May 16 00:44:07.648470 kernel: BTRFS info (device vda6): has skinny extents
May 16 00:44:07.656316 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 16 00:44:07.663451 systemd[1]: Finished ignition-setup.service.
May 16 00:44:07.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.665213 systemd[1]: Starting ignition-fetch-offline.service...
May 16 00:44:07.743216 systemd[1]: Finished parse-ip-for-networkd.service.
May 16 00:44:07.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.743000 audit: BPF prog-id=9 op=LOAD
May 16 00:44:07.745157 systemd[1]: Starting systemd-networkd.service...
May 16 00:44:07.772554 ignition[641]: Ignition 2.14.0
May 16 00:44:07.772564 ignition[641]: Stage: fetch-offline
May 16 00:44:07.772602 ignition[641]: no configs at "/usr/lib/ignition/base.d"
May 16 00:44:07.772611 ignition[641]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:44:07.772764 ignition[641]: parsed url from cmdline: ""
May 16 00:44:07.772767 ignition[641]: no config URL provided
May 16 00:44:07.772772 ignition[641]: reading system config file "/usr/lib/ignition/user.ign"
May 16 00:44:07.772779 ignition[641]: no config at "/usr/lib/ignition/user.ign"
May 16 00:44:07.772798 ignition[641]: op(1): [started] loading QEMU firmware config module
May 16 00:44:07.772803 ignition[641]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 16 00:44:07.780455 systemd-networkd[738]: lo: Link UP
May 16 00:44:07.780463 systemd-networkd[738]: lo: Gained carrier
May 16 00:44:07.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.781017 ignition[641]: op(1): [finished] loading QEMU firmware config module
May 16 00:44:07.781103 systemd-networkd[738]: Enumeration completed
May 16 00:44:07.781199 systemd[1]: Started systemd-networkd.service.
May 16 00:44:07.781428 systemd-networkd[738]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:44:07.782204 systemd[1]: Reached target network.target.
May 16 00:44:07.782797 systemd-networkd[738]: eth0: Link UP
May 16 00:44:07.782801 systemd-networkd[738]: eth0: Gained carrier
May 16 00:44:07.784158 systemd[1]: Starting iscsiuio.service...
May 16 00:44:07.792890 systemd[1]: Started iscsiuio.service.
May 16 00:44:07.792000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.794357 systemd[1]: Starting iscsid.service...
May 16 00:44:07.795305 ignition[641]: parsing config with SHA512: 1b3898db7bfd4bc22066edee98fa48075efb906a7cf56a2cd1612b129b259c12378aebfc85df86458531f0fd3c7e209b3952439dbf12d42404e2b7d349d523f4
May 16 00:44:07.797790 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 16 00:44:07.797790 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 16 00:44:07.797790 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 16 00:44:07.797790 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored.
May 16 00:44:07.797790 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 16 00:44:07.797790 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 16 00:44:07.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.800943 systemd[1]: Started iscsid.service.
May 16 00:44:07.801345 ignition[641]: fetch-offline: fetch-offline passed
May 16 00:44:07.800995 unknown[641]: fetched base config from "system"
May 16 00:44:07.801412 ignition[641]: Ignition finished successfully
May 16 00:44:07.801007 unknown[641]: fetched user config from "qemu"
May 16 00:44:07.805953 systemd[1]: Finished ignition-fetch-offline.service.
May 16 00:44:07.808095 systemd[1]: Starting dracut-initqueue.service...
May 16 00:44:07.808951 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 16 00:44:07.809605 systemd[1]: Starting ignition-kargs.service...
May 16 00:44:07.811827 systemd-networkd[738]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:44:07.818511 ignition[748]: Ignition 2.14.0
May 16 00:44:07.818517 ignition[748]: Stage: kargs
May 16 00:44:07.818612 ignition[748]: no configs at "/usr/lib/ignition/base.d"
May 16 00:44:07.818621 ignition[748]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:44:07.820999 systemd[1]: Finished dracut-initqueue.service.
May 16 00:44:07.819269 ignition[748]: kargs: kargs passed
May 16 00:44:07.822000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.822349 systemd[1]: Reached target remote-fs-pre.target.
May 16 00:44:07.819308 ignition[748]: Ignition finished successfully
May 16 00:44:07.823857 systemd[1]: Reached target remote-cryptsetup.target.
May 16 00:44:07.825236 systemd[1]: Reached target remote-fs.target.
May 16 00:44:07.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.827123 systemd[1]: Starting dracut-pre-mount.service...
May 16 00:44:07.828541 systemd[1]: Finished ignition-kargs.service.
May 16 00:44:07.830503 systemd[1]: Starting ignition-disks.service...
May 16 00:44:07.834624 systemd[1]: Finished dracut-pre-mount.service.
May 16 00:44:07.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.837622 ignition[762]: Ignition 2.14.0
May 16 00:44:07.837632 ignition[762]: Stage: disks
May 16 00:44:07.837746 ignition[762]: no configs at "/usr/lib/ignition/base.d"
May 16 00:44:07.837756 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:44:07.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.839252 systemd[1]: Finished ignition-disks.service.
May 16 00:44:07.838441 ignition[762]: disks: disks passed
May 16 00:44:07.840663 systemd[1]: Reached target initrd-root-device.target.
May 16 00:44:07.838482 ignition[762]: Ignition finished successfully
May 16 00:44:07.842384 systemd[1]: Reached target local-fs-pre.target.
May 16 00:44:07.843718 systemd[1]: Reached target local-fs.target.
May 16 00:44:07.844905 systemd[1]: Reached target sysinit.target.
May 16 00:44:07.846245 systemd[1]: Reached target basic.target.
May 16 00:44:07.848378 systemd[1]: Starting systemd-fsck-root.service...
May 16 00:44:07.858825 systemd-fsck[774]: ROOT: clean, 619/553520 files, 56022/553472 blocks
May 16 00:44:07.862767 systemd[1]: Finished systemd-fsck-root.service.
May 16 00:44:07.865239 systemd[1]: Mounting sysroot.mount...
May 16 00:44:07.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.872694 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 16 00:44:07.873300 systemd[1]: Mounted sysroot.mount.
May 16 00:44:07.874056 systemd[1]: Reached target initrd-root-fs.target.
May 16 00:44:07.876308 systemd[1]: Mounting sysroot-usr.mount...
May 16 00:44:07.877225 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 16 00:44:07.877261 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 16 00:44:07.877282 systemd[1]: Reached target ignition-diskful.target.
May 16 00:44:07.879260 systemd[1]: Mounted sysroot-usr.mount.
May 16 00:44:07.880633 systemd[1]: Starting initrd-setup-root.service...
May 16 00:44:07.884770 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
May 16 00:44:07.888865 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
May 16 00:44:07.893038 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
May 16 00:44:07.896509 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
May 16 00:44:07.922942 systemd[1]: Finished initrd-setup-root.service.
May 16 00:44:07.922000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.924353 systemd[1]: Starting ignition-mount.service...
May 16 00:44:07.925501 systemd[1]: Starting sysroot-boot.service...
May 16 00:44:07.929615 bash[825]: umount: /sysroot/usr/share/oem: not mounted.
May 16 00:44:07.940138 ignition[826]: INFO : Ignition 2.14.0
May 16 00:44:07.940138 ignition[826]: INFO : Stage: mount
May 16 00:44:07.941000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.943454 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:44:07.943454 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:44:07.943454 ignition[826]: INFO : mount: mount passed
May 16 00:44:07.943454 ignition[826]: INFO : Ignition finished successfully
May 16 00:44:07.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:07.941798 systemd[1]: Finished ignition-mount.service.
May 16 00:44:07.947404 systemd[1]: Finished sysroot-boot.service.
May 16 00:44:08.584647 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 16 00:44:08.591395 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835)
May 16 00:44:08.591430 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 16 00:44:08.591440 kernel: BTRFS info (device vda6): using free space tree
May 16 00:44:08.591873 kernel: BTRFS info (device vda6): has skinny extents
May 16 00:44:08.595083 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 16 00:44:08.596629 systemd[1]: Starting ignition-files.service...
May 16 00:44:08.609905 ignition[855]: INFO : Ignition 2.14.0
May 16 00:44:08.609905 ignition[855]: INFO : Stage: files
May 16 00:44:08.611344 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:44:08.611344 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:44:08.611344 ignition[855]: DEBUG : files: compiled without relabeling support, skipping
May 16 00:44:08.613801 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 16 00:44:08.613801 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 16 00:44:08.616453 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 16 00:44:08.617535 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 16 00:44:08.618717 unknown[855]: wrote ssh authorized keys file for user: core
May 16 00:44:08.619614 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 16 00:44:08.619614 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
May 16 00:44:08.619614 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
May 16 00:44:08.619614 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:44:08.624783 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 16 00:44:08.624783 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 00:44:08.624783 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 00:44:08.624783 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 00:44:08.624783 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
May 16 00:44:09.039082 systemd-networkd[738]: eth0: Gained IPv6LL
May 16 00:44:09.143188 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
May 16 00:44:09.523094 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
May 16 00:44:09.523094 ignition[855]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
May 16 00:44:09.523094 ignition[855]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 00:44:09.528206 ignition[855]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 16 00:44:09.528206 ignition[855]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
May 16 00:44:09.528206 ignition[855]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
May 16 00:44:09.528206 ignition[855]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 16 00:44:09.584981 ignition[855]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 16 00:44:09.586163 ignition[855]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
May 16 00:44:09.586163 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:44:09.586163 ignition[855]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 16 00:44:09.586163 ignition[855]: INFO : files: files passed
May 16 00:44:09.586163 ignition[855]: INFO : Ignition finished successfully
May 16 00:44:09.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.586430 systemd[1]: Finished ignition-files.service.
May 16 00:44:09.591741 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 16 00:44:09.592616 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 16 00:44:09.598821 initrd-setup-root-after-ignition[879]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 16 00:44:09.593448 systemd[1]: Starting ignition-quench.service...
May 16 00:44:09.600477 initrd-setup-root-after-ignition[881]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 16 00:44:09.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.599784 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 16 00:44:09.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.601749 systemd[1]: ignition-quench.service: Deactivated successfully.
May 16 00:44:09.601831 systemd[1]: Finished ignition-quench.service.
May 16 00:44:09.603334 systemd[1]: Reached target ignition-complete.target.
May 16 00:44:09.605308 systemd[1]: Starting initrd-parse-etc.service...
May 16 00:44:09.620213 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 16 00:44:09.620308 systemd[1]: Finished initrd-parse-etc.service.
May 16 00:44:09.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.621793 systemd[1]: Reached target initrd-fs.target.
May 16 00:44:09.622822 systemd[1]: Reached target initrd.target.
May 16 00:44:09.623951 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 16 00:44:09.624669 systemd[1]: Starting dracut-pre-pivot.service...
May 16 00:44:09.635313 systemd[1]: Finished dracut-pre-pivot.service.
May 16 00:44:09.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.636931 systemd[1]: Starting initrd-cleanup.service...
May 16 00:44:09.645195 systemd[1]: Stopped target nss-lookup.target.
May 16 00:44:09.646121 systemd[1]: Stopped target remote-cryptsetup.target.
May 16 00:44:09.647380 systemd[1]: Stopped target timers.target.
May 16 00:44:09.648483 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 16 00:44:09.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.648591 systemd[1]: Stopped dracut-pre-pivot.service.
May 16 00:44:09.649715 systemd[1]: Stopped target initrd.target.
May 16 00:44:09.651046 systemd[1]: Stopped target basic.target.
May 16 00:44:09.652154 systemd[1]: Stopped target ignition-complete.target.
May 16 00:44:09.653447 systemd[1]: Stopped target ignition-diskful.target.
May 16 00:44:09.654578 systemd[1]: Stopped target initrd-root-device.target.
May 16 00:44:09.655889 systemd[1]: Stopped target remote-fs.target.
May 16 00:44:09.657071 systemd[1]: Stopped target remote-fs-pre.target.
May 16 00:44:09.658337 systemd[1]: Stopped target sysinit.target.
May 16 00:44:09.659499 systemd[1]: Stopped target local-fs.target.
May 16 00:44:09.660641 systemd[1]: Stopped target local-fs-pre.target.
May 16 00:44:09.661785 systemd[1]: Stopped target swap.target.
May 16 00:44:09.663000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.662835 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 16 00:44:09.662959 systemd[1]: Stopped dracut-pre-mount.service.
May 16 00:44:09.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.664093 systemd[1]: Stopped target cryptsetup.target.
May 16 00:44:09.666000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.665093 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 16 00:44:09.665196 systemd[1]: Stopped dracut-initqueue.service.
May 16 00:44:09.666545 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 16 00:44:09.666639 systemd[1]: Stopped ignition-fetch-offline.service.
May 16 00:44:09.667776 systemd[1]: Stopped target paths.target.
May 16 00:44:09.668756 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 16 00:44:09.672718 systemd[1]: Stopped systemd-ask-password-console.path.
May 16 00:44:09.673666 systemd[1]: Stopped target slices.target.
May 16 00:44:09.674905 systemd[1]: Stopped target sockets.target.
May 16 00:44:09.676007 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 16 00:44:09.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.676121 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
May 16 00:44:09.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.677344 systemd[1]: ignition-files.service: Deactivated successfully.
May 16 00:44:09.677439 systemd[1]: Stopped ignition-files.service.
May 16 00:44:09.682705 iscsid[744]: iscsid shutting down.
May 16 00:44:09.679741 systemd[1]: Stopping ignition-mount.service...
May 16 00:44:09.680710 systemd[1]: Stopping iscsid.service...
May 16 00:44:09.684000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.683815 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 16 00:44:09.683948 systemd[1]: Stopped kmod-static-nodes.service.
May 16 00:44:09.687795 ignition[895]: INFO : Ignition 2.14.0
May 16 00:44:09.687795 ignition[895]: INFO : Stage: umount
May 16 00:44:09.687795 ignition[895]: INFO : no configs at "/usr/lib/ignition/base.d"
May 16 00:44:09.687795 ignition[895]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 16 00:44:09.687795 ignition[895]: INFO : umount: umount passed
May 16 00:44:09.687795 ignition[895]: INFO : Ignition finished successfully
May 16 00:44:09.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.685775 systemd[1]: Stopping sysroot-boot.service...
May 16 00:44:09.688379 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 16 00:44:09.688533 systemd[1]: Stopped systemd-udev-trigger.service.
May 16 00:44:09.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.689856 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 16 00:44:09.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.689961 systemd[1]: Stopped dracut-pre-trigger.service.
May 16 00:44:09.692671 systemd[1]: iscsid.service: Deactivated successfully.
May 16 00:44:09.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.692781 systemd[1]: Stopped iscsid.service.
May 16 00:44:09.694298 systemd[1]: ignition-mount.service: Deactivated successfully.
May 16 00:44:09.694372 systemd[1]: Stopped ignition-mount.service.
May 16 00:44:09.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:09.696286 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 00:44:09.696823 systemd[1]: iscsid.socket: Deactivated successfully.
May 16 00:44:09.696890 systemd[1]: Closed iscsid.socket.
May 16 00:44:09.698784 systemd[1]: ignition-disks.service: Deactivated successfully.
May 16 00:44:09.698989 systemd[1]: Stopped ignition-disks.service.
May 16 00:44:09.701396 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 16 00:44:09.701576 systemd[1]: Stopped ignition-kargs.service.
May 16 00:44:09.703567 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:44:09.703608 systemd[1]: Stopped ignition-setup.service. May 16 00:44:09.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.704596 systemd[1]: Stopping iscsiuio.service... May 16 00:44:09.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.706316 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:44:09.706401 systemd[1]: Finished initrd-cleanup.service. May 16 00:44:09.729000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.709817 systemd[1]: iscsiuio.service: Deactivated successfully. May 16 00:44:09.709920 systemd[1]: Stopped iscsiuio.service. May 16 00:44:09.710781 systemd[1]: Stopped target network.target. May 16 00:44:09.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.712837 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:44:09.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.712873 systemd[1]: Closed iscsiuio.socket. May 16 00:44:09.715254 systemd[1]: Stopping systemd-networkd.service... 
May 16 00:44:09.734000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.716946 systemd[1]: Stopping systemd-resolved.service... May 16 00:44:09.721736 systemd-networkd[738]: eth0: DHCPv6 lease lost May 16 00:44:09.738000 audit: BPF prog-id=9 op=UNLOAD May 16 00:44:09.722839 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:44:09.722940 systemd[1]: Stopped systemd-networkd.service. May 16 00:44:09.741000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.724735 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:44:09.724809 systemd[1]: Stopped sysroot-boot.service. May 16 00:44:09.726616 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:44:09.726644 systemd[1]: Closed systemd-networkd.socket. May 16 00:44:09.744000 audit: BPF prog-id=6 op=UNLOAD May 16 00:44:09.727733 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:44:09.727776 systemd[1]: Stopped initrd-setup-root.service. May 16 00:44:09.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.729969 systemd[1]: Stopping network-cleanup.service... May 16 00:44:09.747000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.731104 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:44:09.731164 systemd[1]: Stopped parse-ip-for-networkd.service. 
May 16 00:44:09.732496 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:44:09.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.732538 systemd[1]: Stopped systemd-sysctl.service. May 16 00:44:09.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.734388 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:44:09.754000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.734455 systemd[1]: Stopped systemd-modules-load.service. May 16 00:44:09.735883 systemd[1]: Stopping systemd-udevd.service... May 16 00:44:09.739974 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:44:09.740433 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:44:09.740527 systemd[1]: Stopped systemd-resolved.service. May 16 00:44:09.746297 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:44:09.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.746399 systemd[1]: Stopped network-cleanup.service. May 16 00:44:09.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:09.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.747564 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:44:09.747673 systemd[1]: Stopped systemd-udevd.service. May 16 00:44:09.748889 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:44:09.748936 systemd[1]: Closed systemd-udevd-control.socket. May 16 00:44:09.749941 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:44:09.749977 systemd[1]: Closed systemd-udevd-kernel.socket. May 16 00:44:09.751301 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:44:09.751345 systemd[1]: Stopped dracut-pre-udev.service. May 16 00:44:09.752490 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:44:09.752531 systemd[1]: Stopped dracut-cmdline.service. May 16 00:44:09.753782 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:44:09.753822 systemd[1]: Stopped dracut-cmdline-ask.service. May 16 00:44:09.755702 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 16 00:44:09.756459 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:44:09.756516 systemd[1]: Stopped systemd-vconsole-setup.service. May 16 00:44:09.761073 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:44:09.761156 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 16 00:44:09.762603 systemd[1]: Reached target initrd-switch-root.target. May 16 00:44:09.764442 systemd[1]: Starting initrd-switch-root.service... May 16 00:44:09.770995 systemd[1]: Switching root. May 16 00:44:09.788886 systemd-journald[290]: Journal stopped May 16 00:44:11.821466 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). 
May 16 00:44:11.821526 kernel: SELinux: Class mctp_socket not defined in policy. May 16 00:44:11.821539 kernel: SELinux: Class anon_inode not defined in policy. May 16 00:44:11.821549 kernel: SELinux: the above unknown classes and permissions will be allowed May 16 00:44:11.821559 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:44:11.821568 kernel: SELinux: policy capability open_perms=1 May 16 00:44:11.821584 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:44:11.821594 kernel: SELinux: policy capability always_check_network=0 May 16 00:44:11.821606 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:44:11.821711 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:44:11.821924 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:44:11.821951 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:44:11.821964 systemd[1]: Successfully loaded SELinux policy in 33.532ms. May 16 00:44:11.821985 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 8.954ms. May 16 00:44:11.821998 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 16 00:44:11.822009 systemd[1]: Detected virtualization kvm. May 16 00:44:11.822023 systemd[1]: Detected architecture arm64. May 16 00:44:11.822034 systemd[1]: Detected first boot. May 16 00:44:11.822044 systemd[1]: Initializing machine ID from VM UUID. May 16 00:44:11.822054 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 16 00:44:11.822065 systemd[1]: Populated /etc with preset unit settings. 
May 16 00:44:11.822079 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:44:11.822091 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:44:11.822103 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:44:11.822116 kernel: kauditd_printk_skb: 80 callbacks suppressed May 16 00:44:11.822126 kernel: audit: type=1334 audit(1747356251.699:84): prog-id=12 op=LOAD May 16 00:44:11.822136 kernel: audit: type=1334 audit(1747356251.699:85): prog-id=3 op=UNLOAD May 16 00:44:11.822146 kernel: audit: type=1334 audit(1747356251.699:86): prog-id=13 op=LOAD May 16 00:44:11.822156 kernel: audit: type=1334 audit(1747356251.700:87): prog-id=14 op=LOAD May 16 00:44:11.822165 kernel: audit: type=1334 audit(1747356251.700:88): prog-id=4 op=UNLOAD May 16 00:44:11.822174 kernel: audit: type=1334 audit(1747356251.700:89): prog-id=5 op=UNLOAD May 16 00:44:11.822183 kernel: audit: type=1334 audit(1747356251.700:90): prog-id=15 op=LOAD May 16 00:44:11.822195 kernel: audit: type=1334 audit(1747356251.700:91): prog-id=12 op=UNLOAD May 16 00:44:11.822206 kernel: audit: type=1334 audit(1747356251.701:92): prog-id=16 op=LOAD May 16 00:44:11.822217 kernel: audit: type=1334 audit(1747356251.702:93): prog-id=17 op=LOAD May 16 00:44:11.822227 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:44:11.822238 systemd[1]: Stopped initrd-switch-root.service. May 16 00:44:11.822248 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:44:11.822258 systemd[1]: Created slice system-addon\x2dconfig.slice. 
May 16 00:44:11.822269 systemd[1]: Created slice system-addon\x2drun.slice. May 16 00:44:11.822279 systemd[1]: Created slice system-getty.slice. May 16 00:44:11.822292 systemd[1]: Created slice system-modprobe.slice. May 16 00:44:11.822304 systemd[1]: Created slice system-serial\x2dgetty.slice. May 16 00:44:11.822315 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 16 00:44:11.822325 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 16 00:44:11.822336 systemd[1]: Created slice user.slice. May 16 00:44:11.822346 systemd[1]: Started systemd-ask-password-console.path. May 16 00:44:11.822357 systemd[1]: Started systemd-ask-password-wall.path. May 16 00:44:11.822367 systemd[1]: Set up automount boot.automount. May 16 00:44:11.822378 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 16 00:44:11.822389 systemd[1]: Stopped target initrd-switch-root.target. May 16 00:44:11.822400 systemd[1]: Stopped target initrd-fs.target. May 16 00:44:11.822415 systemd[1]: Stopped target initrd-root-fs.target. May 16 00:44:11.822425 systemd[1]: Reached target integritysetup.target. May 16 00:44:11.822435 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:44:11.822447 systemd[1]: Reached target remote-fs.target. May 16 00:44:11.822457 systemd[1]: Reached target slices.target. May 16 00:44:11.822468 systemd[1]: Reached target swap.target. May 16 00:44:11.822479 systemd[1]: Reached target torcx.target. May 16 00:44:11.822489 systemd[1]: Reached target veritysetup.target. May 16 00:44:11.822500 systemd[1]: Listening on systemd-coredump.socket. May 16 00:44:11.822510 systemd[1]: Listening on systemd-initctl.socket. May 16 00:44:11.822521 systemd[1]: Listening on systemd-networkd.socket. May 16 00:44:11.822532 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:44:11.822609 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:44:11.822626 systemd[1]: Listening on systemd-userdbd.socket. 
May 16 00:44:11.822637 systemd[1]: Mounting dev-hugepages.mount... May 16 00:44:11.822647 systemd[1]: Mounting dev-mqueue.mount... May 16 00:44:11.822657 systemd[1]: Mounting media.mount... May 16 00:44:11.822668 systemd[1]: Mounting sys-kernel-debug.mount... May 16 00:44:11.822709 systemd[1]: Mounting sys-kernel-tracing.mount... May 16 00:44:11.822722 systemd[1]: Mounting tmp.mount... May 16 00:44:11.822732 systemd[1]: Starting flatcar-tmpfiles.service... May 16 00:44:11.822742 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:44:11.822753 systemd[1]: Starting kmod-static-nodes.service... May 16 00:44:11.822764 systemd[1]: Starting modprobe@configfs.service... May 16 00:44:11.822774 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:44:11.822785 systemd[1]: Starting modprobe@drm.service... May 16 00:44:11.822796 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:44:11.822806 systemd[1]: Starting modprobe@fuse.service... May 16 00:44:11.822816 systemd[1]: Starting modprobe@loop.service... May 16 00:44:11.822827 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:44:11.822837 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:44:11.822849 systemd[1]: Stopped systemd-fsck-root.service. May 16 00:44:11.822860 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:44:11.822871 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:44:11.822880 systemd[1]: Stopped systemd-journald.service. May 16 00:44:11.822890 kernel: loop: module loaded May 16 00:44:11.822906 systemd[1]: Starting systemd-journald.service... May 16 00:44:11.822916 kernel: fuse: init (API version 7.34) May 16 00:44:11.822928 systemd[1]: Starting systemd-modules-load.service... May 16 00:44:11.822939 systemd[1]: Starting systemd-network-generator.service... 
May 16 00:44:11.822951 systemd[1]: Starting systemd-remount-fs.service... May 16 00:44:11.822963 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:44:11.822973 systemd[1]: verity-setup.service: Deactivated successfully. May 16 00:44:11.822984 systemd[1]: Stopped verity-setup.service. May 16 00:44:11.822993 systemd[1]: Mounted dev-hugepages.mount. May 16 00:44:11.823004 systemd[1]: Mounted dev-mqueue.mount. May 16 00:44:11.823013 systemd[1]: Mounted media.mount. May 16 00:44:11.823024 systemd[1]: Mounted sys-kernel-debug.mount. May 16 00:44:11.823033 systemd[1]: Mounted sys-kernel-tracing.mount. May 16 00:44:11.823045 systemd[1]: Mounted tmp.mount. May 16 00:44:11.823055 systemd[1]: Finished kmod-static-nodes.service. May 16 00:44:11.823066 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:44:11.823076 systemd[1]: Finished modprobe@configfs.service. May 16 00:44:11.823089 systemd-journald[994]: Journal started May 16 00:44:11.823138 systemd-journald[994]: Runtime Journal (/run/log/journal/8337541e062c4f72b24b025fc84dfce3) is 6.0M, max 48.7M, 42.6M free. 
May 16 00:44:09.855000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:44:09.927000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:44:09.927000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:44:09.927000 audit: BPF prog-id=10 op=LOAD May 16 00:44:09.927000 audit: BPF prog-id=10 op=UNLOAD May 16 00:44:09.927000 audit: BPF prog-id=11 op=LOAD May 16 00:44:09.927000 audit: BPF prog-id=11 op=UNLOAD May 16 00:44:09.982000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 16 00:44:09.982000 audit[928]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:09.982000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:44:09.983000 audit[928]: AVC avc: denied { associate } for pid=928 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 16 00:44:09.983000 audit[928]: SYSCALL arch=c00000b7 
syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=911 pid=928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:09.983000 audit: CWD cwd="/" May 16 00:44:09.983000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:44:09.983000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:44:09.983000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:44:11.699000 audit: BPF prog-id=12 op=LOAD May 16 00:44:11.699000 audit: BPF prog-id=3 op=UNLOAD May 16 00:44:11.699000 audit: BPF prog-id=13 op=LOAD May 16 00:44:11.700000 audit: BPF prog-id=14 op=LOAD May 16 00:44:11.700000 audit: BPF prog-id=4 op=UNLOAD May 16 00:44:11.700000 audit: BPF prog-id=5 op=UNLOAD May 16 00:44:11.700000 audit: BPF prog-id=15 op=LOAD May 16 00:44:11.700000 audit: BPF prog-id=12 op=UNLOAD May 16 00:44:11.701000 audit: BPF prog-id=16 op=LOAD May 16 00:44:11.702000 audit: BPF prog-id=17 op=LOAD May 16 00:44:11.702000 audit: BPF prog-id=13 op=UNLOAD May 16 00:44:11.702000 audit: BPF prog-id=14 op=UNLOAD May 16 00:44:11.703000 audit: BPF prog-id=18 op=LOAD May 16 00:44:11.703000 audit: BPF prog-id=15 op=UNLOAD May 16 00:44:11.703000 audit: BPF prog-id=19 op=LOAD May 16 00:44:11.704000 audit: BPF prog-id=20 op=LOAD May 16 00:44:11.704000 audit: BPF 
prog-id=16 op=UNLOAD May 16 00:44:11.704000 audit: BPF prog-id=17 op=UNLOAD May 16 00:44:11.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.718000 audit: BPF prog-id=18 op=UNLOAD May 16 00:44:11.790000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:11.794000 audit: BPF prog-id=21 op=LOAD May 16 00:44:11.794000 audit: BPF prog-id=22 op=LOAD May 16 00:44:11.794000 audit: BPF prog-id=23 op=LOAD May 16 00:44:11.794000 audit: BPF prog-id=19 op=UNLOAD May 16 00:44:11.794000 audit: BPF prog-id=20 op=UNLOAD May 16 00:44:11.808000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.819000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 16 00:44:11.819000 audit[994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=fffff0391690 a2=4000 a3=1 items=0 ppid=1 pid=994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:44:11.819000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 16 00:44:11.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:09.980911 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:44:11.697963 systemd[1]: Queued start job for default target multi-user.target. May 16 00:44:09.981292 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 16 00:44:11.697976 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 16 00:44:09.981313 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 16 00:44:11.705519 systemd[1]: systemd-journald.service: Deactivated successfully. May 16 00:44:09.981345 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 16 00:44:09.981356 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="skipped missing lower profile" missing profile=oem May 16 00:44:11.825025 systemd[1]: Started systemd-journald.service. 
May 16 00:44:09.981389 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 16 00:44:09.981401 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 16 00:44:11.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:09.981757 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 16 00:44:09.981804 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 16 00:44:09.981817 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 16 00:44:09.982429 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 16 00:44:09.982469 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 16 00:44:09.982488 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" 
path=/usr/share/oem/torcx/store/3510.3.7 May 16 00:44:11.825545 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:44:09.982503 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 16 00:44:09.982520 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 16 00:44:11.825720 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:44:09.982533 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:09Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 16 00:44:11.441942 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:11Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:44:11.442200 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:11Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:44:11.442309 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:11Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:44:11.442471 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:11Z" level=debug msg="systemd units propagated" 
assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:44:11.442523 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:11Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 16 00:44:11.442581 /usr/lib/systemd/system-generators/torcx-generator[928]: time="2025-05-16T00:44:11Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 16 00:44:11.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.827086 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:44:11.827251 systemd[1]: Finished modprobe@drm.service. May 16 00:44:11.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:44:11.828441 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:44:11.828617 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:44:11.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.829000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.829802 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:44:11.829968 systemd[1]: Finished modprobe@fuse.service. May 16 00:44:11.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.830000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.831104 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:44:11.831276 systemd[1]: Finished modprobe@loop.service. May 16 00:44:11.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.831000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.832425 systemd[1]: Finished flatcar-tmpfiles.service. 
May 16 00:44:11.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.833752 systemd[1]: Finished systemd-modules-load.service. May 16 00:44:11.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.834954 systemd[1]: Finished systemd-network-generator.service. May 16 00:44:11.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.836290 systemd[1]: Finished systemd-remount-fs.service. May 16 00:44:11.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:11.837822 systemd[1]: Reached target network-pre.target. May 16 00:44:11.840051 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 16 00:44:11.842118 systemd[1]: Mounting sys-kernel-config.mount... May 16 00:44:11.842931 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:44:11.844885 systemd[1]: Starting systemd-hwdb-update.service... May 16 00:44:11.847164 systemd[1]: Starting systemd-journal-flush.service... May 16 00:44:11.848219 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:44:11.849480 systemd[1]: Starting systemd-random-seed.service... 
May 16 00:44:11.850530 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:44:11.851589 systemd[1]: Starting systemd-sysctl.service...
May 16 00:44:11.854783 systemd[1]: Starting systemd-sysusers.service...
May 16 00:44:11.858503 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 16 00:44:11.859369 systemd[1]: Mounted sys-kernel-config.mount.
May 16 00:44:11.862930 systemd-journald[994]: Time spent on flushing to /var/log/journal/8337541e062c4f72b24b025fc84dfce3 is 20.856ms for 988 entries.
May 16 00:44:11.862930 systemd-journald[994]: System Journal (/var/log/journal/8337541e062c4f72b24b025fc84dfce3) is 8.0M, max 195.6M, 187.6M free.
May 16 00:44:11.891817 systemd-journald[994]: Received client request to flush runtime journal.
May 16 00:44:11.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:11.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:11.881000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:11.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:11.863202 systemd[1]: Finished systemd-udev-trigger.service.
May 16 00:44:11.865564 systemd[1]: Finished systemd-random-seed.service.
May 16 00:44:11.892300 udevadm[1029]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 16 00:44:11.866451 systemd[1]: Reached target first-boot-complete.target.
May 16 00:44:11.868401 systemd[1]: Starting systemd-udev-settle.service...
May 16 00:44:11.881165 systemd[1]: Finished systemd-sysusers.service.
May 16 00:44:11.882875 systemd[1]: Finished systemd-sysctl.service.
May 16 00:44:11.892641 systemd[1]: Finished systemd-journal-flush.service.
May 16 00:44:11.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:12.237987 systemd[1]: Finished systemd-hwdb-update.service.
May 16 00:44:12.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:12.238000 audit: BPF prog-id=24 op=LOAD
May 16 00:44:12.238000 audit: BPF prog-id=25 op=LOAD
May 16 00:44:12.238000 audit: BPF prog-id=7 op=UNLOAD
May 16 00:44:12.238000 audit: BPF prog-id=8 op=UNLOAD
May 16 00:44:12.240009 systemd[1]: Starting systemd-udevd.service...
May 16 00:44:12.254853 systemd-udevd[1032]: Using default interface naming scheme 'v252'.
May 16 00:44:12.266009 systemd[1]: Started systemd-udevd.service.
May 16 00:44:12.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:12.266000 audit: BPF prog-id=26 op=LOAD
May 16 00:44:12.269979 systemd[1]: Starting systemd-networkd.service...
May 16 00:44:12.283000 audit: BPF prog-id=27 op=LOAD May 16 00:44:12.283000 audit: BPF prog-id=28 op=LOAD May 16 00:44:12.283000 audit: BPF prog-id=29 op=LOAD May 16 00:44:12.284724 systemd[1]: Starting systemd-userdbd.service... May 16 00:44:12.293379 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 16 00:44:12.313919 systemd[1]: Started systemd-userdbd.service. May 16 00:44:12.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.327460 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:44:12.366809 systemd-networkd[1039]: lo: Link UP May 16 00:44:12.367142 systemd-networkd[1039]: lo: Gained carrier May 16 00:44:12.367557 systemd-networkd[1039]: Enumeration completed May 16 00:44:12.367770 systemd-networkd[1039]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:44:12.370216 systemd[1]: Started systemd-networkd.service. May 16 00:44:12.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.371347 systemd-networkd[1039]: eth0: Link UP May 16 00:44:12.371423 systemd-networkd[1039]: eth0: Gained carrier May 16 00:44:12.390183 systemd[1]: Finished systemd-udev-settle.service. May 16 00:44:12.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.392709 systemd[1]: Starting lvm2-activation-early.service... 
May 16 00:44:12.404838 systemd-networkd[1039]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:44:12.405316 lvm[1065]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:44:12.428633 systemd[1]: Finished lvm2-activation-early.service. May 16 00:44:12.429000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.429770 systemd[1]: Reached target cryptsetup.target. May 16 00:44:12.431740 systemd[1]: Starting lvm2-activation.service... May 16 00:44:12.435369 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:44:12.458555 systemd[1]: Finished lvm2-activation.service. May 16 00:44:12.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.459552 systemd[1]: Reached target local-fs-pre.target. May 16 00:44:12.460415 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:44:12.460445 systemd[1]: Reached target local-fs.target. May 16 00:44:12.461258 systemd[1]: Reached target machines.target. May 16 00:44:12.463400 systemd[1]: Starting ldconfig.service... May 16 00:44:12.464557 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:44:12.464671 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:44:12.466337 systemd[1]: Starting systemd-boot-update.service... 
May 16 00:44:12.468418 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 16 00:44:12.470892 systemd[1]: Starting systemd-machine-id-commit.service... May 16 00:44:12.473408 systemd[1]: Starting systemd-sysext.service... May 16 00:44:12.475305 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1068 (bootctl) May 16 00:44:12.477535 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 16 00:44:12.487604 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 16 00:44:12.488000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.492971 systemd[1]: Unmounting usr-share-oem.mount... May 16 00:44:12.503218 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 16 00:44:12.503414 systemd[1]: Unmounted usr-share-oem.mount. May 16 00:44:12.548699 kernel: loop0: detected capacity change from 0 to 207008 May 16 00:44:12.549184 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 00:44:12.550588 systemd[1]: Finished systemd-machine-id-commit.service. May 16 00:44:12.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.559646 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31) May 16 00:44:12.559646 systemd-fsck[1077]: /dev/vda1: 236 files, 117310/258078 clusters May 16 00:44:12.562267 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
May 16 00:44:12.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.565252 systemd[1]: Mounting boot.mount... May 16 00:44:12.566693 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:44:12.572821 systemd[1]: Mounted boot.mount. May 16 00:44:12.579483 systemd[1]: Finished systemd-boot-update.service. May 16 00:44:12.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.590696 kernel: loop1: detected capacity change from 0 to 207008 May 16 00:44:12.594402 (sd-sysext)[1083]: Using extensions 'kubernetes'. May 16 00:44:12.594761 (sd-sysext)[1083]: Merged extensions into '/usr'. May 16 00:44:12.617437 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:44:12.620184 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:44:12.623992 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:44:12.626961 systemd[1]: Starting modprobe@loop.service... May 16 00:44:12.627766 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:44:12.627953 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:44:12.628929 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:44:12.629116 systemd[1]: Finished modprobe@dm_mod.service. 
May 16 00:44:12.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.630471 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:44:12.630649 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:44:12.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.630000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.632046 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:44:12.632352 systemd[1]: Finished modprobe@loop.service. May 16 00:44:12.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.634239 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
May 16 00:44:12.634525 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:44:12.660383 ldconfig[1067]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 00:44:12.663284 systemd[1]: Finished ldconfig.service. May 16 00:44:12.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.812397 systemd[1]: Mounting usr-share-oem.mount... May 16 00:44:12.817271 systemd[1]: Mounted usr-share-oem.mount. May 16 00:44:12.818892 systemd[1]: Finished systemd-sysext.service. May 16 00:44:12.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.820884 systemd[1]: Starting ensure-sysext.service... May 16 00:44:12.822351 systemd[1]: Starting systemd-tmpfiles-setup.service... May 16 00:44:12.826652 systemd[1]: Reloading. May 16 00:44:12.833437 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 16 00:44:12.835166 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:44:12.837732 systemd-tmpfiles[1090]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. 
May 16 00:44:12.868836 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-05-16T00:44:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:44:12.868867 /usr/lib/systemd/system-generators/torcx-generator[1110]: time="2025-05-16T00:44:12Z" level=info msg="torcx already run" May 16 00:44:12.925416 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:44:12.925436 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:44:12.940976 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 16 00:44:12.982000 audit: BPF prog-id=30 op=LOAD May 16 00:44:12.982000 audit: BPF prog-id=26 op=UNLOAD May 16 00:44:12.983000 audit: BPF prog-id=31 op=LOAD May 16 00:44:12.983000 audit: BPF prog-id=32 op=LOAD May 16 00:44:12.983000 audit: BPF prog-id=24 op=UNLOAD May 16 00:44:12.983000 audit: BPF prog-id=25 op=UNLOAD May 16 00:44:12.984000 audit: BPF prog-id=33 op=LOAD May 16 00:44:12.984000 audit: BPF prog-id=27 op=UNLOAD May 16 00:44:12.984000 audit: BPF prog-id=34 op=LOAD May 16 00:44:12.984000 audit: BPF prog-id=35 op=LOAD May 16 00:44:12.984000 audit: BPF prog-id=28 op=UNLOAD May 16 00:44:12.984000 audit: BPF prog-id=29 op=UNLOAD May 16 00:44:12.984000 audit: BPF prog-id=36 op=LOAD May 16 00:44:12.984000 audit: BPF prog-id=21 op=UNLOAD May 16 00:44:12.984000 audit: BPF prog-id=37 op=LOAD May 16 00:44:12.984000 audit: BPF prog-id=38 op=LOAD May 16 00:44:12.984000 audit: BPF prog-id=22 op=UNLOAD May 16 00:44:12.984000 audit: BPF prog-id=23 op=UNLOAD May 16 00:44:12.989001 systemd[1]: Finished systemd-tmpfiles-setup.service. May 16 00:44:12.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:12.993322 systemd[1]: Starting audit-rules.service... May 16 00:44:12.995251 systemd[1]: Starting clean-ca-certificates.service... May 16 00:44:12.997873 systemd[1]: Starting systemd-journal-catalog-update.service... May 16 00:44:13.002000 audit: BPF prog-id=39 op=LOAD May 16 00:44:13.003796 systemd[1]: Starting systemd-resolved.service... May 16 00:44:13.005000 audit: BPF prog-id=40 op=LOAD May 16 00:44:13.007187 systemd[1]: Starting systemd-timesyncd.service... May 16 00:44:13.009021 systemd[1]: Starting systemd-update-utmp.service... May 16 00:44:13.010508 systemd[1]: Finished clean-ca-certificates.service. 
May 16 00:44:13.011000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.013000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.013510 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:44:13.016637 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:44:13.018370 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:44:13.020288 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:44:13.022183 systemd[1]: Starting modprobe@loop.service...
May 16 00:44:13.022788 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:44:13.022978 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:44:13.023126 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:44:13.024345 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:44:13.024479 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:44:13.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.025663 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:44:13.025791 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:44:13.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.027003 systemd[1]: Finished systemd-journal-catalog-update.service.
May 16 00:44:13.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.028202 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:44:13.028314 systemd[1]: Finished modprobe@loop.service.
May 16 00:44:13.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.030717 systemd[1]: Finished systemd-update-utmp.service.
May 16 00:44:13.032674 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:44:13.034123 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:44:13.035979 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:44:13.037610 systemd[1]: Starting modprobe@loop.service...
May 16 00:44:13.038263 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:44:13.038380 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:44:13.039547 systemd[1]: Starting systemd-update-done.service...
May 16 00:44:13.040265 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:44:13.041243 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:44:13.041404 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:44:13.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.042581 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:44:13.042701 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:44:13.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:13.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:13.043701 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:44:13.043813 systemd[1]: Finished modprobe@loop.service. May 16 00:44:13.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:13.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:13.046594 systemd[1]: Finished systemd-update-done.service. May 16 00:44:13.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:13.047888 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:44:13.049865 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:44:13.051617 systemd[1]: Starting modprobe@drm.service... May 16 00:44:13.053291 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:44:13.058700 systemd[1]: Starting modprobe@loop.service... 
May 16 00:44:13.059372 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:44:13.059511 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:44:13.062192 systemd[1]: Starting systemd-networkd-wait-online.service... May 16 00:44:13.063054 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:44:13.064160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:44:13.064296 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:44:13.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:13.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:13.065381 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:44:13.065506 systemd[1]: Finished modprobe@drm.service. May 16 00:44:13.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:13.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:44:13.066551 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
May 16 00:44:13.066668 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:44:13.066000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.068039 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:44:13.068163 systemd[1]: Finished modprobe@loop.service.
May 16 00:44:13.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.069287 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:44:13.069399 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:44:13.071484 systemd[1]: Finished ensure-sysext.service.
May 16 00:44:13.072003 systemd-resolved[1155]: Positive Trust Anchors:
May 16 00:44:13.072014 systemd-resolved[1155]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:44:13.072041 systemd-resolved[1155]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 16 00:44:13.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:44:13.081000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 16 00:44:13.081000 audit[1181]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffcfdcf0f0 a2=420 a3=0 items=0 ppid=1149 pid=1181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:44:13.081000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 16 00:44:13.082661 augenrules[1181]: No rules
May 16 00:44:13.082964 systemd[1]: Started systemd-timesyncd.service.
May 16 00:44:13.083952 systemd-timesyncd[1159]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 00:44:13.084225 systemd-timesyncd[1159]: Initial clock synchronization to Fri 2025-05-16 00:44:13.114522 UTC.
May 16 00:44:13.084255 systemd[1]: Finished audit-rules.service.
May 16 00:44:13.085008 systemd[1]: Reached target time-set.target.
May 16 00:44:13.087044 systemd-resolved[1155]: Defaulting to hostname 'linux'.
May 16 00:44:13.088348 systemd[1]: Started systemd-resolved.service. May 16 00:44:13.089171 systemd[1]: Reached target network.target. May 16 00:44:13.089753 systemd[1]: Reached target nss-lookup.target. May 16 00:44:13.090324 systemd[1]: Reached target sysinit.target. May 16 00:44:13.090963 systemd[1]: Started motdgen.path. May 16 00:44:13.091470 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 16 00:44:13.092414 systemd[1]: Started logrotate.timer. May 16 00:44:13.093044 systemd[1]: Started mdadm.timer. May 16 00:44:13.093532 systemd[1]: Started systemd-tmpfiles-clean.timer. May 16 00:44:13.094155 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:44:13.094183 systemd[1]: Reached target paths.target. May 16 00:44:13.094702 systemd[1]: Reached target timers.target. May 16 00:44:13.095497 systemd[1]: Listening on dbus.socket. May 16 00:44:13.097074 systemd[1]: Starting docker.socket... May 16 00:44:13.100043 systemd[1]: Listening on sshd.socket. May 16 00:44:13.100706 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:44:13.101126 systemd[1]: Listening on docker.socket. May 16 00:44:13.101749 systemd[1]: Reached target sockets.target. May 16 00:44:13.102311 systemd[1]: Reached target basic.target. May 16 00:44:13.102924 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:44:13.102954 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:44:13.103846 systemd[1]: Starting containerd.service... May 16 00:44:13.105375 systemd[1]: Starting dbus.service... May 16 00:44:13.107137 systemd[1]: Starting enable-oem-cloudinit.service... 
May 16 00:44:13.109064 systemd[1]: Starting extend-filesystems.service... May 16 00:44:13.109787 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 16 00:44:13.111140 systemd[1]: Starting motdgen.service... May 16 00:44:13.112806 systemd[1]: Starting ssh-key-proc-cmdline.service... May 16 00:44:13.114505 systemd[1]: Starting sshd-keygen.service... May 16 00:44:13.117647 systemd[1]: Starting systemd-logind.service... May 16 00:44:13.118327 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:44:13.118433 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:44:13.118924 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:44:13.119609 systemd[1]: Starting update-engine.service... May 16 00:44:13.122339 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 16 00:44:13.124847 jq[1191]: false May 16 00:44:13.125087 jq[1205]: true May 16 00:44:13.126478 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:44:13.126728 systemd[1]: Finished ssh-key-proc-cmdline.service. May 16 00:44:13.127882 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:44:13.128039 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. 
May 16 00:44:13.133087 jq[1208]: true May 16 00:44:13.136527 extend-filesystems[1192]: Found loop1 May 16 00:44:13.136527 extend-filesystems[1192]: Found vda May 16 00:44:13.136527 extend-filesystems[1192]: Found vda1 May 16 00:44:13.141949 extend-filesystems[1192]: Found vda2 May 16 00:44:13.141949 extend-filesystems[1192]: Found vda3 May 16 00:44:13.141949 extend-filesystems[1192]: Found usr May 16 00:44:13.141949 extend-filesystems[1192]: Found vda4 May 16 00:44:13.141949 extend-filesystems[1192]: Found vda6 May 16 00:44:13.141949 extend-filesystems[1192]: Found vda7 May 16 00:44:13.141949 extend-filesystems[1192]: Found vda9 May 16 00:44:13.141949 extend-filesystems[1192]: Checking size of /dev/vda9 May 16 00:44:13.139454 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:44:13.143796 dbus-daemon[1190]: [system] SELinux support is enabled May 16 00:44:13.171942 extend-filesystems[1192]: Resized partition /dev/vda9 May 16 00:44:13.139604 systemd[1]: Finished motdgen.service. May 16 00:44:13.143945 systemd[1]: Started dbus.service. May 16 00:44:13.160496 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:44:13.160525 systemd[1]: Reached target system-config.target. May 16 00:44:13.161231 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:44:13.161247 systemd[1]: Reached target user-config.target. May 16 00:44:13.175546 systemd-logind[1200]: Watching system buttons on /dev/input/event0 (Power Button) May 16 00:44:13.176838 extend-filesystems[1229]: resize2fs 1.46.5 (30-Dec-2021) May 16 00:44:13.177203 systemd-logind[1200]: New seat seat0. May 16 00:44:13.182985 systemd[1]: Started systemd-logind.service. 
May 16 00:44:13.189916 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 00:44:13.194099 bash[1233]: Updated "/home/core/.ssh/authorized_keys" May 16 00:44:13.194837 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 16 00:44:13.209504 update_engine[1202]: I0516 00:44:13.209283 1202 main.cc:92] Flatcar Update Engine starting May 16 00:44:13.211814 systemd[1]: Started update-engine.service. May 16 00:44:13.215140 systemd[1]: Started locksmithd.service. May 16 00:44:13.216327 update_engine[1202]: I0516 00:44:13.216272 1202 update_check_scheduler.cc:74] Next update check in 11m15s May 16 00:44:13.217698 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 00:44:13.226672 extend-filesystems[1229]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 00:44:13.226672 extend-filesystems[1229]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:44:13.226672 extend-filesystems[1229]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 00:44:13.229289 extend-filesystems[1192]: Resized filesystem in /dev/vda9 May 16 00:44:13.228272 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:44:13.228439 systemd[1]: Finished extend-filesystems.service. May 16 00:44:13.244195 env[1212]: time="2025-05-16T00:44:13.242854440Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 16 00:44:13.262604 env[1212]: time="2025-05-16T00:44:13.262560640Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:44:13.262723 env[1212]: time="2025-05-16T00:44:13.262706240Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:44:13.263913 env[1212]: time="2025-05-16T00:44:13.263862560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:44:13.263913 env[1212]: time="2025-05-16T00:44:13.263894880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:44:13.264119 env[1212]: time="2025-05-16T00:44:13.264090880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:44:13.264119 env[1212]: time="2025-05-16T00:44:13.264113880Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 16 00:44:13.264182 env[1212]: time="2025-05-16T00:44:13.264128200Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 16 00:44:13.264182 env[1212]: time="2025-05-16T00:44:13.264137840Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:44:13.264219 env[1212]: time="2025-05-16T00:44:13.264204680Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 16 00:44:13.264495 env[1212]: time="2025-05-16T00:44:13.264466040Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:44:13.264600 env[1212]: time="2025-05-16T00:44:13.264583320Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:44:13.264627 env[1212]: time="2025-05-16T00:44:13.264602760Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:44:13.264664 env[1212]: time="2025-05-16T00:44:13.264650600Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 16 00:44:13.264701 env[1212]: time="2025-05-16T00:44:13.264664720Z" level=info msg="metadata content store policy set" policy=shared May 16 00:44:13.267548 env[1212]: time="2025-05-16T00:44:13.267520800Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:44:13.267610 env[1212]: time="2025-05-16T00:44:13.267552760Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:44:13.267610 env[1212]: time="2025-05-16T00:44:13.267566040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:44:13.267610 env[1212]: time="2025-05-16T00:44:13.267599840Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:44:13.267670 env[1212]: time="2025-05-16T00:44:13.267614520Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:44:13.267670 env[1212]: time="2025-05-16T00:44:13.267627760Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:44:13.267670 env[1212]: time="2025-05-16T00:44:13.267640320Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 May 16 00:44:13.268068 env[1212]: time="2025-05-16T00:44:13.268013160Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:44:13.268068 env[1212]: time="2025-05-16T00:44:13.268038600Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 16 00:44:13.268068 env[1212]: time="2025-05-16T00:44:13.268052920Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 16 00:44:13.268068 env[1212]: time="2025-05-16T00:44:13.268065200Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:44:13.268180 env[1212]: time="2025-05-16T00:44:13.268077960Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:44:13.268204 env[1212]: time="2025-05-16T00:44:13.268180040Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:44:13.268301 env[1212]: time="2025-05-16T00:44:13.268247040Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:44:13.268510 env[1212]: time="2025-05-16T00:44:13.268489280Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:44:13.268553 env[1212]: time="2025-05-16T00:44:13.268516600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:44:13.268553 env[1212]: time="2025-05-16T00:44:13.268530200Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:44:13.268654 env[1212]: time="2025-05-16T00:44:13.268640440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 16 00:44:13.268654 env[1212]: time="2025-05-16T00:44:13.268656720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:44:13.268732 env[1212]: time="2025-05-16T00:44:13.268669160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:44:13.268792 env[1212]: time="2025-05-16T00:44:13.268777280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:44:13.268814 env[1212]: time="2025-05-16T00:44:13.268796880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:44:13.268814 env[1212]: time="2025-05-16T00:44:13.268809520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:44:13.268860 env[1212]: time="2025-05-16T00:44:13.268821600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:44:13.268860 env[1212]: time="2025-05-16T00:44:13.268833120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:44:13.268860 env[1212]: time="2025-05-16T00:44:13.268846120Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:44:13.269000 env[1212]: time="2025-05-16T00:44:13.268969240Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:44:13.269000 env[1212]: time="2025-05-16T00:44:13.268990640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:44:13.269048 env[1212]: time="2025-05-16T00:44:13.269003680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 May 16 00:44:13.269048 env[1212]: time="2025-05-16T00:44:13.269014920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 16 00:44:13.269048 env[1212]: time="2025-05-16T00:44:13.269028400Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 16 00:44:13.269048 env[1212]: time="2025-05-16T00:44:13.269038720Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:44:13.269123 env[1212]: time="2025-05-16T00:44:13.269055480Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 16 00:44:13.269123 env[1212]: time="2025-05-16T00:44:13.269087280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 16 00:44:13.269318 env[1212]: time="2025-05-16T00:44:13.269269840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.269324920Z" level=info msg="Connect containerd service" May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.269392400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.270128440Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.270355120Z" level=info msg="Start subscribing containerd event" May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.270393040Z" level=info msg="Start recovering state" May 16 00:44:13.271831 env[1212]: 
time="2025-05-16T00:44:13.270445960Z" level=info msg="Start event monitor" May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.270460080Z" level=info msg="Start snapshots syncer" May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.270468920Z" level=info msg="Start cni network conf syncer for default" May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.270477280Z" level=info msg="Start streaming server" May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.270886440Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:44:13.271831 env[1212]: time="2025-05-16T00:44:13.270940680Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:44:13.271059 systemd[1]: Started containerd.service. May 16 00:44:13.272121 env[1212]: time="2025-05-16T00:44:13.272095960Z" level=info msg="containerd successfully booted in 0.029951s" May 16 00:44:13.273937 locksmithd[1239]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:44:14.208775 sshd_keygen[1206]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:44:14.225881 systemd[1]: Finished sshd-keygen.service. May 16 00:44:14.227919 systemd[1]: Starting issuegen.service... May 16 00:44:14.232361 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:44:14.232510 systemd[1]: Finished issuegen.service. May 16 00:44:14.234518 systemd[1]: Starting systemd-user-sessions.service... May 16 00:44:14.240568 systemd[1]: Finished systemd-user-sessions.service. May 16 00:44:14.242549 systemd[1]: Started getty@tty1.service. May 16 00:44:14.244316 systemd[1]: Started serial-getty@ttyAMA0.service. May 16 00:44:14.245154 systemd[1]: Reached target getty.target. May 16 00:44:14.286836 systemd-networkd[1039]: eth0: Gained IPv6LL May 16 00:44:14.288503 systemd[1]: Finished systemd-networkd-wait-online.service. May 16 00:44:14.289581 systemd[1]: Reached target network-online.target. 
May 16 00:44:14.292105 systemd[1]: Starting kubelet.service... May 16 00:44:14.865998 systemd[1]: Started kubelet.service. May 16 00:44:14.867301 systemd[1]: Reached target multi-user.target. May 16 00:44:14.869365 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 16 00:44:14.876112 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 16 00:44:14.876265 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 16 00:44:14.877235 systemd[1]: Startup finished in 592ms (kernel) + 4.248s (initrd) + 5.060s (userspace) = 9.901s. May 16 00:44:15.279983 kubelet[1267]: E0516 00:44:15.279862 1267 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:44:15.281816 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:44:15.281944 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:44:18.301851 systemd[1]: Created slice system-sshd.slice. May 16 00:44:18.302995 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:33192.service. May 16 00:44:18.355717 sshd[1276]: Accepted publickey for core from 10.0.0.1 port 33192 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:18.357834 sshd[1276]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:18.372398 systemd-logind[1200]: New session 1 of user core. May 16 00:44:18.373397 systemd[1]: Created slice user-500.slice. May 16 00:44:18.374993 systemd[1]: Starting user-runtime-dir@500.service... May 16 00:44:18.384527 systemd[1]: Finished user-runtime-dir@500.service. May 16 00:44:18.386334 systemd[1]: Starting user@500.service... 
May 16 00:44:18.390430 (systemd)[1279]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:18.453502 systemd[1279]: Queued start job for default target default.target. May 16 00:44:18.454015 systemd[1279]: Reached target paths.target. May 16 00:44:18.454048 systemd[1279]: Reached target sockets.target. May 16 00:44:18.454059 systemd[1279]: Reached target timers.target. May 16 00:44:18.454069 systemd[1279]: Reached target basic.target. May 16 00:44:18.454107 systemd[1279]: Reached target default.target. May 16 00:44:18.454133 systemd[1279]: Startup finished in 56ms. May 16 00:44:18.454339 systemd[1]: Started user@500.service. May 16 00:44:18.455446 systemd[1]: Started session-1.scope. May 16 00:44:18.507491 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:33198.service. May 16 00:44:18.550765 sshd[1288]: Accepted publickey for core from 10.0.0.1 port 33198 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:18.552274 sshd[1288]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:18.556933 systemd-logind[1200]: New session 2 of user core. May 16 00:44:18.556996 systemd[1]: Started session-2.scope. May 16 00:44:18.615259 sshd[1288]: pam_unix(sshd:session): session closed for user core May 16 00:44:18.618978 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:33200.service. May 16 00:44:18.619635 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:33198.service: Deactivated successfully. May 16 00:44:18.620554 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:44:18.621132 systemd-logind[1200]: Session 2 logged out. Waiting for processes to exit. May 16 00:44:18.621912 systemd-logind[1200]: Removed session 2. 
May 16 00:44:18.662474 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 33200 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:18.663760 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:18.667209 systemd-logind[1200]: New session 3 of user core. May 16 00:44:18.668044 systemd[1]: Started session-3.scope. May 16 00:44:18.717981 sshd[1293]: pam_unix(sshd:session): session closed for user core May 16 00:44:18.721473 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:33200.service: Deactivated successfully. May 16 00:44:18.722084 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:44:18.722619 systemd-logind[1200]: Session 3 logged out. Waiting for processes to exit. May 16 00:44:18.723824 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:33206.service. May 16 00:44:18.724665 systemd-logind[1200]: Removed session 3. May 16 00:44:18.766841 sshd[1300]: Accepted publickey for core from 10.0.0.1 port 33206 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:18.768363 sshd[1300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:18.771628 systemd-logind[1200]: New session 4 of user core. May 16 00:44:18.772444 systemd[1]: Started session-4.scope. May 16 00:44:18.826144 sshd[1300]: pam_unix(sshd:session): session closed for user core May 16 00:44:18.830082 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:33220.service. May 16 00:44:18.830610 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:33206.service: Deactivated successfully. May 16 00:44:18.831225 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:44:18.831793 systemd-logind[1200]: Session 4 logged out. Waiting for processes to exit. May 16 00:44:18.832779 systemd-logind[1200]: Removed session 4. 
May 16 00:44:18.872729 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 33220 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:44:18.873951 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:44:18.877304 systemd-logind[1200]: New session 5 of user core. May 16 00:44:18.878162 systemd[1]: Started session-5.scope. May 16 00:44:18.944517 sudo[1310]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:44:18.945065 sudo[1310]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:44:18.957142 systemd[1]: Starting coreos-metadata.service... May 16 00:44:18.963586 systemd[1]: coreos-metadata.service: Deactivated successfully. May 16 00:44:18.963786 systemd[1]: Finished coreos-metadata.service. May 16 00:44:19.481628 systemd[1]: Stopped kubelet.service. May 16 00:44:19.483688 systemd[1]: Starting kubelet.service... May 16 00:44:19.506081 systemd[1]: Reloading. May 16 00:44:19.550938 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2025-05-16T00:44:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:44:19.550969 /usr/lib/systemd/system-generators/torcx-generator[1371]: time="2025-05-16T00:44:19Z" level=info msg="torcx already run" May 16 00:44:19.638025 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:44:19.638042 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
May 16 00:44:19.653750 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:44:19.720118 systemd[1]: Started kubelet.service. May 16 00:44:19.721401 systemd[1]: Stopping kubelet.service... May 16 00:44:19.721642 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:44:19.721887 systemd[1]: Stopped kubelet.service. May 16 00:44:19.723599 systemd[1]: Starting kubelet.service... May 16 00:44:19.818624 systemd[1]: Started kubelet.service. May 16 00:44:19.857294 kubelet[1414]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:44:19.857294 kubelet[1414]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 16 00:44:19.857294 kubelet[1414]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:44:19.857648 kubelet[1414]: I0516 00:44:19.857384 1414 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:44:20.663885 kubelet[1414]: I0516 00:44:20.663837 1414 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" May 16 00:44:20.663885 kubelet[1414]: I0516 00:44:20.663871 1414 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:44:20.664178 kubelet[1414]: I0516 00:44:20.664149 1414 server.go:954] "Client rotation is on, will bootstrap in background" May 16 00:44:20.728173 kubelet[1414]: I0516 00:44:20.728133 1414 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:44:20.740797 kubelet[1414]: E0516 00:44:20.740764 1414 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:44:20.740970 kubelet[1414]: I0516 00:44:20.740956 1414 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:44:20.748235 kubelet[1414]: I0516 00:44:20.748204 1414 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:44:20.748994 kubelet[1414]: I0516 00:44:20.748950 1414 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:44:20.749171 kubelet[1414]: I0516 00:44:20.749000 1414 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.92","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:44:20.749259 kubelet[1414]: I0516 00:44:20.749245 1414 topology_manager.go:138] "Creating topology manager with none policy" 
May 16 00:44:20.749259 kubelet[1414]: I0516 00:44:20.749257 1414 container_manager_linux.go:304] "Creating device plugin manager" May 16 00:44:20.749525 kubelet[1414]: I0516 00:44:20.749500 1414 state_mem.go:36] "Initialized new in-memory state store" May 16 00:44:20.752988 kubelet[1414]: I0516 00:44:20.752956 1414 kubelet.go:446] "Attempting to sync node with API server" May 16 00:44:20.752988 kubelet[1414]: I0516 00:44:20.752990 1414 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:44:20.753105 kubelet[1414]: I0516 00:44:20.753010 1414 kubelet.go:352] "Adding apiserver pod source" May 16 00:44:20.753105 kubelet[1414]: I0516 00:44:20.753036 1414 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:44:20.753229 kubelet[1414]: E0516 00:44:20.753203 1414 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:20.753300 kubelet[1414]: E0516 00:44:20.753287 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:20.758665 kubelet[1414]: I0516 00:44:20.758635 1414 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:44:20.759305 kubelet[1414]: I0516 00:44:20.759276 1414 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:44:20.759406 kubelet[1414]: W0516 00:44:20.759394 1414 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 16 00:44:20.760292 kubelet[1414]: I0516 00:44:20.760272 1414 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 16 00:44:20.760344 kubelet[1414]: I0516 00:44:20.760332 1414 server.go:1287] "Started kubelet" May 16 00:44:20.760527 kubelet[1414]: I0516 00:44:20.760492 1414 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:44:20.760903 kubelet[1414]: W0516 00:44:20.760880 1414 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 16 00:44:20.761009 kubelet[1414]: E0516 00:44:20.760990 1414 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 16 00:44:20.761860 kubelet[1414]: W0516 00:44:20.761830 1414 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.92" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 16 00:44:20.761965 kubelet[1414]: E0516 00:44:20.761946 1414 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.92\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 16 00:44:20.763204 kubelet[1414]: I0516 00:44:20.763168 1414 server.go:479] "Adding debug handlers to kubelet server" May 16 00:44:20.763327 kubelet[1414]: I0516 00:44:20.763263 1414 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:44:20.763553 kubelet[1414]: I0516 00:44:20.763528 1414 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:44:20.764976 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 16 00:44:20.765177 kubelet[1414]: I0516 00:44:20.765157 1414 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:44:20.765377 kubelet[1414]: I0516 00:44:20.765352 1414 volume_manager.go:297] "Starting Kubelet Volume Manager" May 16 00:44:20.765451 kubelet[1414]: I0516 00:44:20.765436 1414 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:44:20.766611 kubelet[1414]: I0516 00:44:20.766570 1414 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 16 00:44:20.766664 kubelet[1414]: I0516 00:44:20.766642 1414 reconciler.go:26] "Reconciler: start to sync state" May 16 00:44:20.766767 kubelet[1414]: E0516 00:44:20.766727 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:20.766998 kubelet[1414]: I0516 00:44:20.766974 1414 factory.go:221] Registration of the systemd container factory successfully May 16 00:44:20.767089 kubelet[1414]: I0516 00:44:20.767066 1414 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:44:20.767306 kubelet[1414]: E0516 00:44:20.767282 1414 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:44:20.768426 kubelet[1414]: I0516 00:44:20.768401 1414 factory.go:221] Registration of the containerd container factory successfully May 16 00:44:20.777258 kubelet[1414]: I0516 00:44:20.777231 1414 cpu_manager.go:221] "Starting CPU manager" policy="none" May 16 00:44:20.777258 kubelet[1414]: I0516 00:44:20.777250 1414 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 16 00:44:20.777361 kubelet[1414]: I0516 00:44:20.777271 1414 state_mem.go:36] "Initialized new in-memory state store" May 16 00:44:20.785518 kubelet[1414]: E0516 00:44:20.785464 1414 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.92\" not found" node="10.0.0.92" May 16 00:44:20.864839 kubelet[1414]: I0516 00:44:20.864756 1414 policy_none.go:49] "None policy: Start" May 16 00:44:20.864839 kubelet[1414]: I0516 00:44:20.864836 1414 memory_manager.go:186] "Starting memorymanager" policy="None" May 16 00:44:20.864839 kubelet[1414]: I0516 00:44:20.864851 1414 state_mem.go:35] "Initializing new in-memory state store" May 16 00:44:20.866888 kubelet[1414]: E0516 00:44:20.866842 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:20.870040 systemd[1]: Created slice kubepods.slice. May 16 00:44:20.873908 systemd[1]: Created slice kubepods-burstable.slice. May 16 00:44:20.876418 systemd[1]: Created slice kubepods-besteffort.slice. 
May 16 00:44:20.885577 kubelet[1414]: I0516 00:44:20.885552 1414 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:44:20.885903 kubelet[1414]: I0516 00:44:20.885876 1414 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:44:20.886006 kubelet[1414]: I0516 00:44:20.885969 1414 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:44:20.886365 kubelet[1414]: I0516 00:44:20.886346 1414 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:44:20.888179 kubelet[1414]: E0516 00:44:20.888148 1414 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 16 00:44:20.888264 kubelet[1414]: E0516 00:44:20.888195 1414 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.92\" not found" May 16 00:44:20.956067 kubelet[1414]: I0516 00:44:20.955947 1414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:44:20.957783 kubelet[1414]: I0516 00:44:20.957756 1414 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:44:20.957783 kubelet[1414]: I0516 00:44:20.957784 1414 status_manager.go:227] "Starting to sync pod status with apiserver" May 16 00:44:20.957885 kubelet[1414]: I0516 00:44:20.957804 1414 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
May 16 00:44:20.957885 kubelet[1414]: I0516 00:44:20.957813 1414 kubelet.go:2382] "Starting kubelet main sync loop" May 16 00:44:20.957885 kubelet[1414]: E0516 00:44:20.957859 1414 kubelet.go:2406] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 16 00:44:20.987616 kubelet[1414]: I0516 00:44:20.987582 1414 kubelet_node_status.go:75] "Attempting to register node" node="10.0.0.92" May 16 00:44:20.995493 kubelet[1414]: I0516 00:44:20.995465 1414 kubelet_node_status.go:78] "Successfully registered node" node="10.0.0.92" May 16 00:44:20.995570 kubelet[1414]: E0516 00:44:20.995500 1414 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"10.0.0.92\": node \"10.0.0.92\" not found" May 16 00:44:20.998781 kubelet[1414]: I0516 00:44:20.998755 1414 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 16 00:44:20.999269 env[1212]: time="2025-05-16T00:44:20.999228342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 16 00:44:20.999510 kubelet[1414]: I0516 00:44:20.999483 1414 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 16 00:44:21.005450 kubelet[1414]: E0516 00:44:21.005422 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:21.105907 kubelet[1414]: E0516 00:44:21.105863 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:21.206500 kubelet[1414]: E0516 00:44:21.206395 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:21.306514 kubelet[1414]: E0516 00:44:21.306464 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:21.380212 sudo[1310]: pam_unix(sudo:session): session closed for user root May 16 00:44:21.383645 sshd[1306]: pam_unix(sshd:session): session closed for user core May 16 00:44:21.386465 systemd-logind[1200]: Session 5 logged out. Waiting for processes to exit. May 16 00:44:21.386618 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:44:21.387143 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:33220.service: Deactivated successfully. May 16 00:44:21.388048 systemd-logind[1200]: Removed session 5. 
May 16 00:44:21.406918 kubelet[1414]: E0516 00:44:21.406879 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:21.507578 kubelet[1414]: E0516 00:44:21.507492 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:21.608144 kubelet[1414]: E0516 00:44:21.608101 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:21.666342 kubelet[1414]: I0516 00:44:21.666315 1414 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 16 00:44:21.666637 kubelet[1414]: W0516 00:44:21.666606 1414 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 00:44:21.666738 kubelet[1414]: W0516 00:44:21.666721 1414 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 00:44:21.708913 kubelet[1414]: E0516 00:44:21.708850 1414 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"10.0.0.92\" not found" May 16 00:44:21.754221 kubelet[1414]: I0516 00:44:21.754186 1414 apiserver.go:52] "Watching apiserver" May 16 00:44:21.754440 kubelet[1414]: E0516 00:44:21.754420 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:21.763427 systemd[1]: Created slice kubepods-besteffort-pod71fdcd45_154f_403a_9dd1_21b136b37567.slice. 
May 16 00:44:21.767888 kubelet[1414]: I0516 00:44:21.767859 1414 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 16 00:44:21.770940 kubelet[1414]: I0516 00:44:21.770903 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/71fdcd45-154f-403a-9dd1-21b136b37567-kube-proxy\") pod \"kube-proxy-mr96m\" (UID: \"71fdcd45-154f-403a-9dd1-21b136b37567\") " pod="kube-system/kube-proxy-mr96m" May 16 00:44:21.770940 kubelet[1414]: I0516 00:44:21.770936 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-lib-modules\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771100 kubelet[1414]: I0516 00:44:21.770958 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rztsx\" (UniqueName: \"kubernetes.io/projected/eefb1005-645e-4162-9ad0-71e103fb5ed1-kube-api-access-rztsx\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771100 kubelet[1414]: I0516 00:44:21.770997 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/71fdcd45-154f-403a-9dd1-21b136b37567-xtables-lock\") pod \"kube-proxy-mr96m\" (UID: \"71fdcd45-154f-403a-9dd1-21b136b37567\") " pod="kube-system/kube-proxy-mr96m" May 16 00:44:21.771100 kubelet[1414]: I0516 00:44:21.771025 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-run\") pod \"cilium-wj477\" (UID: 
\"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771100 kubelet[1414]: I0516 00:44:21.771041 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-bpf-maps\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771100 kubelet[1414]: I0516 00:44:21.771056 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-cgroup\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771100 kubelet[1414]: I0516 00:44:21.771071 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-host-proc-sys-net\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771285 kubelet[1414]: I0516 00:44:21.771085 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-662j9\" (UniqueName: \"kubernetes.io/projected/71fdcd45-154f-403a-9dd1-21b136b37567-kube-api-access-662j9\") pod \"kube-proxy-mr96m\" (UID: \"71fdcd45-154f-403a-9dd1-21b136b37567\") " pod="kube-system/kube-proxy-mr96m" May 16 00:44:21.771285 kubelet[1414]: I0516 00:44:21.771100 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-hostproc\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771285 kubelet[1414]: I0516 
00:44:21.771114 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cni-path\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771285 kubelet[1414]: I0516 00:44:21.771134 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-xtables-lock\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771285 kubelet[1414]: I0516 00:44:21.771149 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eefb1005-645e-4162-9ad0-71e103fb5ed1-clustermesh-secrets\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771285 kubelet[1414]: I0516 00:44:21.771163 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-config-path\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771445 kubelet[1414]: I0516 00:44:21.771177 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/71fdcd45-154f-403a-9dd1-21b136b37567-lib-modules\") pod \"kube-proxy-mr96m\" (UID: \"71fdcd45-154f-403a-9dd1-21b136b37567\") " pod="kube-system/kube-proxy-mr96m" May 16 00:44:21.771445 kubelet[1414]: I0516 00:44:21.771193 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-etc-cni-netd\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771445 kubelet[1414]: I0516 00:44:21.771210 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-host-proc-sys-kernel\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.771445 kubelet[1414]: I0516 00:44:21.771251 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eefb1005-645e-4162-9ad0-71e103fb5ed1-hubble-tls\") pod \"cilium-wj477\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") " pod="kube-system/cilium-wj477" May 16 00:44:21.793529 systemd[1]: Created slice kubepods-burstable-podeefb1005_645e_4162_9ad0_71e103fb5ed1.slice. May 16 00:44:21.872584 kubelet[1414]: I0516 00:44:21.872531 1414 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 16 00:44:22.092656 kubelet[1414]: E0516 00:44:22.092539 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:22.093674 env[1212]: time="2025-05-16T00:44:22.093637584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mr96m,Uid:71fdcd45-154f-403a-9dd1-21b136b37567,Namespace:kube-system,Attempt:0,}" May 16 00:44:22.102055 kubelet[1414]: E0516 00:44:22.102028 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:22.102546 env[1212]: time="2025-05-16T00:44:22.102495780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wj477,Uid:eefb1005-645e-4162-9ad0-71e103fb5ed1,Namespace:kube-system,Attempt:0,}" May 16 00:44:22.674920 env[1212]: time="2025-05-16T00:44:22.674876776Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:22.677080 env[1212]: time="2025-05-16T00:44:22.677049497Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:22.678706 env[1212]: time="2025-05-16T00:44:22.678126887Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:22.680578 env[1212]: time="2025-05-16T00:44:22.680549374Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:22.681761 env[1212]: time="2025-05-16T00:44:22.681733806Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:22.683699 env[1212]: time="2025-05-16T00:44:22.683660286Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:22.686220 env[1212]: time="2025-05-16T00:44:22.686185410Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:22.687203 env[1212]: time="2025-05-16T00:44:22.687177623Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:22.725766 env[1212]: time="2025-05-16T00:44:22.725673824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:22.726030 env[1212]: time="2025-05-16T00:44:22.725972365Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:22.726030 env[1212]: time="2025-05-16T00:44:22.725993269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:22.726392 env[1212]: time="2025-05-16T00:44:22.726340786Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:22.726392 env[1212]: time="2025-05-16T00:44:22.726370580Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:22.726503 env[1212]: time="2025-05-16T00:44:22.726380551Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:22.726835 env[1212]: time="2025-05-16T00:44:22.726787416Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b3dc88e6aef0944ea91847bb16cf6e559e96d5472a922adaffccc91880f2df47 pid=1479 runtime=io.containerd.runc.v2 May 16 00:44:22.726901 env[1212]: time="2025-05-16T00:44:22.726810242Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b pid=1478 runtime=io.containerd.runc.v2 May 16 00:44:22.750879 systemd[1]: Started cri-containerd-b3dc88e6aef0944ea91847bb16cf6e559e96d5472a922adaffccc91880f2df47.scope. May 16 00:44:22.753413 systemd[1]: Started cri-containerd-620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b.scope. 
May 16 00:44:22.761671 kubelet[1414]: E0516 00:44:22.758310 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:22.793568 env[1212]: time="2025-05-16T00:44:22.793524547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mr96m,Uid:71fdcd45-154f-403a-9dd1-21b136b37567,Namespace:kube-system,Attempt:0,} returns sandbox id \"b3dc88e6aef0944ea91847bb16cf6e559e96d5472a922adaffccc91880f2df47\"" May 16 00:44:22.794924 kubelet[1414]: E0516 00:44:22.794675 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:22.795633 env[1212]: time="2025-05-16T00:44:22.795605683Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\"" May 16 00:44:22.795933 env[1212]: time="2025-05-16T00:44:22.795883841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wj477,Uid:eefb1005-645e-4162-9ad0-71e103fb5ed1,Namespace:kube-system,Attempt:0,} returns sandbox id \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\"" May 16 00:44:22.796408 kubelet[1414]: E0516 00:44:22.796391 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:22.878465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2716779188.mount: Deactivated successfully. May 16 00:44:23.759164 kubelet[1414]: E0516 00:44:23.759110 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:23.785125 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3778332542.mount: Deactivated successfully. 
May 16 00:44:24.248756 env[1212]: time="2025-05-16T00:44:24.248635381Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:24.250265 env[1212]: time="2025-05-16T00:44:24.250225698Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:24.251598 env[1212]: time="2025-05-16T00:44:24.251576574Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.32.5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:24.252761 env[1212]: time="2025-05-16T00:44:24.252719161Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:9dc6553459c3319525ba4090a780db1a133d5dee68c08e07f9b9d6ba83b42a0b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:24.253283 env[1212]: time="2025-05-16T00:44:24.253237121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.5\" returns image reference \"sha256:69b7afc06f22edcae3b6a7d80cdacb488a5415fd605e89534679e5ebc41375fc\"" May 16 00:44:24.254958 env[1212]: time="2025-05-16T00:44:24.254758288Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:44:24.256451 env[1212]: time="2025-05-16T00:44:24.255997251Z" level=info msg="CreateContainer within sandbox \"b3dc88e6aef0944ea91847bb16cf6e559e96d5472a922adaffccc91880f2df47\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:44:24.269050 env[1212]: time="2025-05-16T00:44:24.269014518Z" level=info msg="CreateContainer within sandbox \"b3dc88e6aef0944ea91847bb16cf6e559e96d5472a922adaffccc91880f2df47\" for 
&ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"72729292adc74975fe8065ef2831d74fe3885b98abb17ed1ccd87c2aefd6da51\"" May 16 00:44:24.269560 env[1212]: time="2025-05-16T00:44:24.269523349Z" level=info msg="StartContainer for \"72729292adc74975fe8065ef2831d74fe3885b98abb17ed1ccd87c2aefd6da51\"" May 16 00:44:24.287350 systemd[1]: run-containerd-runc-k8s.io-72729292adc74975fe8065ef2831d74fe3885b98abb17ed1ccd87c2aefd6da51-runc.DI3sWQ.mount: Deactivated successfully. May 16 00:44:24.288676 systemd[1]: Started cri-containerd-72729292adc74975fe8065ef2831d74fe3885b98abb17ed1ccd87c2aefd6da51.scope. May 16 00:44:24.328579 env[1212]: time="2025-05-16T00:44:24.328419150Z" level=info msg="StartContainer for \"72729292adc74975fe8065ef2831d74fe3885b98abb17ed1ccd87c2aefd6da51\" returns successfully" May 16 00:44:24.759276 kubelet[1414]: E0516 00:44:24.759239 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:24.968356 kubelet[1414]: E0516 00:44:24.968328 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:24.977420 kubelet[1414]: I0516 00:44:24.977335 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mr96m" podStartSLOduration=2.518228252 podStartE2EDuration="3.977319845s" podCreationTimestamp="2025-05-16 00:44:21 +0000 UTC" firstStartedPulling="2025-05-16 00:44:22.795249316 +0000 UTC m=+2.972966290" lastFinishedPulling="2025-05-16 00:44:24.254340869 +0000 UTC m=+4.432057883" observedRunningTime="2025-05-16 00:44:24.977068152 +0000 UTC m=+5.154785167" watchObservedRunningTime="2025-05-16 00:44:24.977319845 +0000 UTC m=+5.155036899" May 16 00:44:25.759754 kubelet[1414]: E0516 00:44:25.759719 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 16 00:44:25.970429 kubelet[1414]: E0516 00:44:25.970373 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:26.760193 kubelet[1414]: E0516 00:44:26.760139 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:27.760908 kubelet[1414]: E0516 00:44:27.760865 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:28.440625 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4224619835.mount: Deactivated successfully. May 16 00:44:28.761958 kubelet[1414]: E0516 00:44:28.761824 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:29.762888 kubelet[1414]: E0516 00:44:29.762857 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:30.553498 env[1212]: time="2025-05-16T00:44:30.553436750Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:30.554684 env[1212]: time="2025-05-16T00:44:30.554634086Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:30.558802 env[1212]: time="2025-05-16T00:44:30.558757817Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 16 00:44:30.559014 env[1212]: time="2025-05-16T00:44:30.558964198Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 16 00:44:30.561714 env[1212]: time="2025-05-16T00:44:30.561667681Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:44:30.572439 env[1212]: time="2025-05-16T00:44:30.572382306Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\"" May 16 00:44:30.572863 env[1212]: time="2025-05-16T00:44:30.572832332Z" level=info msg="StartContainer for \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\"" May 16 00:44:30.588712 systemd[1]: Started cri-containerd-0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493.scope. May 16 00:44:30.625203 env[1212]: time="2025-05-16T00:44:30.625158445Z" level=info msg="StartContainer for \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\" returns successfully" May 16 00:44:30.656574 systemd[1]: cri-containerd-0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493.scope: Deactivated successfully. 
May 16 00:44:30.763796 kubelet[1414]: E0516 00:44:30.763738 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:30.796136 env[1212]: time="2025-05-16T00:44:30.796087255Z" level=info msg="shim disconnected" id=0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493 May 16 00:44:30.796136 env[1212]: time="2025-05-16T00:44:30.796136689Z" level=warning msg="cleaning up after shim disconnected" id=0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493 namespace=k8s.io May 16 00:44:30.796308 env[1212]: time="2025-05-16T00:44:30.796146375Z" level=info msg="cleaning up dead shim" May 16 00:44:30.802927 env[1212]: time="2025-05-16T00:44:30.802877004Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:44:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1769 runtime=io.containerd.runc.v2\n" May 16 00:44:30.976915 kubelet[1414]: E0516 00:44:30.976811 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:30.982569 env[1212]: time="2025-05-16T00:44:30.982527920Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:44:30.993947 env[1212]: time="2025-05-16T00:44:30.993888945Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\"" May 16 00:44:30.994669 env[1212]: time="2025-05-16T00:44:30.994644140Z" level=info msg="StartContainer for \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\"" May 16 00:44:31.008606 systemd[1]: Started 
cri-containerd-8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452.scope. May 16 00:44:31.043483 env[1212]: time="2025-05-16T00:44:31.043420893Z" level=info msg="StartContainer for \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\" returns successfully" May 16 00:44:31.061980 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:44:31.062178 systemd[1]: Stopped systemd-sysctl.service. May 16 00:44:31.062743 systemd[1]: Stopping systemd-sysctl.service... May 16 00:44:31.064581 systemd[1]: Starting systemd-sysctl.service... May 16 00:44:31.067419 systemd[1]: cri-containerd-8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452.scope: Deactivated successfully. May 16 00:44:31.072485 systemd[1]: Finished systemd-sysctl.service. May 16 00:44:31.094444 env[1212]: time="2025-05-16T00:44:31.094391352Z" level=info msg="shim disconnected" id=8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452 May 16 00:44:31.094444 env[1212]: time="2025-05-16T00:44:31.094446627Z" level=warning msg="cleaning up after shim disconnected" id=8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452 namespace=k8s.io May 16 00:44:31.094651 env[1212]: time="2025-05-16T00:44:31.094456714Z" level=info msg="cleaning up dead shim" May 16 00:44:31.101314 env[1212]: time="2025-05-16T00:44:31.101262303Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:44:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1835 runtime=io.containerd.runc.v2\n" May 16 00:44:31.568925 systemd[1]: run-containerd-runc-k8s.io-0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493-runc.LFVS1y.mount: Deactivated successfully. May 16 00:44:31.569014 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493-rootfs.mount: Deactivated successfully. 
May 16 00:44:31.763910 kubelet[1414]: E0516 00:44:31.763862 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:31.980491 kubelet[1414]: E0516 00:44:31.980255 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:31.982161 env[1212]: time="2025-05-16T00:44:31.982120754Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:44:31.992853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4041483105.mount: Deactivated successfully. May 16 00:44:31.997435 env[1212]: time="2025-05-16T00:44:31.997386992Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\"" May 16 00:44:31.998179 env[1212]: time="2025-05-16T00:44:31.998146317Z" level=info msg="StartContainer for \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\"" May 16 00:44:32.019144 systemd[1]: Started cri-containerd-f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b.scope. May 16 00:44:32.065719 systemd[1]: cri-containerd-f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b.scope: Deactivated successfully. 
May 16 00:44:32.067403 env[1212]: time="2025-05-16T00:44:32.067359534Z" level=info msg="StartContainer for \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\" returns successfully" May 16 00:44:32.204989 env[1212]: time="2025-05-16T00:44:32.204929571Z" level=info msg="shim disconnected" id=f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b May 16 00:44:32.204989 env[1212]: time="2025-05-16T00:44:32.204978200Z" level=warning msg="cleaning up after shim disconnected" id=f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b namespace=k8s.io May 16 00:44:32.204989 env[1212]: time="2025-05-16T00:44:32.204987165Z" level=info msg="cleaning up dead shim" May 16 00:44:32.212090 env[1212]: time="2025-05-16T00:44:32.212038431Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:44:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1894 runtime=io.containerd.runc.v2\n" May 16 00:44:32.568673 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b-rootfs.mount: Deactivated successfully. 
May 16 00:44:32.764549 kubelet[1414]: E0516 00:44:32.764505 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:32.983286 kubelet[1414]: E0516 00:44:32.983066 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:32.989702 env[1212]: time="2025-05-16T00:44:32.989638558Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:44:33.000777 env[1212]: time="2025-05-16T00:44:33.000731284Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\"" May 16 00:44:33.001236 env[1212]: time="2025-05-16T00:44:33.001204029Z" level=info msg="StartContainer for \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\"" May 16 00:44:33.019424 systemd[1]: Started cri-containerd-8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361.scope. May 16 00:44:33.055122 systemd[1]: cri-containerd-8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361.scope: Deactivated successfully. 
May 16 00:44:33.057621 env[1212]: time="2025-05-16T00:44:33.057573938Z" level=info msg="StartContainer for \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\" returns successfully" May 16 00:44:33.075814 env[1212]: time="2025-05-16T00:44:33.075762637Z" level=info msg="shim disconnected" id=8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361 May 16 00:44:33.075814 env[1212]: time="2025-05-16T00:44:33.075807742Z" level=warning msg="cleaning up after shim disconnected" id=8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361 namespace=k8s.io May 16 00:44:33.075814 env[1212]: time="2025-05-16T00:44:33.075816667Z" level=info msg="cleaning up dead shim" May 16 00:44:33.082506 env[1212]: time="2025-05-16T00:44:33.082461360Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:44:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1948 runtime=io.containerd.runc.v2\n" May 16 00:44:33.568748 systemd[1]: run-containerd-runc-k8s.io-8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361-runc.sDKDm3.mount: Deactivated successfully. May 16 00:44:33.568839 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361-rootfs.mount: Deactivated successfully. 
May 16 00:44:33.765340 kubelet[1414]: E0516 00:44:33.765301 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:33.986535 kubelet[1414]: E0516 00:44:33.986240 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:33.988343 env[1212]: time="2025-05-16T00:44:33.988301787Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:44:34.002763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1931017339.mount: Deactivated successfully. May 16 00:44:34.008381 env[1212]: time="2025-05-16T00:44:34.008335214Z" level=info msg="CreateContainer within sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\"" May 16 00:44:34.009144 env[1212]: time="2025-05-16T00:44:34.009113864Z" level=info msg="StartContainer for \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\"" May 16 00:44:34.022691 systemd[1]: Started cri-containerd-c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551.scope. May 16 00:44:34.061905 env[1212]: time="2025-05-16T00:44:34.061855684Z" level=info msg="StartContainer for \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\" returns successfully" May 16 00:44:34.233455 kubelet[1414]: I0516 00:44:34.232775 1414 kubelet_node_status.go:501] "Fast updating node status as it just became ready" May 16 00:44:34.315711 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 16 00:44:34.558716 kernel: Initializing XFRM netlink socket May 16 00:44:34.561701 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 16 00:44:34.765709 kubelet[1414]: E0516 00:44:34.765647 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:34.990855 kubelet[1414]: E0516 00:44:34.990751 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:35.008815 kubelet[1414]: I0516 00:44:35.008547 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wj477" podStartSLOduration=6.245045525 podStartE2EDuration="14.008526814s" podCreationTimestamp="2025-05-16 00:44:21 +0000 UTC" firstStartedPulling="2025-05-16 00:44:22.796799847 +0000 UTC m=+2.974516821" lastFinishedPulling="2025-05-16 00:44:30.560281136 +0000 UTC m=+10.737998110" observedRunningTime="2025-05-16 00:44:35.007797654 +0000 UTC m=+15.185514668" watchObservedRunningTime="2025-05-16 00:44:35.008526814 +0000 UTC m=+15.186243828" May 16 00:44:35.765870 kubelet[1414]: E0516 00:44:35.765818 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:35.992259 kubelet[1414]: E0516 00:44:35.992215 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:36.171666 systemd-networkd[1039]: cilium_host: Link UP May 16 00:44:36.171780 systemd-networkd[1039]: cilium_net: Link UP May 16 00:44:36.173970 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 16 00:44:36.174027 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 16 00:44:36.173428 systemd-networkd[1039]: cilium_net: 
Gained carrier May 16 00:44:36.173597 systemd-networkd[1039]: cilium_host: Gained carrier May 16 00:44:36.250629 systemd-networkd[1039]: cilium_vxlan: Link UP May 16 00:44:36.250635 systemd-networkd[1039]: cilium_vxlan: Gained carrier May 16 00:44:36.518831 systemd-networkd[1039]: cilium_host: Gained IPv6LL May 16 00:44:36.549702 kernel: NET: Registered PF_ALG protocol family May 16 00:44:36.766331 kubelet[1414]: E0516 00:44:36.766293 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:36.814838 systemd-networkd[1039]: cilium_net: Gained IPv6LL May 16 00:44:36.993730 kubelet[1414]: E0516 00:44:36.993673 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:37.148582 systemd-networkd[1039]: lxc_health: Link UP May 16 00:44:37.156431 systemd-networkd[1039]: lxc_health: Gained carrier May 16 00:44:37.156703 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:44:37.268009 systemd[1]: Created slice kubepods-besteffort-pod7bd52f27_97e7_4565_921d_1c4dfe8a3b88.slice. 
May 16 00:44:37.364599 kubelet[1414]: I0516 00:44:37.364536 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6tlt\" (UniqueName: \"kubernetes.io/projected/7bd52f27-97e7-4565-921d-1c4dfe8a3b88-kube-api-access-f6tlt\") pod \"nginx-deployment-7fcdb87857-lhtjb\" (UID: \"7bd52f27-97e7-4565-921d-1c4dfe8a3b88\") " pod="default/nginx-deployment-7fcdb87857-lhtjb" May 16 00:44:37.571882 env[1212]: time="2025-05-16T00:44:37.571769497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lhtjb,Uid:7bd52f27-97e7-4565-921d-1c4dfe8a3b88,Namespace:default,Attempt:0,}" May 16 00:44:37.610470 systemd-networkd[1039]: lxcc698b0d24a21: Link UP May 16 00:44:37.619713 kernel: eth0: renamed from tmpe11c7 May 16 00:44:37.628739 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:44:37.628824 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc698b0d24a21: link becomes ready May 16 00:44:37.628830 systemd-networkd[1039]: lxcc698b0d24a21: Gained carrier May 16 00:44:37.767139 kubelet[1414]: E0516 00:44:37.767005 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:37.775081 systemd-networkd[1039]: cilium_vxlan: Gained IPv6LL May 16 00:44:37.994970 kubelet[1414]: E0516 00:44:37.994888 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:38.224064 systemd-networkd[1039]: lxc_health: Gained IPv6LL May 16 00:44:38.768095 kubelet[1414]: E0516 00:44:38.768060 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:39.439161 systemd-networkd[1039]: lxcc698b0d24a21: Gained IPv6LL May 16 00:44:39.769229 kubelet[1414]: E0516 00:44:39.769101 1414 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:40.753943 kubelet[1414]: E0516 00:44:40.753904 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:40.770253 kubelet[1414]: E0516 00:44:40.770211 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:41.083789 env[1212]: time="2025-05-16T00:44:41.083423073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:41.083789 env[1212]: time="2025-05-16T00:44:41.083467888Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:41.084119 env[1212]: time="2025-05-16T00:44:41.083478731Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:41.084250 env[1212]: time="2025-05-16T00:44:41.084209656Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e11c70e9a0717e01c45957d668725174c3fa14f2a956a5620204d933e40bc598 pid=2489 runtime=io.containerd.runc.v2 May 16 00:44:41.099432 systemd[1]: Started cri-containerd-e11c70e9a0717e01c45957d668725174c3fa14f2a956a5620204d933e40bc598.scope. 
May 16 00:44:41.159939 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:41.175284 env[1212]: time="2025-05-16T00:44:41.175233618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-lhtjb,Uid:7bd52f27-97e7-4565-921d-1c4dfe8a3b88,Namespace:default,Attempt:0,} returns sandbox id \"e11c70e9a0717e01c45957d668725174c3fa14f2a956a5620204d933e40bc598\"" May 16 00:44:41.176607 env[1212]: time="2025-05-16T00:44:41.176574668Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 00:44:41.771054 kubelet[1414]: E0516 00:44:41.771005 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:42.771791 kubelet[1414]: E0516 00:44:42.771738 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:43.037821 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3989633658.mount: Deactivated successfully. 
May 16 00:44:43.772307 kubelet[1414]: E0516 00:44:43.772257 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:43.812099 kubelet[1414]: I0516 00:44:43.811925 1414 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:44:43.813144 kubelet[1414]: E0516 00:44:43.812854 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:44.006439 kubelet[1414]: E0516 00:44:44.006079 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:44:44.264582 env[1212]: time="2025-05-16T00:44:44.264092778Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:44.266125 env[1212]: time="2025-05-16T00:44:44.266086208Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:44.267743 env[1212]: time="2025-05-16T00:44:44.267711337Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:44.269400 env[1212]: time="2025-05-16T00:44:44.269368475Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:44.270985 env[1212]: time="2025-05-16T00:44:44.270948912Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image 
reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 16 00:44:44.272589 env[1212]: time="2025-05-16T00:44:44.272558477Z" level=info msg="CreateContainer within sandbox \"e11c70e9a0717e01c45957d668725174c3fa14f2a956a5620204d933e40bc598\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 16 00:44:44.282844 env[1212]: time="2025-05-16T00:44:44.282798146Z" level=info msg="CreateContainer within sandbox \"e11c70e9a0717e01c45957d668725174c3fa14f2a956a5620204d933e40bc598\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"58bdfaa0c41033dd29bca4756fc31f4e59371a74fd151566742b23a7cfc55398\"" May 16 00:44:44.283225 env[1212]: time="2025-05-16T00:44:44.283174210Z" level=info msg="StartContainer for \"58bdfaa0c41033dd29bca4756fc31f4e59371a74fd151566742b23a7cfc55398\"" May 16 00:44:44.299806 systemd[1]: Started cri-containerd-58bdfaa0c41033dd29bca4756fc31f4e59371a74fd151566742b23a7cfc55398.scope. May 16 00:44:44.334085 env[1212]: time="2025-05-16T00:44:44.334043946Z" level=info msg="StartContainer for \"58bdfaa0c41033dd29bca4756fc31f4e59371a74fd151566742b23a7cfc55398\" returns successfully" May 16 00:44:44.772830 kubelet[1414]: E0516 00:44:44.772795 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:45.018228 kubelet[1414]: I0516 00:44:45.018102 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-lhtjb" podStartSLOduration=4.922673553 podStartE2EDuration="8.01808663s" podCreationTimestamp="2025-05-16 00:44:37 +0000 UTC" firstStartedPulling="2025-05-16 00:44:41.176146684 +0000 UTC m=+21.353863698" lastFinishedPulling="2025-05-16 00:44:44.271559761 +0000 UTC m=+24.449276775" observedRunningTime="2025-05-16 00:44:45.017111497 +0000 UTC m=+25.194828511" watchObservedRunningTime="2025-05-16 00:44:45.01808663 +0000 UTC m=+25.195803644" May 16 00:44:45.773370 kubelet[1414]: 
E0516 00:44:45.773322 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:46.773977 kubelet[1414]: E0516 00:44:46.773936 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:47.775318 kubelet[1414]: E0516 00:44:47.775271 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:48.775806 kubelet[1414]: E0516 00:44:48.775757 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:48.792004 systemd[1]: Created slice kubepods-besteffort-pod052640d7_7504_4918_8d88_b22599d6c3fd.slice. May 16 00:44:48.828535 kubelet[1414]: I0516 00:44:48.828401 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zhps\" (UniqueName: \"kubernetes.io/projected/052640d7-7504-4918-8d88-b22599d6c3fd-kube-api-access-6zhps\") pod \"nfs-server-provisioner-0\" (UID: \"052640d7-7504-4918-8d88-b22599d6c3fd\") " pod="default/nfs-server-provisioner-0" May 16 00:44:48.828535 kubelet[1414]: I0516 00:44:48.828449 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/052640d7-7504-4918-8d88-b22599d6c3fd-data\") pod \"nfs-server-provisioner-0\" (UID: \"052640d7-7504-4918-8d88-b22599d6c3fd\") " pod="default/nfs-server-provisioner-0" May 16 00:44:49.095673 env[1212]: time="2025-05-16T00:44:49.095279364Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:052640d7-7504-4918-8d88-b22599d6c3fd,Namespace:default,Attempt:0,}" May 16 00:44:49.130146 systemd-networkd[1039]: lxcabd5ae6391a8: Link UP May 16 00:44:49.134710 kernel: eth0: renamed from tmp07317 May 16 00:44:49.141258 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:44:49.141348 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcabd5ae6391a8: link becomes ready May 16 00:44:49.147466 systemd-networkd[1039]: lxcabd5ae6391a8: Gained carrier May 16 00:44:49.327865 env[1212]: time="2025-05-16T00:44:49.327802856Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:44:49.328021 env[1212]: time="2025-05-16T00:44:49.327842904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:44:49.328021 env[1212]: time="2025-05-16T00:44:49.327854586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:44:49.328111 env[1212]: time="2025-05-16T00:44:49.328048625Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/073174a0367b22c1734944a5ee5d32b4a621577c0322d1464c35584c63bc832f pid=2620 runtime=io.containerd.runc.v2 May 16 00:44:49.351826 systemd[1]: Started cri-containerd-073174a0367b22c1734944a5ee5d32b4a621577c0322d1464c35584c63bc832f.scope. 
May 16 00:44:49.385605 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:44:49.402529 env[1212]: time="2025-05-16T00:44:49.402480720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:052640d7-7504-4918-8d88-b22599d6c3fd,Namespace:default,Attempt:0,} returns sandbox id \"073174a0367b22c1734944a5ee5d32b4a621577c0322d1464c35584c63bc832f\"" May 16 00:44:49.404051 env[1212]: time="2025-05-16T00:44:49.403998184Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 16 00:44:49.776885 kubelet[1414]: E0516 00:44:49.776774 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:49.943884 systemd[1]: run-containerd-runc-k8s.io-073174a0367b22c1734944a5ee5d32b4a621577c0322d1464c35584c63bc832f-runc.LwBxjo.mount: Deactivated successfully. May 16 00:44:50.256765 systemd-networkd[1039]: lxcabd5ae6391a8: Gained IPv6LL May 16 00:44:50.777236 kubelet[1414]: E0516 00:44:50.777185 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:51.548725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4058229027.mount: Deactivated successfully. 
May 16 00:44:51.777927 kubelet[1414]: E0516 00:44:51.777874 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:52.778327 kubelet[1414]: E0516 00:44:52.778279 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:53.324309 env[1212]: time="2025-05-16T00:44:53.324260843Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:53.325599 env[1212]: time="2025-05-16T00:44:53.325570885Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:53.327495 env[1212]: time="2025-05-16T00:44:53.327461298Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:53.329090 env[1212]: time="2025-05-16T00:44:53.329061065Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:44:53.329948 env[1212]: time="2025-05-16T00:44:53.329909556Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 16 00:44:53.332630 env[1212]: time="2025-05-16T00:44:53.332600572Z" level=info msg="CreateContainer within sandbox \"073174a0367b22c1734944a5ee5d32b4a621577c0322d1464c35584c63bc832f\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 
16 00:44:53.344904 env[1212]: time="2025-05-16T00:44:53.344869909Z" level=info msg="CreateContainer within sandbox \"073174a0367b22c1734944a5ee5d32b4a621577c0322d1464c35584c63bc832f\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a75818d75f1bb04b666cb29ceb87404f177207abbac5c25024f4599f8314a9d7\"" May 16 00:44:53.345429 env[1212]: time="2025-05-16T00:44:53.345398831Z" level=info msg="StartContainer for \"a75818d75f1bb04b666cb29ceb87404f177207abbac5c25024f4599f8314a9d7\"" May 16 00:44:53.362847 systemd[1]: Started cri-containerd-a75818d75f1bb04b666cb29ceb87404f177207abbac5c25024f4599f8314a9d7.scope. May 16 00:44:53.404775 env[1212]: time="2025-05-16T00:44:53.404731403Z" level=info msg="StartContainer for \"a75818d75f1bb04b666cb29ceb87404f177207abbac5c25024f4599f8314a9d7\" returns successfully" May 16 00:44:53.778897 kubelet[1414]: E0516 00:44:53.778753 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:44:54.034574 kubelet[1414]: I0516 00:44:54.034420 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.106773383 podStartE2EDuration="6.034401916s" podCreationTimestamp="2025-05-16 00:44:48 +0000 UTC" firstStartedPulling="2025-05-16 00:44:49.403670118 +0000 UTC m=+29.581387132" lastFinishedPulling="2025-05-16 00:44:53.331298691 +0000 UTC m=+33.509015665" observedRunningTime="2025-05-16 00:44:54.03408731 +0000 UTC m=+34.211804324" watchObservedRunningTime="2025-05-16 00:44:54.034401916 +0000 UTC m=+34.212118930" May 16 00:44:54.343180 systemd[1]: run-containerd-runc-k8s.io-a75818d75f1bb04b666cb29ceb87404f177207abbac5c25024f4599f8314a9d7-runc.kzTOly.mount: Deactivated successfully. 
May 16 00:44:54.779477 kubelet[1414]: E0516 00:44:54.779367 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:44:55.779796 kubelet[1414]: E0516 00:44:55.779752 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:44:56.780580 kubelet[1414]: E0516 00:44:56.780527 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:44:57.781579 kubelet[1414]: E0516 00:44:57.781537 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:44:57.980647 update_engine[1202]: I0516 00:44:57.980597 1202 update_attempter.cc:509] Updating boot flags...
May 16 00:44:58.782215 kubelet[1414]: E0516 00:44:58.782166 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:44:59.783296 kubelet[1414]: E0516 00:44:59.783254 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:00.753373 kubelet[1414]: E0516 00:45:00.753332 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:00.783811 kubelet[1414]: E0516 00:45:00.783775 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:01.784119 kubelet[1414]: E0516 00:45:01.784080 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:02.785165 kubelet[1414]: E0516 00:45:02.785127 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:03.619575 systemd[1]: Created slice kubepods-besteffort-podd572f272_35bb_4968_b8e1_fc743d2275ef.slice.
May 16 00:45:03.719479 kubelet[1414]: I0516 00:45:03.719444 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-b1d98bff-306d-4b15-be4d-f8d99d5ccad7\" (UniqueName: \"kubernetes.io/nfs/d572f272-35bb-4968-b8e1-fc743d2275ef-pvc-b1d98bff-306d-4b15-be4d-f8d99d5ccad7\") pod \"test-pod-1\" (UID: \"d572f272-35bb-4968-b8e1-fc743d2275ef\") " pod="default/test-pod-1"
May 16 00:45:03.719670 kubelet[1414]: I0516 00:45:03.719651 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bl82n\" (UniqueName: \"kubernetes.io/projected/d572f272-35bb-4968-b8e1-fc743d2275ef-kube-api-access-bl82n\") pod \"test-pod-1\" (UID: \"d572f272-35bb-4968-b8e1-fc743d2275ef\") " pod="default/test-pod-1"
May 16 00:45:03.786193 kubelet[1414]: E0516 00:45:03.786136 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:03.845699 kernel: FS-Cache: Loaded
May 16 00:45:03.872987 kernel: RPC: Registered named UNIX socket transport module.
May 16 00:45:03.873119 kernel: RPC: Registered udp transport module.
May 16 00:45:03.873141 kernel: RPC: Registered tcp transport module.
May 16 00:45:03.874103 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
May 16 00:45:03.922706 kernel: FS-Cache: Netfs 'nfs' registered for caching
May 16 00:45:04.054745 kernel: NFS: Registering the id_resolver key type
May 16 00:45:04.054863 kernel: Key type id_resolver registered
May 16 00:45:04.054896 kernel: Key type id_legacy registered
May 16 00:45:04.083441 nfsidmap[2752]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
May 16 00:45:04.087050 nfsidmap[2755]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
May 16 00:45:04.222230 env[1212]: time="2025-05-16T00:45:04.222097823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d572f272-35bb-4968-b8e1-fc743d2275ef,Namespace:default,Attempt:0,}"
May 16 00:45:04.260267 systemd-networkd[1039]: lxc2c97d60312b4: Link UP
May 16 00:45:04.276732 kernel: eth0: renamed from tmp61c79
May 16 00:45:04.283706 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
May 16 00:45:04.283776 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc2c97d60312b4: link becomes ready
May 16 00:45:04.284573 systemd-networkd[1039]: lxc2c97d60312b4: Gained carrier
May 16 00:45:04.420726 env[1212]: time="2025-05-16T00:45:04.420633675Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:45:04.420912 env[1212]: time="2025-05-16T00:45:04.420886535Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:45:04.421026 env[1212]: time="2025-05-16T00:45:04.421002303Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:45:04.421312 env[1212]: time="2025-05-16T00:45:04.421281165Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/61c79cfa02a7787814223dbaa07f627c5bda40890f24cbc142a7f83d7d38a243 pid=2792 runtime=io.containerd.runc.v2
May 16 00:45:04.432774 systemd[1]: Started cri-containerd-61c79cfa02a7787814223dbaa07f627c5bda40890f24cbc142a7f83d7d38a243.scope.
May 16 00:45:04.467832 systemd-resolved[1155]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 16 00:45:04.491271 env[1212]: time="2025-05-16T00:45:04.491160677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:d572f272-35bb-4968-b8e1-fc743d2275ef,Namespace:default,Attempt:0,} returns sandbox id \"61c79cfa02a7787814223dbaa07f627c5bda40890f24cbc142a7f83d7d38a243\""
May 16 00:45:04.493042 env[1212]: time="2025-05-16T00:45:04.493007937Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 16 00:45:04.787298 kubelet[1414]: E0516 00:45:04.787189 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:04.864690 env[1212]: time="2025-05-16T00:45:04.864628226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:45:04.905524 env[1212]: time="2025-05-16T00:45:04.905473491Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:45:04.920139 env[1212]: time="2025-05-16T00:45:04.920087922Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:45:04.961302 env[1212]: time="2025-05-16T00:45:04.961257532Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
May 16 00:45:04.961993 env[1212]: time="2025-05-16T00:45:04.961963225Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\""
May 16 00:45:04.964806 env[1212]: time="2025-05-16T00:45:04.964712194Z" level=info msg="CreateContainer within sandbox \"61c79cfa02a7787814223dbaa07f627c5bda40890f24cbc142a7f83d7d38a243\" for container &ContainerMetadata{Name:test,Attempt:0,}"
May 16 00:45:05.105596 env[1212]: time="2025-05-16T00:45:05.105519769Z" level=info msg="CreateContainer within sandbox \"61c79cfa02a7787814223dbaa07f627c5bda40890f24cbc142a7f83d7d38a243\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"828403c52aa4065420e018d23255a31975b2505ee4bf40b6782be5c7e6fa56e2\""
May 16 00:45:05.106188 env[1212]: time="2025-05-16T00:45:05.106138533Z" level=info msg="StartContainer for \"828403c52aa4065420e018d23255a31975b2505ee4bf40b6782be5c7e6fa56e2\""
May 16 00:45:05.125272 systemd[1]: Started cri-containerd-828403c52aa4065420e018d23255a31975b2505ee4bf40b6782be5c7e6fa56e2.scope.
May 16 00:45:05.159906 env[1212]: time="2025-05-16T00:45:05.159510097Z" level=info msg="StartContainer for \"828403c52aa4065420e018d23255a31975b2505ee4bf40b6782be5c7e6fa56e2\" returns successfully"
May 16 00:45:05.678801 systemd-networkd[1039]: lxc2c97d60312b4: Gained IPv6LL
May 16 00:45:05.787324 kubelet[1414]: E0516 00:45:05.787285 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:05.831991 systemd[1]: run-containerd-runc-k8s.io-828403c52aa4065420e018d23255a31975b2505ee4bf40b6782be5c7e6fa56e2-runc.HzoRN1.mount: Deactivated successfully.
May 16 00:45:06.055938 kubelet[1414]: I0516 00:45:06.055888 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=16.584933607 podStartE2EDuration="17.055869205s" podCreationTimestamp="2025-05-16 00:44:49 +0000 UTC" firstStartedPulling="2025-05-16 00:45:04.492462856 +0000 UTC m=+44.670179830" lastFinishedPulling="2025-05-16 00:45:04.963398414 +0000 UTC m=+45.141115428" observedRunningTime="2025-05-16 00:45:06.055308607 +0000 UTC m=+46.233025621" watchObservedRunningTime="2025-05-16 00:45:06.055869205 +0000 UTC m=+46.233586219"
May 16 00:45:06.787884 kubelet[1414]: E0516 00:45:06.787842 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:07.788421 kubelet[1414]: E0516 00:45:07.788379 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:08.789225 kubelet[1414]: E0516 00:45:08.789174 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:09.790184 kubelet[1414]: E0516 00:45:09.790137 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:10.790876 kubelet[1414]: E0516 00:45:10.790830 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:11.791623 kubelet[1414]: E0516 00:45:11.791579 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:12.354781 systemd[1]: run-containerd-runc-k8s.io-c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551-runc.Tu5XtA.mount: Deactivated successfully.
May 16 00:45:12.428817 env[1212]: time="2025-05-16T00:45:12.428748640Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 00:45:12.433785 env[1212]: time="2025-05-16T00:45:12.433736947Z" level=info msg="StopContainer for \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\" with timeout 2 (s)"
May 16 00:45:12.434072 env[1212]: time="2025-05-16T00:45:12.434031960Z" level=info msg="Stop container \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\" with signal terminated"
May 16 00:45:12.439221 systemd-networkd[1039]: lxc_health: Link DOWN
May 16 00:45:12.439230 systemd-networkd[1039]: lxc_health: Lost carrier
May 16 00:45:12.479112 systemd[1]: cri-containerd-c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551.scope: Deactivated successfully.
May 16 00:45:12.479432 systemd[1]: cri-containerd-c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551.scope: Consumed 6.343s CPU time.
May 16 00:45:12.495393 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551-rootfs.mount: Deactivated successfully.
May 16 00:45:12.508019 env[1212]: time="2025-05-16T00:45:12.507975274Z" level=info msg="shim disconnected" id=c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551
May 16 00:45:12.508019 env[1212]: time="2025-05-16T00:45:12.508017836Z" level=warning msg="cleaning up after shim disconnected" id=c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551 namespace=k8s.io
May 16 00:45:12.508019 env[1212]: time="2025-05-16T00:45:12.508027717Z" level=info msg="cleaning up dead shim"
May 16 00:45:12.514526 env[1212]: time="2025-05-16T00:45:12.514482769Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2925 runtime=io.containerd.runc.v2\n"
May 16 00:45:12.516817 env[1212]: time="2025-05-16T00:45:12.516775793Z" level=info msg="StopContainer for \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\" returns successfully"
May 16 00:45:12.517461 env[1212]: time="2025-05-16T00:45:12.517430023Z" level=info msg="StopPodSandbox for \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\""
May 16 00:45:12.517509 env[1212]: time="2025-05-16T00:45:12.517498546Z" level=info msg="Container to stop \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:45:12.517540 env[1212]: time="2025-05-16T00:45:12.517513587Z" level=info msg="Container to stop \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:45:12.517540 env[1212]: time="2025-05-16T00:45:12.517525707Z" level=info msg="Container to stop \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:45:12.517589 env[1212]: time="2025-05-16T00:45:12.517538148Z" level=info msg="Container to stop \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:45:12.517589 env[1212]: time="2025-05-16T00:45:12.517552709Z" level=info msg="Container to stop \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
May 16 00:45:12.519262 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b-shm.mount: Deactivated successfully.
May 16 00:45:12.525350 systemd[1]: cri-containerd-620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b.scope: Deactivated successfully.
May 16 00:45:12.542994 env[1212]: time="2025-05-16T00:45:12.542943900Z" level=info msg="shim disconnected" id=620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b
May 16 00:45:12.542994 env[1212]: time="2025-05-16T00:45:12.542991823Z" level=warning msg="cleaning up after shim disconnected" id=620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b namespace=k8s.io
May 16 00:45:12.542994 env[1212]: time="2025-05-16T00:45:12.543000463Z" level=info msg="cleaning up dead shim"
May 16 00:45:12.549448 env[1212]: time="2025-05-16T00:45:12.549397913Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2955 runtime=io.containerd.runc.v2\n"
May 16 00:45:12.549756 env[1212]: time="2025-05-16T00:45:12.549731888Z" level=info msg="TearDown network for sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" successfully"
May 16 00:45:12.549796 env[1212]: time="2025-05-16T00:45:12.549756329Z" level=info msg="StopPodSandbox for \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" returns successfully"
May 16 00:45:12.673485 kubelet[1414]: I0516 00:45:12.672801 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-lib-modules\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.673485 kubelet[1414]: I0516 00:45:12.672859 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rztsx\" (UniqueName: \"kubernetes.io/projected/eefb1005-645e-4162-9ad0-71e103fb5ed1-kube-api-access-rztsx\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.673485 kubelet[1414]: I0516 00:45:12.672888 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-hostproc\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.673485 kubelet[1414]: I0516 00:45:12.672919 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-config-path\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.673485 kubelet[1414]: I0516 00:45:12.672940 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eefb1005-645e-4162-9ad0-71e103fb5ed1-clustermesh-secrets\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.673485 kubelet[1414]: I0516 00:45:12.672943 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.673986 kubelet[1414]: I0516 00:45:12.672958 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-host-proc-sys-kernel\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.673986 kubelet[1414]: I0516 00:45:12.673000 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.673986 kubelet[1414]: I0516 00:45:12.673015 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eefb1005-645e-4162-9ad0-71e103fb5ed1-hubble-tls\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.673986 kubelet[1414]: I0516 00:45:12.673037 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-host-proc-sys-net\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.673986 kubelet[1414]: I0516 00:45:12.673054 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cni-path\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.673986 kubelet[1414]: I0516 00:45:12.673069 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-bpf-maps\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.674139 kubelet[1414]: I0516 00:45:12.673083 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-run\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.674139 kubelet[1414]: I0516 00:45:12.673097 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-cgroup\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.674139 kubelet[1414]: I0516 00:45:12.673115 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-xtables-lock\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.674139 kubelet[1414]: I0516 00:45:12.673129 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-etc-cni-netd\") pod \"eefb1005-645e-4162-9ad0-71e103fb5ed1\" (UID: \"eefb1005-645e-4162-9ad0-71e103fb5ed1\") "
May 16 00:45:12.674139 kubelet[1414]: I0516 00:45:12.673164 1414 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-lib-modules\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.674139 kubelet[1414]: I0516 00:45:12.673174 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-host-proc-sys-kernel\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.674269 kubelet[1414]: I0516 00:45:12.673194 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.674269 kubelet[1414]: I0516 00:45:12.673359 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.674269 kubelet[1414]: I0516 00:45:12.673394 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.674269 kubelet[1414]: I0516 00:45:12.673413 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cni-path" (OuterVolumeSpecName: "cni-path") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.674269 kubelet[1414]: I0516 00:45:12.673428 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.674379 kubelet[1414]: I0516 00:45:12.673450 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.674379 kubelet[1414]: I0516 00:45:12.673896 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.674379 kubelet[1414]: I0516 00:45:12.673948 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-hostproc" (OuterVolumeSpecName: "hostproc") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:12.675485 kubelet[1414]: I0516 00:45:12.675425 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 16 00:45:12.677142 kubelet[1414]: I0516 00:45:12.677112 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefb1005-645e-4162-9ad0-71e103fb5ed1-kube-api-access-rztsx" (OuterVolumeSpecName: "kube-api-access-rztsx") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "kube-api-access-rztsx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:45:12.677205 kubelet[1414]: I0516 00:45:12.677156 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eefb1005-645e-4162-9ad0-71e103fb5ed1-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 16 00:45:12.677238 kubelet[1414]: I0516 00:45:12.677214 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eefb1005-645e-4162-9ad0-71e103fb5ed1-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eefb1005-645e-4162-9ad0-71e103fb5ed1" (UID: "eefb1005-645e-4162-9ad0-71e103fb5ed1"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:45:12.773916 kubelet[1414]: I0516 00:45:12.773859 1414 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rztsx\" (UniqueName: \"kubernetes.io/projected/eefb1005-645e-4162-9ad0-71e103fb5ed1-kube-api-access-rztsx\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.773916 kubelet[1414]: I0516 00:45:12.773914 1414 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eefb1005-645e-4162-9ad0-71e103fb5ed1-hubble-tls\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.773916 kubelet[1414]: I0516 00:45:12.773925 1414 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-hostproc\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.774133 kubelet[1414]: I0516 00:45:12.773934 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-config-path\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.774133 kubelet[1414]: I0516 00:45:12.773943 1414 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eefb1005-645e-4162-9ad0-71e103fb5ed1-clustermesh-secrets\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.774133 kubelet[1414]: I0516 00:45:12.773952 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-host-proc-sys-net\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.774133 kubelet[1414]: I0516 00:45:12.773959 1414 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cni-path\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.774133 kubelet[1414]: I0516 00:45:12.773969 1414 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-etc-cni-netd\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.774133 kubelet[1414]: I0516 00:45:12.773976 1414 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-bpf-maps\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.774133 kubelet[1414]: I0516 00:45:12.773984 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-run\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.774133 kubelet[1414]: I0516 00:45:12.773992 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-cilium-cgroup\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.774302 kubelet[1414]: I0516 00:45:12.774001 1414 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eefb1005-645e-4162-9ad0-71e103fb5ed1-xtables-lock\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:12.792045 kubelet[1414]: E0516 00:45:12.792008 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:12.964112 systemd[1]: Removed slice kubepods-burstable-podeefb1005_645e_4162_9ad0_71e103fb5ed1.slice.
May 16 00:45:12.964199 systemd[1]: kubepods-burstable-podeefb1005_645e_4162_9ad0_71e103fb5ed1.slice: Consumed 6.536s CPU time.
May 16 00:45:13.059416 kubelet[1414]: I0516 00:45:13.059390 1414 scope.go:117] "RemoveContainer" containerID="c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551"
May 16 00:45:13.061960 env[1212]: time="2025-05-16T00:45:13.061921551Z" level=info msg="RemoveContainer for \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\""
May 16 00:45:13.065546 env[1212]: time="2025-05-16T00:45:13.065495143Z" level=info msg="RemoveContainer for \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\" returns successfully"
May 16 00:45:13.065722 kubelet[1414]: I0516 00:45:13.065688 1414 scope.go:117] "RemoveContainer" containerID="8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361"
May 16 00:45:13.066994 env[1212]: time="2025-05-16T00:45:13.066956366Z" level=info msg="RemoveContainer for \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\""
May 16 00:45:13.069262 env[1212]: time="2025-05-16T00:45:13.069229862Z" level=info msg="RemoveContainer for \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\" returns successfully"
May 16 00:45:13.069537 kubelet[1414]: I0516 00:45:13.069518 1414 scope.go:117] "RemoveContainer" containerID="f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b"
May 16 00:45:13.070599 env[1212]: time="2025-05-16T00:45:13.070426273Z" level=info msg="RemoveContainer for \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\""
May 16 00:45:13.072598 env[1212]: time="2025-05-16T00:45:13.072561324Z" level=info msg="RemoveContainer for \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\" returns successfully"
May 16 00:45:13.072815 kubelet[1414]: I0516 00:45:13.072769 1414 scope.go:117] "RemoveContainer" containerID="8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452"
May 16 00:45:13.074194 env[1212]: time="2025-05-16T00:45:13.074162352Z" level=info msg="RemoveContainer for \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\""
May 16 00:45:13.076227 env[1212]: time="2025-05-16T00:45:13.076186758Z" level=info msg="RemoveContainer for \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\" returns successfully"
May 16 00:45:13.076424 kubelet[1414]: I0516 00:45:13.076407 1414 scope.go:117] "RemoveContainer" containerID="0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493"
May 16 00:45:13.077562 env[1212]: time="2025-05-16T00:45:13.077532575Z" level=info msg="RemoveContainer for \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\""
May 16 00:45:13.079473 env[1212]: time="2025-05-16T00:45:13.079435656Z" level=info msg="RemoveContainer for \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\" returns successfully"
May 16 00:45:13.079704 kubelet[1414]: I0516 00:45:13.079637 1414 scope.go:117] "RemoveContainer" containerID="c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551"
May 16 00:45:13.079911 env[1212]: time="2025-05-16T00:45:13.079816272Z" level=error msg="ContainerStatus for \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\": not found"
May 16 00:45:13.080061 kubelet[1414]: E0516 00:45:13.080039 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\": not found" containerID="c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551"
May 16 00:45:13.080204 kubelet[1414]: I0516 00:45:13.080131 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551"} err="failed to get container status \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\": rpc error: code = NotFound desc = an error occurred when try to find container \"c6c8745a8a2cbde242bbb15a4ca281f7253b24700669a977cec378785aea8551\": not found"
May 16 00:45:13.080268 kubelet[1414]: I0516 00:45:13.080257 1414 scope.go:117] "RemoveContainer" containerID="8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361"
May 16 00:45:13.080508 env[1212]: time="2025-05-16T00:45:13.080450979Z" level=error msg="ContainerStatus for \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\": not found"
May 16 00:45:13.080647 kubelet[1414]: E0516 00:45:13.080624 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\": not found" containerID="8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361"
May 16 00:45:13.080752 kubelet[1414]: I0516 00:45:13.080730 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361"} err="failed to get container status \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\": rpc error: code = NotFound desc = an error occurred when try to find container \"8ec2f2c6da8df96e28609ddb83a14ca09e20bc43ecb7b0b30d66335d53c21361\": not found"
May 16 00:45:13.080825 kubelet[1414]: I0516 00:45:13.080810 1414 scope.go:117] "RemoveContainer" containerID="f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b"
May 16 00:45:13.081130 env[1212]: time="2025-05-16T00:45:13.081058805Z" level=error msg="ContainerStatus for \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\": not found"
May 16 00:45:13.081274 kubelet[1414]: E0516 00:45:13.081255 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\": not found" containerID="f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b"
May 16 00:45:13.081367 kubelet[1414]: I0516 00:45:13.081345 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b"} err="failed to get container status \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\": rpc error: code = NotFound desc = an error occurred when try to find container \"f91ed9d4c87a13ed2e2bffff04c1149084d08495b0cd6f716db7a7120199466b\": not found"
May 16 00:45:13.081426 kubelet[1414]: I0516 00:45:13.081415 1414 scope.go:117] "RemoveContainer" containerID="8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452"
May 16 00:45:13.081653 env[1212]: time="2025-05-16T00:45:13.081610909Z" level=error msg="ContainerStatus for \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\": not found"
May 16 00:45:13.081787 kubelet[1414]: E0516 00:45:13.081764 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\": not found" containerID="8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452"
May 16 00:45:13.081829 kubelet[1414]: I0516 00:45:13.081793 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452"} err="failed to get container status \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\": rpc error: code = NotFound desc = an error occurred when try to find container \"8a39a1499fb47e99e98e4dbe7a2408ab1e093e22bafa73a2a88b0875220ec452\": not found"
May 16 00:45:13.081829 kubelet[1414]: I0516 00:45:13.081810 1414 scope.go:117] "RemoveContainer" containerID="0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493"
May 16 00:45:13.082093 env[1212]: time="2025-05-16T00:45:13.082035407Z" level=error msg="ContainerStatus for \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\": not found"
May 16 00:45:13.082311 kubelet[1414]: E0516 00:45:13.082287 1414 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\": not found" containerID="0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493"
May 16 00:45:13.082417 kubelet[1414]: I0516 00:45:13.082391 1414 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493"} err="failed to get container status \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\": rpc error: code = NotFound desc = an error occurred when try to find container \"0669c6f6c2d940db220831c4a079bef8a0315a5191288b5a7fb634f8fa4d4493\": not found"
May 16 00:45:13.351041 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b-rootfs.mount: Deactivated successfully.
May 16 00:45:13.351130 systemd[1]: var-lib-kubelet-pods-eefb1005\x2d645e\x2d4162\x2d9ad0\x2d71e103fb5ed1-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drztsx.mount: Deactivated successfully.
May 16 00:45:13.351187 systemd[1]: var-lib-kubelet-pods-eefb1005\x2d645e\x2d4162\x2d9ad0\x2d71e103fb5ed1-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 16 00:45:13.351241 systemd[1]: var-lib-kubelet-pods-eefb1005\x2d645e\x2d4162\x2d9ad0\x2d71e103fb5ed1-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 16 00:45:13.792895 kubelet[1414]: E0516 00:45:13.792601 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:14.792979 kubelet[1414]: E0516 00:45:14.792923 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:14.961117 kubelet[1414]: I0516 00:45:14.961060 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eefb1005-645e-4162-9ad0-71e103fb5ed1" path="/var/lib/kubelet/pods/eefb1005-645e-4162-9ad0-71e103fb5ed1/volumes"
May 16 00:45:15.119711 kubelet[1414]: I0516 00:45:15.119653 1414 memory_manager.go:355] "RemoveStaleState removing state" podUID="eefb1005-645e-4162-9ad0-71e103fb5ed1" containerName="cilium-agent"
May 16 00:45:15.124949 systemd[1]: Created slice kubepods-burstable-pod54f273db_2c69_4246_ab51_cfd686983056.slice.
May 16 00:45:15.144713 systemd[1]: Created slice kubepods-besteffort-pod5b4c5fe9_5c21_47f7_9423_e69aa8dfe214.slice.
May 16 00:45:15.256741 kubelet[1414]: E0516 00:45:15.256670 1414 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-jdj6g lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-mwg2v" podUID="54f273db-2c69-4246-ab51-cfd686983056"
May 16 00:45:15.285750 kubelet[1414]: I0516 00:45:15.285706 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/54f273db-2c69-4246-ab51-cfd686983056-cilium-ipsec-secrets\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.285849 kubelet[1414]: I0516 00:45:15.285776 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdj6g\" (UniqueName: \"kubernetes.io/projected/54f273db-2c69-4246-ab51-cfd686983056-kube-api-access-jdj6g\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.285849 kubelet[1414]: I0516 00:45:15.285815 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-etc-cni-netd\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.285849 kubelet[1414]: I0516 00:45:15.285845 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-xtables-lock\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.285948 kubelet[1414]: I0516 00:45:15.285862 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-host-proc-sys-net\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.285948 kubelet[1414]: I0516 00:45:15.285880 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-host-proc-sys-kernel\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.285948 kubelet[1414]: I0516 00:45:15.285897 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htfdk\" (UniqueName: \"kubernetes.io/projected/5b4c5fe9-5c21-47f7-9423-e69aa8dfe214-kube-api-access-htfdk\") pod \"cilium-operator-6c4d7847fc-45rqp\" (UID: \"5b4c5fe9-5c21-47f7-9423-e69aa8dfe214\") " pod="kube-system/cilium-operator-6c4d7847fc-45rqp"
May 16 00:45:15.285948 kubelet[1414]: I0516 00:45:15.285917 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cilium-cgroup\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.285948 kubelet[1414]: I0516 00:45:15.285933 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cni-path\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.286064 kubelet[1414]: I0516 00:45:15.285957 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-lib-modules\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.286064 kubelet[1414]: I0516 00:45:15.285974 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54f273db-2c69-4246-ab51-cfd686983056-cilium-config-path\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.286064 kubelet[1414]: I0516 00:45:15.286009 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b4c5fe9-5c21-47f7-9423-e69aa8dfe214-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-45rqp\" (UID: \"5b4c5fe9-5c21-47f7-9423-e69aa8dfe214\") " pod="kube-system/cilium-operator-6c4d7847fc-45rqp"
May 16 00:45:15.286064 kubelet[1414]: I0516 00:45:15.286031 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-bpf-maps\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.286064 kubelet[1414]: I0516 00:45:15.286047 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-hostproc\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.286175 kubelet[1414]: I0516 00:45:15.286063 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/54f273db-2c69-4246-ab51-cfd686983056-clustermesh-secrets\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.286175 kubelet[1414]: I0516 00:45:15.286078 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/54f273db-2c69-4246-ab51-cfd686983056-hubble-tls\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.286175 kubelet[1414]: I0516 00:45:15.286094 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cilium-run\") pod \"cilium-mwg2v\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") " pod="kube-system/cilium-mwg2v"
May 16 00:45:15.446719 kubelet[1414]: E0516 00:45:15.446606 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:45:15.447516 env[1212]: time="2025-05-16T00:45:15.447227922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-45rqp,Uid:5b4c5fe9-5c21-47f7-9423-e69aa8dfe214,Namespace:kube-system,Attempt:0,}"
May 16 00:45:15.462197 env[1212]: time="2025-05-16T00:45:15.462130559Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 16 00:45:15.462197 env[1212]: time="2025-05-16T00:45:15.462167481Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 16 00:45:15.462358 env[1212]: time="2025-05-16T00:45:15.462177601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 16 00:45:15.462560 env[1212]: time="2025-05-16T00:45:15.462521294Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b446bb562873531949a484823a82404a5c0b21333d11ef8fa1b59bb03a495007 pid=2982 runtime=io.containerd.runc.v2
May 16 00:45:15.472508 systemd[1]: Started cri-containerd-b446bb562873531949a484823a82404a5c0b21333d11ef8fa1b59bb03a495007.scope.
May 16 00:45:15.531779 env[1212]: time="2025-05-16T00:45:15.531724481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-45rqp,Uid:5b4c5fe9-5c21-47f7-9423-e69aa8dfe214,Namespace:kube-system,Attempt:0,} returns sandbox id \"b446bb562873531949a484823a82404a5c0b21333d11ef8fa1b59bb03a495007\""
May 16 00:45:15.532763 kubelet[1414]: E0516 00:45:15.532438 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 16 00:45:15.533351 env[1212]: time="2025-05-16T00:45:15.533325020Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 16 00:45:15.794155 kubelet[1414]: E0516 00:45:15.793689 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:15.899549 kubelet[1414]: E0516 00:45:15.899517 1414 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 16 00:45:16.092000 kubelet[1414]: I0516 00:45:16.091958 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cilium-cgroup\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092000 kubelet[1414]: I0516 00:45:16.091993 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-host-proc-sys-net\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092187 kubelet[1414]: I0516 00:45:16.092036 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54f273db-2c69-4246-ab51-cfd686983056-cilium-config-path\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092187 kubelet[1414]: I0516 00:45:16.092054 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cilium-run\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092187 kubelet[1414]: I0516 00:45:16.092077 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/54f273db-2c69-4246-ab51-cfd686983056-hubble-tls\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092187 kubelet[1414]: I0516 00:45:16.092082 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.092187 kubelet[1414]: I0516 00:45:16.092093 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-xtables-lock\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092187 kubelet[1414]: I0516 00:45:16.092124 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.092330 kubelet[1414]: I0516 00:45:16.092143 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-hostproc\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092330 kubelet[1414]: I0516 00:45:16.092168 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/54f273db-2c69-4246-ab51-cfd686983056-cilium-ipsec-secrets\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092330 kubelet[1414]: I0516 00:45:16.092187 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdj6g\" (UniqueName: \"kubernetes.io/projected/54f273db-2c69-4246-ab51-cfd686983056-kube-api-access-jdj6g\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092330 kubelet[1414]: I0516 00:45:16.092203 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-lib-modules\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092330 kubelet[1414]: I0516 00:45:16.092222 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/54f273db-2c69-4246-ab51-cfd686983056-clustermesh-secrets\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092330 kubelet[1414]: I0516 00:45:16.092237 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-etc-cni-netd\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092451 kubelet[1414]: I0516 00:45:16.092257 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cni-path\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092451 kubelet[1414]: I0516 00:45:16.092275 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-host-proc-sys-kernel\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092451 kubelet[1414]: I0516 00:45:16.092288 1414 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-bpf-maps\") pod \"54f273db-2c69-4246-ab51-cfd686983056\" (UID: \"54f273db-2c69-4246-ab51-cfd686983056\") "
May 16 00:45:16.092451 kubelet[1414]: I0516 00:45:16.092319 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cilium-cgroup\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.092451 kubelet[1414]: I0516 00:45:16.092328 1414 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-xtables-lock\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.092451 kubelet[1414]: I0516 00:45:16.092145 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.092583 kubelet[1414]: I0516 00:45:16.092354 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.092583 kubelet[1414]: I0516 00:45:16.092365 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-hostproc" (OuterVolumeSpecName: "hostproc") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.092629 kubelet[1414]: I0516 00:45:16.092612 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.094643 kubelet[1414]: I0516 00:45:16.093861 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54f273db-2c69-4246-ab51-cfd686983056-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 16 00:45:16.095175 kubelet[1414]: I0516 00:45:16.095143 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f273db-2c69-4246-ab51-cfd686983056-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 16 00:45:16.095233 kubelet[1414]: I0516 00:45:16.095191 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.095233 kubelet[1414]: I0516 00:45:16.095214 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cni-path" (OuterVolumeSpecName: "cni-path") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.095288 kubelet[1414]: I0516 00:45:16.095231 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.095288 kubelet[1414]: I0516 00:45:16.095251 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 16 00:45:16.095365 kubelet[1414]: I0516 00:45:16.095338 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/54f273db-2c69-4246-ab51-cfd686983056-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 16 00:45:16.095584 kubelet[1414]: I0516 00:45:16.095562 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f273db-2c69-4246-ab51-cfd686983056-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:45:16.096083 kubelet[1414]: I0516 00:45:16.096036 1414 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54f273db-2c69-4246-ab51-cfd686983056-kube-api-access-jdj6g" (OuterVolumeSpecName: "kube-api-access-jdj6g") pod "54f273db-2c69-4246-ab51-cfd686983056" (UID: "54f273db-2c69-4246-ab51-cfd686983056"). InnerVolumeSpecName "kube-api-access-jdj6g". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 16 00:45:16.193446 kubelet[1414]: I0516 00:45:16.193390 1414 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-lib-modules\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193446 kubelet[1414]: I0516 00:45:16.193421 1414 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/54f273db-2c69-4246-ab51-cfd686983056-clustermesh-secrets\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193446 kubelet[1414]: I0516 00:45:16.193432 1414 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-hostproc\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193446 kubelet[1414]: I0516 00:45:16.193439 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/54f273db-2c69-4246-ab51-cfd686983056-cilium-ipsec-secrets\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193446 kubelet[1414]: I0516 00:45:16.193448 1414 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jdj6g\" (UniqueName: \"kubernetes.io/projected/54f273db-2c69-4246-ab51-cfd686983056-kube-api-access-jdj6g\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193446 kubelet[1414]: I0516 00:45:16.193457 1414 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cni-path\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193754 kubelet[1414]: I0516 00:45:16.193465 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-host-proc-sys-kernel\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193754 kubelet[1414]: I0516 00:45:16.193474 1414 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-bpf-maps\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193754 kubelet[1414]: I0516 00:45:16.193482 1414 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-etc-cni-netd\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193754 kubelet[1414]: I0516 00:45:16.193489 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54f273db-2c69-4246-ab51-cfd686983056-cilium-config-path\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193754 kubelet[1414]: I0516 00:45:16.193497 1414 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-cilium-run\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193754 kubelet[1414]: I0516 00:45:16.193506 1414 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/54f273db-2c69-4246-ab51-cfd686983056-hubble-tls\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.193754 kubelet[1414]: I0516 00:45:16.193514 1414 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/54f273db-2c69-4246-ab51-cfd686983056-host-proc-sys-net\") on node \"10.0.0.92\" DevicePath \"\""
May 16 00:45:16.392667 systemd[1]: var-lib-kubelet-pods-54f273db\x2d2c69\x2d4246\x2dab51\x2dcfd686983056-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djdj6g.mount: Deactivated successfully.
May 16 00:45:16.392767 systemd[1]: var-lib-kubelet-pods-54f273db\x2d2c69\x2d4246\x2dab51\x2dcfd686983056-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 16 00:45:16.392817 systemd[1]: var-lib-kubelet-pods-54f273db\x2d2c69\x2d4246\x2dab51\x2dcfd686983056-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 16 00:45:16.392865 systemd[1]: var-lib-kubelet-pods-54f273db\x2d2c69\x2d4246\x2dab51\x2dcfd686983056-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 16 00:45:16.725442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2441026090.mount: Deactivated successfully.
May 16 00:45:16.793919 kubelet[1414]: E0516 00:45:16.793858 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:45:16.964289 systemd[1]: Removed slice kubepods-burstable-pod54f273db_2c69_4246_ab51_cfd686983056.slice.
May 16 00:45:17.112262 systemd[1]: Created slice kubepods-burstable-podd3be9b25_3d38_4716_8206_c6e0b922df16.slice.
May 16 00:45:17.193292 env[1212]: time="2025-05-16T00:45:17.193217541Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:45:17.194894 env[1212]: time="2025-05-16T00:45:17.194852955Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:45:17.196392 env[1212]: time="2025-05-16T00:45:17.196360965Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:45:17.196862 env[1212]: time="2025-05-16T00:45:17.196829220Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 16 00:45:17.198947 kubelet[1414]: I0516 00:45:17.198923 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-cni-path\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.199169 env[1212]: time="2025-05-16T00:45:17.199076934Z" level=info msg="CreateContainer within sandbox \"b446bb562873531949a484823a82404a5c0b21333d11ef8fa1b59bb03a495007\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:45:17.199268 kubelet[1414]: I0516 00:45:17.199246 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-lib-modules\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.199359 kubelet[1414]: I0516 00:45:17.199344 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-xtables-lock\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.199482 kubelet[1414]: I0516 00:45:17.199465 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-cilium-run\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.199563 kubelet[1414]: I0516 00:45:17.199549 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-cilium-cgroup\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.199646 kubelet[1414]: I0516 00:45:17.199633 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-etc-cni-netd\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.199755 kubelet[1414]: I0516 00:45:17.199741 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d3be9b25-3d38-4716-8206-c6e0b922df16-clustermesh-secrets\") pod \"cilium-qgjvm\" (UID: 
\"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.199854 kubelet[1414]: I0516 00:45:17.199841 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-host-proc-sys-kernel\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.199967 kubelet[1414]: I0516 00:45:17.199953 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-hostproc\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.200077 kubelet[1414]: I0516 00:45:17.200063 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d3be9b25-3d38-4716-8206-c6e0b922df16-cilium-ipsec-secrets\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.200183 kubelet[1414]: I0516 00:45:17.200161 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-host-proc-sys-net\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.200271 kubelet[1414]: I0516 00:45:17.200259 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d3be9b25-3d38-4716-8206-c6e0b922df16-hubble-tls\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.200352 kubelet[1414]: 
I0516 00:45:17.200340 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qf7hs\" (UniqueName: \"kubernetes.io/projected/d3be9b25-3d38-4716-8206-c6e0b922df16-kube-api-access-qf7hs\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.200455 kubelet[1414]: I0516 00:45:17.200442 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d3be9b25-3d38-4716-8206-c6e0b922df16-bpf-maps\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.200557 kubelet[1414]: I0516 00:45:17.200543 1414 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d3be9b25-3d38-4716-8206-c6e0b922df16-cilium-config-path\") pod \"cilium-qgjvm\" (UID: \"d3be9b25-3d38-4716-8206-c6e0b922df16\") " pod="kube-system/cilium-qgjvm" May 16 00:45:17.210431 env[1212]: time="2025-05-16T00:45:17.210388745Z" level=info msg="CreateContainer within sandbox \"b446bb562873531949a484823a82404a5c0b21333d11ef8fa1b59bb03a495007\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"63f76ea0c86412a8cfe23ea82fe00edb8ff8980aef55aa02f76c71f439044238\"" May 16 00:45:17.210853 env[1212]: time="2025-05-16T00:45:17.210828200Z" level=info msg="StartContainer for \"63f76ea0c86412a8cfe23ea82fe00edb8ff8980aef55aa02f76c71f439044238\"" May 16 00:45:17.223948 systemd[1]: Started cri-containerd-63f76ea0c86412a8cfe23ea82fe00edb8ff8980aef55aa02f76c71f439044238.scope. 
May 16 00:45:17.298918 env[1212]: time="2025-05-16T00:45:17.298862612Z" level=info msg="StartContainer for \"63f76ea0c86412a8cfe23ea82fe00edb8ff8980aef55aa02f76c71f439044238\" returns successfully" May 16 00:45:17.423830 kubelet[1414]: E0516 00:45:17.423721 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:17.424633 env[1212]: time="2025-05-16T00:45:17.424304043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qgjvm,Uid:d3be9b25-3d38-4716-8206-c6e0b922df16,Namespace:kube-system,Attempt:0,}" May 16 00:45:17.436224 env[1212]: time="2025-05-16T00:45:17.436148252Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:45:17.436224 env[1212]: time="2025-05-16T00:45:17.436187693Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:45:17.436224 env[1212]: time="2025-05-16T00:45:17.436197653Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:45:17.436584 env[1212]: time="2025-05-16T00:45:17.436549385Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1 pid=3071 runtime=io.containerd.runc.v2 May 16 00:45:17.450434 systemd[1]: Started cri-containerd-4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1.scope. 
May 16 00:45:17.491223 env[1212]: time="2025-05-16T00:45:17.491168507Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-qgjvm,Uid:d3be9b25-3d38-4716-8206-c6e0b922df16,Namespace:kube-system,Attempt:0,} returns sandbox id \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\"" May 16 00:45:17.491983 kubelet[1414]: E0516 00:45:17.491790 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:17.493496 env[1212]: time="2025-05-16T00:45:17.493463426Z" level=info msg="CreateContainer within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:45:17.504032 env[1212]: time="2025-05-16T00:45:17.503975389Z" level=info msg="CreateContainer within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9d2f342e9faa71d9483e7f6f5235af91d464244144a8eb9a5e872ddc17923f13\"" May 16 00:45:17.504648 env[1212]: time="2025-05-16T00:45:17.504611131Z" level=info msg="StartContainer for \"9d2f342e9faa71d9483e7f6f5235af91d464244144a8eb9a5e872ddc17923f13\"" May 16 00:45:17.517506 systemd[1]: Started cri-containerd-9d2f342e9faa71d9483e7f6f5235af91d464244144a8eb9a5e872ddc17923f13.scope. May 16 00:45:17.552465 env[1212]: time="2025-05-16T00:45:17.552404097Z" level=info msg="StartContainer for \"9d2f342e9faa71d9483e7f6f5235af91d464244144a8eb9a5e872ddc17923f13\" returns successfully" May 16 00:45:17.563491 systemd[1]: cri-containerd-9d2f342e9faa71d9483e7f6f5235af91d464244144a8eb9a5e872ddc17923f13.scope: Deactivated successfully. 
May 16 00:45:17.582423 env[1212]: time="2025-05-16T00:45:17.582371930Z" level=info msg="shim disconnected" id=9d2f342e9faa71d9483e7f6f5235af91d464244144a8eb9a5e872ddc17923f13 May 16 00:45:17.582423 env[1212]: time="2025-05-16T00:45:17.582422652Z" level=warning msg="cleaning up after shim disconnected" id=9d2f342e9faa71d9483e7f6f5235af91d464244144a8eb9a5e872ddc17923f13 namespace=k8s.io May 16 00:45:17.582662 env[1212]: time="2025-05-16T00:45:17.582433652Z" level=info msg="cleaning up dead shim" May 16 00:45:17.589331 env[1212]: time="2025-05-16T00:45:17.589278488Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3156 runtime=io.containerd.runc.v2\n" May 16 00:45:17.794818 kubelet[1414]: E0516 00:45:17.794153 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:18.070088 kubelet[1414]: E0516 00:45:18.070048 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:18.071808 kubelet[1414]: E0516 00:45:18.071787 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:18.073326 env[1212]: time="2025-05-16T00:45:18.073289864Z" level=info msg="CreateContainer within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:45:18.080391 kubelet[1414]: I0516 00:45:18.080283 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-45rqp" podStartSLOduration=1.415669808 podStartE2EDuration="3.080267405s" podCreationTimestamp="2025-05-16 00:45:15 +0000 UTC" firstStartedPulling="2025-05-16 
00:45:15.532974687 +0000 UTC m=+55.710691661" lastFinishedPulling="2025-05-16 00:45:17.197572244 +0000 UTC m=+57.375289258" observedRunningTime="2025-05-16 00:45:18.080231085 +0000 UTC m=+58.257948139" watchObservedRunningTime="2025-05-16 00:45:18.080267405 +0000 UTC m=+58.257984459" May 16 00:45:18.088961 env[1212]: time="2025-05-16T00:45:18.088902570Z" level=info msg="CreateContainer within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"107d53e4ce8824e839c62aad6f9239e31d403f016c8efc2b43e0a9af4d33e3c8\"" May 16 00:45:18.089500 env[1212]: time="2025-05-16T00:45:18.089467138Z" level=info msg="StartContainer for \"107d53e4ce8824e839c62aad6f9239e31d403f016c8efc2b43e0a9af4d33e3c8\"" May 16 00:45:18.103774 systemd[1]: Started cri-containerd-107d53e4ce8824e839c62aad6f9239e31d403f016c8efc2b43e0a9af4d33e3c8.scope. May 16 00:45:18.137355 env[1212]: time="2025-05-16T00:45:18.137308550Z" level=info msg="StartContainer for \"107d53e4ce8824e839c62aad6f9239e31d403f016c8efc2b43e0a9af4d33e3c8\" returns successfully" May 16 00:45:18.146565 systemd[1]: cri-containerd-107d53e4ce8824e839c62aad6f9239e31d403f016c8efc2b43e0a9af4d33e3c8.scope: Deactivated successfully. 
May 16 00:45:18.164425 env[1212]: time="2025-05-16T00:45:18.164377421Z" level=info msg="shim disconnected" id=107d53e4ce8824e839c62aad6f9239e31d403f016c8efc2b43e0a9af4d33e3c8 May 16 00:45:18.164425 env[1212]: time="2025-05-16T00:45:18.164427542Z" level=warning msg="cleaning up after shim disconnected" id=107d53e4ce8824e839c62aad6f9239e31d403f016c8efc2b43e0a9af4d33e3c8 namespace=k8s.io May 16 00:45:18.164789 env[1212]: time="2025-05-16T00:45:18.164437022Z" level=info msg="cleaning up dead shim" May 16 00:45:18.171125 env[1212]: time="2025-05-16T00:45:18.171063478Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3219 runtime=io.containerd.runc.v2\n" May 16 00:45:18.794624 kubelet[1414]: E0516 00:45:18.794572 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:18.960645 kubelet[1414]: I0516 00:45:18.960604 1414 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54f273db-2c69-4246-ab51-cfd686983056" path="/var/lib/kubelet/pods/54f273db-2c69-4246-ab51-cfd686983056/volumes" May 16 00:45:19.074890 kubelet[1414]: E0516 00:45:19.074869 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:19.074999 kubelet[1414]: E0516 00:45:19.074903 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:19.076801 env[1212]: time="2025-05-16T00:45:19.076762505Z" level=info msg="CreateContainer within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:45:19.099696 env[1212]: time="2025-05-16T00:45:19.099648507Z" level=info msg="CreateContainer 
within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"ef2f94d1252a8867d7ffecb2b77ce33557fa07ab94f39e40fc2168e5dcf8b5cd\"" May 16 00:45:19.100388 env[1212]: time="2025-05-16T00:45:19.100361037Z" level=info msg="StartContainer for \"ef2f94d1252a8867d7ffecb2b77ce33557fa07ab94f39e40fc2168e5dcf8b5cd\"" May 16 00:45:19.114700 systemd[1]: Started cri-containerd-ef2f94d1252a8867d7ffecb2b77ce33557fa07ab94f39e40fc2168e5dcf8b5cd.scope. May 16 00:45:19.144712 env[1212]: time="2025-05-16T00:45:19.144641859Z" level=info msg="StartContainer for \"ef2f94d1252a8867d7ffecb2b77ce33557fa07ab94f39e40fc2168e5dcf8b5cd\" returns successfully" May 16 00:45:19.144694 systemd[1]: cri-containerd-ef2f94d1252a8867d7ffecb2b77ce33557fa07ab94f39e40fc2168e5dcf8b5cd.scope: Deactivated successfully. May 16 00:45:19.164761 env[1212]: time="2025-05-16T00:45:19.164705461Z" level=info msg="shim disconnected" id=ef2f94d1252a8867d7ffecb2b77ce33557fa07ab94f39e40fc2168e5dcf8b5cd May 16 00:45:19.164761 env[1212]: time="2025-05-16T00:45:19.164755942Z" level=warning msg="cleaning up after shim disconnected" id=ef2f94d1252a8867d7ffecb2b77ce33557fa07ab94f39e40fc2168e5dcf8b5cd namespace=k8s.io May 16 00:45:19.164761 env[1212]: time="2025-05-16T00:45:19.164766062Z" level=info msg="cleaning up dead shim" May 16 00:45:19.171340 env[1212]: time="2025-05-16T00:45:19.171289034Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3275 runtime=io.containerd.runc.v2\n" May 16 00:45:19.795584 kubelet[1414]: E0516 00:45:19.795544 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:20.077990 kubelet[1414]: E0516 00:45:20.077463 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:20.079529 env[1212]: time="2025-05-16T00:45:20.079484894Z" level=info msg="CreateContainer within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:45:20.095500 env[1212]: time="2025-05-16T00:45:20.095455033Z" level=info msg="CreateContainer within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c7763dfb2da6c6afb4a07d847b17dab84af3149a1d8c2435608d7302d3855d38\"" May 16 00:45:20.096194 env[1212]: time="2025-05-16T00:45:20.096167202Z" level=info msg="StartContainer for \"c7763dfb2da6c6afb4a07d847b17dab84af3149a1d8c2435608d7302d3855d38\"" May 16 00:45:20.113893 systemd[1]: Started cri-containerd-c7763dfb2da6c6afb4a07d847b17dab84af3149a1d8c2435608d7302d3855d38.scope. May 16 00:45:20.142553 systemd[1]: cri-containerd-c7763dfb2da6c6afb4a07d847b17dab84af3149a1d8c2435608d7302d3855d38.scope: Deactivated successfully. 
May 16 00:45:20.143318 env[1212]: time="2025-05-16T00:45:20.143031443Z" level=info msg="StartContainer for \"c7763dfb2da6c6afb4a07d847b17dab84af3149a1d8c2435608d7302d3855d38\" returns successfully" May 16 00:45:20.162617 env[1212]: time="2025-05-16T00:45:20.162569270Z" level=info msg="shim disconnected" id=c7763dfb2da6c6afb4a07d847b17dab84af3149a1d8c2435608d7302d3855d38 May 16 00:45:20.162617 env[1212]: time="2025-05-16T00:45:20.162613830Z" level=warning msg="cleaning up after shim disconnected" id=c7763dfb2da6c6afb4a07d847b17dab84af3149a1d8c2435608d7302d3855d38 namespace=k8s.io May 16 00:45:20.162617 env[1212]: time="2025-05-16T00:45:20.162623591Z" level=info msg="cleaning up dead shim" May 16 00:45:20.168961 env[1212]: time="2025-05-16T00:45:20.168920197Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:45:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3329 runtime=io.containerd.runc.v2\n" May 16 00:45:20.392445 systemd[1]: run-containerd-runc-k8s.io-c7763dfb2da6c6afb4a07d847b17dab84af3149a1d8c2435608d7302d3855d38-runc.98HatS.mount: Deactivated successfully. May 16 00:45:20.392541 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c7763dfb2da6c6afb4a07d847b17dab84af3149a1d8c2435608d7302d3855d38-rootfs.mount: Deactivated successfully. 
May 16 00:45:20.754086 kubelet[1414]: E0516 00:45:20.753981 1414 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:20.768170 env[1212]: time="2025-05-16T00:45:20.768119906Z" level=info msg="StopPodSandbox for \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\"" May 16 00:45:20.768282 env[1212]: time="2025-05-16T00:45:20.768227028Z" level=info msg="TearDown network for sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" successfully" May 16 00:45:20.768282 env[1212]: time="2025-05-16T00:45:20.768262588Z" level=info msg="StopPodSandbox for \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" returns successfully" May 16 00:45:20.768669 env[1212]: time="2025-05-16T00:45:20.768639434Z" level=info msg="RemovePodSandbox for \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\"" May 16 00:45:20.768732 env[1212]: time="2025-05-16T00:45:20.768673634Z" level=info msg="Forcibly stopping sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\"" May 16 00:45:20.768771 env[1212]: time="2025-05-16T00:45:20.768752035Z" level=info msg="TearDown network for sandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" successfully" May 16 00:45:20.771954 env[1212]: time="2025-05-16T00:45:20.771928199Z" level=info msg="RemovePodSandbox \"620d74bb178cad19960755942b01566cec7f2079a544f03e5562e34d71662d0b\" returns successfully" May 16 00:45:20.796497 kubelet[1414]: E0516 00:45:20.796464 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:20.900285 kubelet[1414]: E0516 00:45:20.900251 1414 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:45:21.081530 kubelet[1414]: E0516 00:45:21.081499 
1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:21.083293 env[1212]: time="2025-05-16T00:45:21.083256630Z" level=info msg="CreateContainer within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:45:21.095736 env[1212]: time="2025-05-16T00:45:21.095690595Z" level=info msg="CreateContainer within sandbox \"4838ac4778753cae216ede32aba39e7e5533bd1666c25dab364aa68acf2a75a1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9cdfa4d0beb4612fb19f5fa6846ac8e9ebf094c423f9dbe937f35e98e98fedcb\"" May 16 00:45:21.096402 env[1212]: time="2025-05-16T00:45:21.096368684Z" level=info msg="StartContainer for \"9cdfa4d0beb4612fb19f5fa6846ac8e9ebf094c423f9dbe937f35e98e98fedcb\"" May 16 00:45:21.114659 systemd[1]: Started cri-containerd-9cdfa4d0beb4612fb19f5fa6846ac8e9ebf094c423f9dbe937f35e98e98fedcb.scope. 
May 16 00:45:21.150536 env[1212]: time="2025-05-16T00:45:21.150484204Z" level=info msg="StartContainer for \"9cdfa4d0beb4612fb19f5fa6846ac8e9ebf094c423f9dbe937f35e98e98fedcb\" returns successfully" May 16 00:45:21.388796 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 16 00:45:21.796864 kubelet[1414]: E0516 00:45:21.796808 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:22.085777 kubelet[1414]: E0516 00:45:22.085754 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:22.102725 kubelet[1414]: I0516 00:45:22.102640 1414 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-qgjvm" podStartSLOduration=5.102624707 podStartE2EDuration="5.102624707s" podCreationTimestamp="2025-05-16 00:45:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:45:22.101950178 +0000 UTC m=+62.279667192" watchObservedRunningTime="2025-05-16 00:45:22.102624707 +0000 UTC m=+62.280341721" May 16 00:45:22.202006 kubelet[1414]: I0516 00:45:22.201973 1414 setters.go:602] "Node became not ready" node="10.0.0.92" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T00:45:22Z","lastTransitionTime":"2025-05-16T00:45:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 16 00:45:22.797328 kubelet[1414]: E0516 00:45:22.797249 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:23.425344 kubelet[1414]: E0516 00:45:23.425307 1414 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:23.556893 systemd[1]: run-containerd-runc-k8s.io-9cdfa4d0beb4612fb19f5fa6846ac8e9ebf094c423f9dbe937f35e98e98fedcb-runc.Vt9vG0.mount: Deactivated successfully. May 16 00:45:23.798016 kubelet[1414]: E0516 00:45:23.797904 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:24.187076 systemd-networkd[1039]: lxc_health: Link UP May 16 00:45:24.200015 systemd-networkd[1039]: lxc_health: Gained carrier May 16 00:45:24.200735 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:45:24.798650 kubelet[1414]: E0516 00:45:24.798594 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:25.424820 kubelet[1414]: E0516 00:45:25.424775 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:25.799181 kubelet[1414]: E0516 00:45:25.799057 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:26.091753 kubelet[1414]: E0516 00:45:26.091721 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:26.158814 systemd-networkd[1039]: lxc_health: Gained IPv6LL May 16 00:45:26.799697 kubelet[1414]: E0516 00:45:26.799622 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:27.093274 kubelet[1414]: E0516 00:45:27.093250 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:27.800372 kubelet[1414]: E0516 00:45:27.800335 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:27.849956 systemd[1]: run-containerd-runc-k8s.io-9cdfa4d0beb4612fb19f5fa6846ac8e9ebf094c423f9dbe937f35e98e98fedcb-runc.cSQOq9.mount: Deactivated successfully. May 16 00:45:27.959199 kubelet[1414]: E0516 00:45:27.959171 1414 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:45:28.801188 kubelet[1414]: E0516 00:45:28.801144 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:29.802009 kubelet[1414]: E0516 00:45:29.801972 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:29.980833 systemd[1]: run-containerd-runc-k8s.io-9cdfa4d0beb4612fb19f5fa6846ac8e9ebf094c423f9dbe937f35e98e98fedcb-runc.UQXtDo.mount: Deactivated successfully. May 16 00:45:30.803413 kubelet[1414]: E0516 00:45:30.803368 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:45:31.804219 kubelet[1414]: E0516 00:45:31.804180 1414 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"