May 16 00:34:36.733028 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 16 00:34:36.733050 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 23:21:39 -00 2025
May 16 00:34:36.733058 kernel: efi: EFI v2.70 by EDK II
May 16 00:34:36.733064 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 16 00:34:36.733069 kernel: random: crng init done
May 16 00:34:36.733075 kernel: ACPI: Early table checksum verification disabled
May 16 00:34:36.733081 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 16 00:34:36.733089 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 16 00:34:36.733094 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:34:36.733100 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:34:36.733106 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:34:36.733111 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:34:36.733117 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:34:36.733122 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:34:36.733130 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:34:36.733137 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:34:36.733143 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 16 00:34:36.733149 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 16 00:34:36.733155 kernel: NUMA: Failed to initialise from firmware
May 16 00:34:36.733161 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:34:36.733167 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff]
May 16 00:34:36.733173 kernel: Zone ranges:
May 16 00:34:36.733179 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:34:36.733186 kernel: DMA32 empty
May 16 00:34:36.733192 kernel: Normal empty
May 16 00:34:36.733198 kernel: Movable zone start for each node
May 16 00:34:36.733204 kernel: Early memory node ranges
May 16 00:34:36.733209 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 16 00:34:36.733215 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 16 00:34:36.733221 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 16 00:34:36.733227 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 16 00:34:36.733233 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 16 00:34:36.733239 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 16 00:34:36.733245 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 16 00:34:36.733251 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 16 00:34:36.733258 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 16 00:34:36.733264 kernel: psci: probing for conduit method from ACPI.
May 16 00:34:36.733270 kernel: psci: PSCIv1.1 detected in firmware.
May 16 00:34:36.733276 kernel: psci: Using standard PSCI v0.2 function IDs
May 16 00:34:36.733282 kernel: psci: Trusted OS migration not required
May 16 00:34:36.733290 kernel: psci: SMC Calling Convention v1.1
May 16 00:34:36.733297 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 16 00:34:36.733304 kernel: ACPI: SRAT not present
May 16 00:34:36.733311 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 16 00:34:36.733317 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 16 00:34:36.733323 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 16 00:34:36.733330 kernel: Detected PIPT I-cache on CPU0
May 16 00:34:36.733343 kernel: CPU features: detected: GIC system register CPU interface
May 16 00:34:36.733350 kernel: CPU features: detected: Hardware dirty bit management
May 16 00:34:36.733356 kernel: CPU features: detected: Spectre-v4
May 16 00:34:36.733363 kernel: CPU features: detected: Spectre-BHB
May 16 00:34:36.733370 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 16 00:34:36.733376 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 16 00:34:36.733383 kernel: CPU features: detected: ARM erratum 1418040
May 16 00:34:36.733389 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 16 00:34:36.733396 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 16 00:34:36.733402 kernel: Policy zone: DMA
May 16 00:34:36.733409 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8
May 16 00:34:36.733416 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 16 00:34:36.733422 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 16 00:34:36.733428 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 16 00:34:36.733435 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 16 00:34:36.733442 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved)
May 16 00:34:36.733449 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 16 00:34:36.733455 kernel: trace event string verifier disabled
May 16 00:34:36.733461 kernel: rcu: Preemptible hierarchical RCU implementation.
May 16 00:34:36.733468 kernel: rcu: RCU event tracing is enabled.
May 16 00:34:36.733475 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 16 00:34:36.733481 kernel: Trampoline variant of Tasks RCU enabled.
May 16 00:34:36.733487 kernel: Tracing variant of Tasks RCU enabled.
May 16 00:34:36.733494 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 16 00:34:36.733500 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 16 00:34:36.733507 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 16 00:34:36.733514 kernel: GICv3: 256 SPIs implemented
May 16 00:34:36.733520 kernel: GICv3: 0 Extended SPIs implemented
May 16 00:34:36.733526 kernel: GICv3: Distributor has no Range Selector support
May 16 00:34:36.733533 kernel: Root IRQ handler: gic_handle_irq
May 16 00:34:36.733539 kernel: GICv3: 16 PPIs implemented
May 16 00:34:36.733545 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 16 00:34:36.733551 kernel: ACPI: SRAT not present
May 16 00:34:36.733557 kernel: ITS [mem 0x08080000-0x0809ffff]
May 16 00:34:36.733564 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 16 00:34:36.733570 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 16 00:34:36.733576 kernel: GICv3: using LPI property table @0x00000000400d0000
May 16 00:34:36.733583 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 16 00:34:36.733590 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:34:36.733597 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 16 00:34:36.733604 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 16 00:34:36.733610 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 16 00:34:36.733616 kernel: arm-pv: using stolen time PV
May 16 00:34:36.733623 kernel: Console: colour dummy device 80x25
May 16 00:34:36.733629 kernel: ACPI: Core revision 20210730
May 16 00:34:36.733636 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 16 00:34:36.733643 kernel: pid_max: default: 32768 minimum: 301
May 16 00:34:36.733649 kernel: LSM: Security Framework initializing
May 16 00:34:36.733657 kernel: SELinux: Initializing.
May 16 00:34:36.733663 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:34:36.733670 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 16 00:34:36.733676 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 16 00:34:36.733682 kernel: rcu: Hierarchical SRCU implementation.
May 16 00:34:36.733689 kernel: Platform MSI: ITS@0x8080000 domain created
May 16 00:34:36.733695 kernel: PCI/MSI: ITS@0x8080000 domain created
May 16 00:34:36.733702 kernel: Remapping and enabling EFI services.
May 16 00:34:36.733708 kernel: smp: Bringing up secondary CPUs ...
May 16 00:34:36.733715 kernel: Detected PIPT I-cache on CPU1
May 16 00:34:36.733722 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 16 00:34:36.733729 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 16 00:34:36.733735 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:34:36.733742 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 16 00:34:36.733748 kernel: Detected PIPT I-cache on CPU2
May 16 00:34:36.733755 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 16 00:34:36.733761 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 16 00:34:36.733768 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:34:36.733774 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 16 00:34:36.733782 kernel: Detected PIPT I-cache on CPU3
May 16 00:34:36.733789 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 16 00:34:36.733795 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 16 00:34:36.733802 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 16 00:34:36.733812 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 16 00:34:36.733820 kernel: smp: Brought up 1 node, 4 CPUs
May 16 00:34:36.733827 kernel: SMP: Total of 4 processors activated.
May 16 00:34:36.733834 kernel: CPU features: detected: 32-bit EL0 Support
May 16 00:34:36.733841 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 16 00:34:36.733847 kernel: CPU features: detected: Common not Private translations
May 16 00:34:36.733854 kernel: CPU features: detected: CRC32 instructions
May 16 00:34:36.733861 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 16 00:34:36.733869 kernel: CPU features: detected: LSE atomic instructions
May 16 00:34:36.733882 kernel: CPU features: detected: Privileged Access Never
May 16 00:34:36.733889 kernel: CPU features: detected: RAS Extension Support
May 16 00:34:36.733896 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 16 00:34:36.733903 kernel: CPU: All CPU(s) started at EL1
May 16 00:34:36.733911 kernel: alternatives: patching kernel code
May 16 00:34:36.733918 kernel: devtmpfs: initialized
May 16 00:34:36.733925 kernel: KASLR enabled
May 16 00:34:36.733932 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 16 00:34:36.733939 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 16 00:34:36.733946 kernel: pinctrl core: initialized pinctrl subsystem
May 16 00:34:36.733952 kernel: SMBIOS 3.0.0 present.
May 16 00:34:36.733959 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 16 00:34:36.733966 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 16 00:34:36.733974 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 16 00:34:36.733982 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 16 00:34:36.733988 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 16 00:34:36.733995 kernel: audit: initializing netlink subsys (disabled)
May 16 00:34:36.734002 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
May 16 00:34:36.734009 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 16 00:34:36.734016 kernel: cpuidle: using governor menu
May 16 00:34:36.734023 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 16 00:34:36.734029 kernel: ASID allocator initialised with 32768 entries
May 16 00:34:36.734038 kernel: ACPI: bus type PCI registered
May 16 00:34:36.734044 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 16 00:34:36.734051 kernel: Serial: AMBA PL011 UART driver
May 16 00:34:36.734058 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 16 00:34:36.734065 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 16 00:34:36.734072 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 16 00:34:36.734078 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 16 00:34:36.734085 kernel: cryptd: max_cpu_qlen set to 1000
May 16 00:34:36.734092 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 16 00:34:36.734100 kernel: ACPI: Added _OSI(Module Device)
May 16 00:34:36.734107 kernel: ACPI: Added _OSI(Processor Device)
May 16 00:34:36.734113 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 16 00:34:36.734120 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 16 00:34:36.734127 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 16 00:34:36.734134 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 16 00:34:36.734140 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 16 00:34:36.734147 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 16 00:34:36.734154 kernel: ACPI: Interpreter enabled
May 16 00:34:36.734162 kernel: ACPI: Using GIC for interrupt routing
May 16 00:34:36.734169 kernel: ACPI: MCFG table detected, 1 entries
May 16 00:34:36.734176 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 16 00:34:36.734183 kernel: printk: console [ttyAMA0] enabled
May 16 00:34:36.734190 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 16 00:34:36.734322 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 16 00:34:36.734398 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 16 00:34:36.734461 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 16 00:34:36.734521 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 16 00:34:36.734580 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 16 00:34:36.734589 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 16 00:34:36.734597 kernel: PCI host bridge to bus 0000:00
May 16 00:34:36.734662 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 16 00:34:36.734728 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 16 00:34:36.734783 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 16 00:34:36.739434 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 16 00:34:36.739581 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 16 00:34:36.739663 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 16 00:34:36.739728 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 16 00:34:36.740405 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 16 00:34:36.740472 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:34:36.740539 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 16 00:34:36.740600 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 16 00:34:36.740669 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 16 00:34:36.740728 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 16 00:34:36.740786 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 16 00:34:36.740846 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 16 00:34:36.740856 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 16 00:34:36.740864 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 16 00:34:36.740885 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 16 00:34:36.740893 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 16 00:34:36.740900 kernel: iommu: Default domain type: Translated
May 16 00:34:36.740907 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 16 00:34:36.740914 kernel: vgaarb: loaded
May 16 00:34:36.740921 kernel: pps_core: LinuxPPS API ver. 1 registered
May 16 00:34:36.740930 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 16 00:34:36.740937 kernel: PTP clock support registered
May 16 00:34:36.740944 kernel: Registered efivars operations
May 16 00:34:36.740953 kernel: clocksource: Switched to clocksource arch_sys_counter
May 16 00:34:36.740960 kernel: VFS: Disk quotas dquot_6.6.0
May 16 00:34:36.740970 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 16 00:34:36.740976 kernel: pnp: PnP ACPI init
May 16 00:34:36.741047 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 16 00:34:36.741058 kernel: pnp: PnP ACPI: found 1 devices
May 16 00:34:36.741065 kernel: NET: Registered PF_INET protocol family
May 16 00:34:36.741073 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 16 00:34:36.741081 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 16 00:34:36.741088 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 16 00:34:36.741095 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 16 00:34:36.741102 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 16 00:34:36.741109 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 16 00:34:36.741119 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:34:36.741127 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 16 00:34:36.741134 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 16 00:34:36.741141 kernel: PCI: CLS 0 bytes, default 64
May 16 00:34:36.741149 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 16 00:34:36.741156 kernel: kvm [1]: HYP mode not available
May 16 00:34:36.741163 kernel: Initialise system trusted keyrings
May 16 00:34:36.741170 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 16 00:34:36.741177 kernel: Key type asymmetric registered
May 16 00:34:36.741186 kernel: Asymmetric key parser 'x509' registered
May 16 00:34:36.741193 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 16 00:34:36.741200 kernel: io scheduler mq-deadline registered
May 16 00:34:36.741207 kernel: io scheduler kyber registered
May 16 00:34:36.741215 kernel: io scheduler bfq registered
May 16 00:34:36.741222 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 16 00:34:36.741228 kernel: ACPI: button: Power Button [PWRB]
May 16 00:34:36.741236 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 16 00:34:36.741305 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 16 00:34:36.741315 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 16 00:34:36.741322 kernel: thunder_xcv, ver 1.0
May 16 00:34:36.741329 kernel: thunder_bgx, ver 1.0
May 16 00:34:36.741344 kernel: nicpf, ver 1.0
May 16 00:34:36.741353 kernel: nicvf, ver 1.0
May 16 00:34:36.741429 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 16 00:34:36.741493 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:34:36 UTC (1747355676)
May 16 00:34:36.741503 kernel: hid: raw HID events driver (C) Jiri Kosina
May 16 00:34:36.741512 kernel: NET: Registered PF_INET6 protocol family
May 16 00:34:36.741519 kernel: Segment Routing with IPv6
May 16 00:34:36.741526 kernel: In-situ OAM (IOAM) with IPv6
May 16 00:34:36.741533 kernel: NET: Registered PF_PACKET protocol family
May 16 00:34:36.741544 kernel: Key type dns_resolver registered
May 16 00:34:36.741551 kernel: registered taskstats version 1
May 16 00:34:36.741558 kernel: Loading compiled-in X.509 certificates
May 16 00:34:36.741567 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 2793d535c1de6f1789b22ef06bd5666144f4eeb2'
May 16 00:34:36.741574 kernel: Key type .fscrypt registered
May 16 00:34:36.741581 kernel: Key type fscrypt-provisioning registered
May 16 00:34:36.741590 kernel: ima: No TPM chip found, activating TPM-bypass!
May 16 00:34:36.741596 kernel: ima: Allocated hash algorithm: sha1
May 16 00:34:36.741603 kernel: ima: No architecture policies found
May 16 00:34:36.741612 kernel: clk: Disabling unused clocks
May 16 00:34:36.741618 kernel: Freeing unused kernel memory: 36480K
May 16 00:34:36.741625 kernel: Run /init as init process
May 16 00:34:36.741632 kernel: with arguments:
May 16 00:34:36.741639 kernel: /init
May 16 00:34:36.741646 kernel: with environment:
May 16 00:34:36.741652 kernel: HOME=/
May 16 00:34:36.741659 kernel: TERM=linux
May 16 00:34:36.741666 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 16 00:34:36.741676 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 16 00:34:36.741685 systemd[1]: Detected virtualization kvm.
May 16 00:34:36.741692 systemd[1]: Detected architecture arm64.
May 16 00:34:36.741700 systemd[1]: Running in initrd.
May 16 00:34:36.741707 systemd[1]: No hostname configured, using default hostname.
May 16 00:34:36.741714 systemd[1]: Hostname set to .
May 16 00:34:36.741722 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:34:36.741731 systemd[1]: Queued start job for default target initrd.target.
May 16 00:34:36.741739 systemd[1]: Started systemd-ask-password-console.path.
May 16 00:34:36.741746 systemd[1]: Reached target cryptsetup.target.
May 16 00:34:36.741753 systemd[1]: Reached target paths.target.
May 16 00:34:36.741760 systemd[1]: Reached target slices.target.
May 16 00:34:36.741768 systemd[1]: Reached target swap.target.
May 16 00:34:36.741775 systemd[1]: Reached target timers.target.
May 16 00:34:36.741783 systemd[1]: Listening on iscsid.socket.
May 16 00:34:36.741791 systemd[1]: Listening on iscsiuio.socket.
May 16 00:34:36.741799 systemd[1]: Listening on systemd-journald-audit.socket.
May 16 00:34:36.741807 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 16 00:34:36.741814 systemd[1]: Listening on systemd-journald.socket.
May 16 00:34:36.741821 systemd[1]: Listening on systemd-networkd.socket.
May 16 00:34:36.741829 systemd[1]: Listening on systemd-udevd-control.socket.
May 16 00:34:36.741836 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 16 00:34:36.741844 systemd[1]: Reached target sockets.target.
May 16 00:34:36.741852 systemd[1]: Starting kmod-static-nodes.service...
May 16 00:34:36.741860 systemd[1]: Finished network-cleanup.service.
May 16 00:34:36.741867 systemd[1]: Starting systemd-fsck-usr.service...
May 16 00:34:36.741882 systemd[1]: Starting systemd-journald.service...
May 16 00:34:36.741890 systemd[1]: Starting systemd-modules-load.service...
May 16 00:34:36.741897 systemd[1]: Starting systemd-resolved.service...
May 16 00:34:36.741905 systemd[1]: Starting systemd-vconsole-setup.service...
May 16 00:34:36.741912 systemd[1]: Finished kmod-static-nodes.service.
May 16 00:34:36.741919 systemd[1]: Finished systemd-fsck-usr.service.
May 16 00:34:36.741928 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 16 00:34:36.741936 systemd[1]: Finished systemd-vconsole-setup.service.
May 16 00:34:36.741943 systemd[1]: Starting dracut-cmdline-ask.service...
May 16 00:34:36.741951 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 16 00:34:36.741959 kernel: audit: type=1130 audit(1747355676.734:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.741969 systemd-journald[289]: Journal started
May 16 00:34:36.742013 systemd-journald[289]: Runtime Journal (/run/log/journal/84b10aa9238a414fbbe24a68fa2e1b8e) is 6.0M, max 48.7M, 42.6M free.
May 16 00:34:36.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.728531 systemd-modules-load[290]: Inserted module 'overlay'
May 16 00:34:36.745260 systemd[1]: Started systemd-journald.service.
May 16 00:34:36.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.749164 kernel: audit: type=1130 audit(1747355676.746:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.751378 systemd[1]: Finished dracut-cmdline-ask.service.
May 16 00:34:36.754590 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 16 00:34:36.753832 systemd-resolved[291]: Positive Trust Anchors:
May 16 00:34:36.753847 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:34:36.753885 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 16 00:34:36.765059 kernel: Bridge firewalling registered
May 16 00:34:36.765081 kernel: audit: type=1130 audit(1747355676.761:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.758032 systemd-resolved[291]: Defaulting to hostname 'linux'.
May 16 00:34:36.768742 kernel: audit: type=1130 audit(1747355676.764:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.762221 systemd[1]: Started systemd-resolved.service.
May 16 00:34:36.762604 systemd-modules-load[290]: Inserted module 'br_netfilter'
May 16 00:34:36.765785 systemd[1]: Reached target nss-lookup.target.
May 16 00:34:36.770301 systemd[1]: Starting dracut-cmdline.service...
May 16 00:34:36.774898 kernel: SCSI subsystem initialized
May 16 00:34:36.779049 dracut-cmdline[308]: dracut-dracut-053
May 16 00:34:36.781221 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8
May 16 00:34:36.786697 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 16 00:34:36.786718 kernel: device-mapper: uevent: version 1.0.3
May 16 00:34:36.786727 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 16 00:34:36.790158 systemd-modules-load[290]: Inserted module 'dm_multipath'
May 16 00:34:36.790985 systemd[1]: Finished systemd-modules-load.service.
May 16 00:34:36.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.792646 systemd[1]: Starting systemd-sysctl.service...
May 16 00:34:36.796303 kernel: audit: type=1130 audit(1747355676.790:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.800593 systemd[1]: Finished systemd-sysctl.service.
May 16 00:34:36.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.804897 kernel: audit: type=1130 audit(1747355676.801:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.850903 kernel: Loading iSCSI transport class v2.0-870.
May 16 00:34:36.862900 kernel: iscsi: registered transport (tcp)
May 16 00:34:36.879187 kernel: iscsi: registered transport (qla4xxx)
May 16 00:34:36.879212 kernel: QLogic iSCSI HBA Driver
May 16 00:34:36.913187 systemd[1]: Finished dracut-cmdline.service.
May 16 00:34:36.916934 kernel: audit: type=1130 audit(1747355676.913:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:36.914870 systemd[1]: Starting dracut-pre-udev.service...
May 16 00:34:36.956903 kernel: raid6: neonx8 gen() 13744 MB/s May 16 00:34:36.976894 kernel: raid6: neonx8 xor() 6442 MB/s May 16 00:34:36.993894 kernel: raid6: neonx4 gen() 13552 MB/s May 16 00:34:37.010896 kernel: raid6: neonx4 xor() 11159 MB/s May 16 00:34:37.027890 kernel: raid6: neonx2 gen() 12958 MB/s May 16 00:34:37.044897 kernel: raid6: neonx2 xor() 10636 MB/s May 16 00:34:37.061897 kernel: raid6: neonx1 gen() 10596 MB/s May 16 00:34:37.078892 kernel: raid6: neonx1 xor() 8788 MB/s May 16 00:34:37.095888 kernel: raid6: int64x8 gen() 6269 MB/s May 16 00:34:37.112898 kernel: raid6: int64x8 xor() 3539 MB/s May 16 00:34:37.129899 kernel: raid6: int64x4 gen() 7208 MB/s May 16 00:34:37.146896 kernel: raid6: int64x4 xor() 3847 MB/s May 16 00:34:37.163897 kernel: raid6: int64x2 gen() 6145 MB/s May 16 00:34:37.180896 kernel: raid6: int64x2 xor() 3317 MB/s May 16 00:34:37.197897 kernel: raid6: int64x1 gen() 5039 MB/s May 16 00:34:37.215082 kernel: raid6: int64x1 xor() 2643 MB/s May 16 00:34:37.215097 kernel: raid6: using algorithm neonx8 gen() 13744 MB/s May 16 00:34:37.215106 kernel: raid6: .... xor() 6442 MB/s, rmw enabled May 16 00:34:37.216295 kernel: raid6: using neon recovery algorithm May 16 00:34:37.227305 kernel: xor: measuring software checksum speed May 16 00:34:37.227329 kernel: 8regs : 17209 MB/sec May 16 00:34:37.228026 kernel: 32regs : 19937 MB/sec May 16 00:34:37.229380 kernel: arm64_neon : 27841 MB/sec May 16 00:34:37.229393 kernel: xor: using function: arm64_neon (27841 MB/sec) May 16 00:34:37.285896 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 16 00:34:37.296057 systemd[1]: Finished dracut-pre-udev.service. May 16 00:34:37.300806 kernel: audit: type=1130 audit(1747355677.296:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:34:37.300829 kernel: audit: type=1334 audit(1747355677.298:10): prog-id=7 op=LOAD May 16 00:34:37.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:37.298000 audit: BPF prog-id=7 op=LOAD May 16 00:34:37.299000 audit: BPF prog-id=8 op=LOAD May 16 00:34:37.301206 systemd[1]: Starting systemd-udevd.service... May 16 00:34:37.317365 systemd-udevd[492]: Using default interface naming scheme 'v252'. May 16 00:34:37.320866 systemd[1]: Started systemd-udevd.service. May 16 00:34:37.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:37.325076 systemd[1]: Starting dracut-pre-trigger.service... May 16 00:34:37.336469 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation May 16 00:34:37.366258 systemd[1]: Finished dracut-pre-trigger.service. May 16 00:34:37.367000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:37.368069 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:34:37.409640 systemd[1]: Finished systemd-udev-trigger.service. May 16 00:34:37.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:37.449427 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 00:34:37.458715 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. 
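The raid6 lines above show the kernel timing every candidate gen()/xor() implementation and keeping the fastest; its conclusion ("using algorithm neonx8 gen() 13744 MB/s") is simply an argmax over the measured throughputs. A sketch of that selection, fed with the numbers from this log:

```shell
# Pick the fastest raid6 gen() implementation, exactly as the kernel's
# benchmark loop does: keep whichever candidate measured the highest MB/s.
best=""
best_mbs=0
while read -r name mbs; do
  if [ "$mbs" -gt "$best_mbs" ]; then
    best="$name"
    best_mbs="$mbs"
  fi
done <<EOF
neonx8 13744
neonx4 13552
neonx2 12958
neonx1 10596
int64x8 6269
int64x4 7208
int64x2 6145
int64x1 5039
EOF
echo "raid6: using algorithm $best gen() $best_mbs MB/s"
```

The same pattern explains the later `xor: using function: arm64_neon (27841 MB/sec)` line: each checksum routine is benchmarked and the fastest wins.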
May 16 00:34:37.458740 kernel: GPT:9289727 != 19775487 May 16 00:34:37.458749 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:34:37.458758 kernel: GPT:9289727 != 19775487 May 16 00:34:37.458768 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:34:37.458776 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:34:37.472904 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (537) May 16 00:34:37.474283 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 16 00:34:37.477182 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 16 00:34:37.478169 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 16 00:34:37.484287 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 16 00:34:37.487840 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:34:37.489624 systemd[1]: Starting disk-uuid.service... May 16 00:34:37.498895 disk-uuid[563]: Primary Header is updated. May 16 00:34:37.498895 disk-uuid[563]: Secondary Entries is updated. May 16 00:34:37.498895 disk-uuid[563]: Secondary Header is updated. May 16 00:34:37.502900 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:34:37.513908 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:34:38.513908 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:34:38.513958 disk-uuid[564]: The operation has completed successfully. May 16 00:34:38.545666 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:34:38.545760 systemd[1]: Finished disk-uuid.service. May 16 00:34:38.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:34:38.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.561047 systemd[1]: Starting verity-setup.service... May 16 00:34:38.580864 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 16 00:34:38.613702 systemd[1]: Found device dev-mapper-usr.device. May 16 00:34:38.615945 systemd[1]: Mounting sysusr-usr.mount... May 16 00:34:38.618649 systemd[1]: Finished verity-setup.service. May 16 00:34:38.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.672898 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 16 00:34:38.673553 systemd[1]: Mounted sysusr-usr.mount. May 16 00:34:38.674428 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 16 00:34:38.675230 systemd[1]: Starting ignition-setup.service... May 16 00:34:38.677912 systemd[1]: Starting parse-ip-for-networkd.service... May 16 00:34:38.684554 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:34:38.684589 kernel: BTRFS info (device vda6): using free space tree May 16 00:34:38.684600 kernel: BTRFS info (device vda6): has skinny extents May 16 00:34:38.693620 systemd[1]: mnt-oem.mount: Deactivated successfully. May 16 00:34:38.701203 systemd[1]: Finished ignition-setup.service. May 16 00:34:38.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.703309 systemd[1]: Starting ignition-fetch-offline.service... 
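The GPT complaints earlier in the log ("Alternate GPT header not at the end of the disk", 9289727 != 19775487) are typical when a smaller disk image is written to a larger virtual disk: the backup GPT header still sits at the image's old last sector instead of the disk's. The kernel suggests GNU Parted; the arithmetic behind the mismatch, with a common repair command noted (the repair itself modifies the disk and needs root plus gdisk's `sgdisk`), can be sketched as:

```shell
# Values from the log: a 19775488-sector virtio disk with 512-byte sectors.
sectors=19775488
expected_alt_lba=$((sectors - 1))   # last LBA, where the backup GPT header belongs
stale_alt_lba=9289727               # where the image's backup header actually is
echo "backup GPT header at LBA $stale_alt_lba, expected at LBA $expected_alt_lba"
# Typical repair, relocating the backup header/table to the end of the disk
# (commented out here because it writes to the device and requires root):
#   sgdisk --move-second-header /dev/vda
```

On Flatcar this is usually harmless at boot time because the OS later grows the root partition to fill the disk, which rewrites the backup header anyway.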
May 16 00:34:38.788209 systemd[1]: Finished parse-ip-for-networkd.service. May 16 00:34:38.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.789000 audit: BPF prog-id=9 op=LOAD May 16 00:34:38.790778 systemd[1]: Starting systemd-networkd.service... May 16 00:34:38.820671 systemd-networkd[739]: lo: Link UP May 16 00:34:38.820682 systemd-networkd[739]: lo: Gained carrier May 16 00:34:38.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.821280 systemd-networkd[739]: Enumeration completed May 16 00:34:38.821413 systemd[1]: Started systemd-networkd.service. May 16 00:34:38.821645 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:34:38.822486 systemd[1]: Reached target network.target. May 16 00:34:38.823472 systemd-networkd[739]: eth0: Link UP May 16 00:34:38.823475 systemd-networkd[739]: eth0: Gained carrier May 16 00:34:38.825556 systemd[1]: Starting iscsiuio.service... May 16 00:34:38.836155 ignition[646]: Ignition 2.14.0 May 16 00:34:38.836166 ignition[646]: Stage: fetch-offline May 16 00:34:38.836206 ignition[646]: no configs at "/usr/lib/ignition/base.d" May 16 00:34:38.836215 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:34:38.839057 systemd[1]: Started iscsiuio.service. May 16 00:34:38.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:34:38.836383 ignition[646]: parsed url from cmdline: "" May 16 00:34:38.836386 ignition[646]: no config URL provided May 16 00:34:38.836391 ignition[646]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:34:38.841838 systemd[1]: Starting iscsid.service... May 16 00:34:38.836399 ignition[646]: no config at "/usr/lib/ignition/user.ign" May 16 00:34:38.846238 iscsid[746]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 16 00:34:38.846238 iscsid[746]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log May 16 00:34:38.846238 iscsid[746]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 16 00:34:38.846238 iscsid[746]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 16 00:34:38.846238 iscsid[746]: If using hardware iscsi like qla4xxx this message can be ignored. May 16 00:34:38.846238 iscsid[746]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 16 00:34:38.846238 iscsid[746]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 16 00:34:38.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.842974 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:34:38.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=?
res=success' May 16 00:34:38.836417 ignition[646]: op(1): [started] loading QEMU firmware config module May 16 00:34:38.848065 systemd[1]: Started iscsid.service. May 16 00:34:38.836422 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 00:34:38.849725 systemd[1]: Starting dracut-initqueue.service... May 16 00:34:38.842412 ignition[646]: op(1): [finished] loading QEMU firmware config module May 16 00:34:38.860162 systemd[1]: Finished dracut-initqueue.service. May 16 00:34:38.859581 ignition[646]: parsing config with SHA512: 5811aca166fcaec4b087ee6f69a13e4d52836f28b60d40a80c289b313824ae52c9a9fdc3b0259e22ac2bc50242195ed09ac18cac4cef0df5766c555957e7e43c May 16 00:34:38.862029 systemd[1]: Reached target remote-fs-pre.target. May 16 00:34:38.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.870529 ignition[646]: fetch-offline: fetch-offline passed May 16 00:34:38.864108 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:34:38.870581 ignition[646]: Ignition finished successfully May 16 00:34:38.866865 systemd[1]: Reached target remote-fs.target. May 16 00:34:38.869175 systemd[1]: Starting dracut-pre-mount.service... May 16 00:34:38.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.870226 unknown[646]: fetched base config from "system" May 16 00:34:38.870233 unknown[646]: fetched user config from "qemu" May 16 00:34:38.871934 systemd[1]: Finished ignition-fetch-offline.service. May 16 00:34:38.873579 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:34:38.874456 systemd[1]: Starting ignition-kargs.service... 
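The iscsid warnings above are harmless on this QEMU guest (no software iSCSI targets are used), but on a host that does need software iSCSI the daemon's advice amounts to creating /etc/iscsi/initiatorname.iscsi with a well-formed IQN. A minimal sketch, using a temporary directory as a stand-in for /etc/iscsi so it runs unprivileged; the IQN value itself is a made-up example:

```shell
# Stand-in for /etc/iscsi so this sketch needs no root privileges.
conf_dir="$(mktemp -d)"
# IQN format: iqn.<yyyy-mm>.<reversed domain name>[:identifier]
printf 'InitiatorName=iqn.2025-05.io.example:%s\n' "$(uname -n)" \
  > "$conf_dir/initiatorname.iscsi"
cat "$conf_dir/initiatorname.iscsi"
```

With that file in place (under the real /etc/iscsi), iscsid can log in to and discover targets over iscsi_tcp; hardware HBAs like qla4xxx carry their own initiator name, which is why the message says it can be ignored in that case.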
May 16 00:34:38.884620 ignition[758]: Ignition 2.14.0 May 16 00:34:38.878386 systemd[1]: Finished dracut-pre-mount.service. May 16 00:34:38.884627 ignition[758]: Stage: kargs May 16 00:34:38.887095 systemd[1]: Finished ignition-kargs.service. May 16 00:34:38.887000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.884741 ignition[758]: no configs at "/usr/lib/ignition/base.d" May 16 00:34:38.884750 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:34:38.889301 systemd[1]: Starting ignition-disks.service... May 16 00:34:38.885427 ignition[758]: kargs: kargs passed May 16 00:34:38.885471 ignition[758]: Ignition finished successfully May 16 00:34:38.896564 ignition[767]: Ignition 2.14.0 May 16 00:34:38.896576 ignition[767]: Stage: disks May 16 00:34:38.896668 ignition[767]: no configs at "/usr/lib/ignition/base.d" May 16 00:34:38.896678 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:34:38.898214 systemd[1]: Finished ignition-disks.service. May 16 00:34:38.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.897380 ignition[767]: disks: disks passed May 16 00:34:38.900544 systemd[1]: Reached target initrd-root-device.target. May 16 00:34:38.897422 ignition[767]: Ignition finished successfully May 16 00:34:38.902219 systemd[1]: Reached target local-fs-pre.target. May 16 00:34:38.903516 systemd[1]: Reached target local-fs.target. May 16 00:34:38.904989 systemd[1]: Reached target sysinit.target. May 16 00:34:38.906475 systemd[1]: Reached target basic.target. May 16 00:34:38.908810 systemd[1]: Starting systemd-fsck-root.service... 
May 16 00:34:38.919931 systemd-fsck[775]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 16 00:34:38.923829 systemd[1]: Finished systemd-fsck-root.service. May 16 00:34:38.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:38.926207 systemd[1]: Mounting sysroot.mount... May 16 00:34:38.932905 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 16 00:34:38.933475 systemd[1]: Mounted sysroot.mount. May 16 00:34:38.934258 systemd[1]: Reached target initrd-root-fs.target. May 16 00:34:38.936804 systemd[1]: Mounting sysroot-usr.mount... May 16 00:34:38.937916 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 16 00:34:38.937973 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:34:38.937997 systemd[1]: Reached target ignition-diskful.target. May 16 00:34:38.940035 systemd[1]: Mounted sysroot-usr.mount. May 16 00:34:38.941743 systemd[1]: Starting initrd-setup-root.service... May 16 00:34:38.946156 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:34:38.950770 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory May 16 00:34:38.954297 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:34:38.958774 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:34:38.988624 systemd[1]: Finished initrd-setup-root.service. May 16 00:34:38.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:34:38.990319 systemd[1]: Starting ignition-mount.service... May 16 00:34:38.991689 systemd[1]: Starting sysroot-boot.service... May 16 00:34:38.995829 bash[826]: umount: /sysroot/usr/share/oem: not mounted. May 16 00:34:39.004739 ignition[828]: INFO : Ignition 2.14.0 May 16 00:34:39.004739 ignition[828]: INFO : Stage: mount May 16 00:34:39.006292 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:34:39.006292 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:34:39.006292 ignition[828]: INFO : mount: mount passed May 16 00:34:39.006292 ignition[828]: INFO : Ignition finished successfully May 16 00:34:39.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:39.006604 systemd[1]: Finished ignition-mount.service. May 16 00:34:39.013812 systemd[1]: Finished sysroot-boot.service. May 16 00:34:39.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:39.625739 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 16 00:34:39.632719 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836) May 16 00:34:39.632749 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:34:39.633490 kernel: BTRFS info (device vda6): using free space tree May 16 00:34:39.633504 kernel: BTRFS info (device vda6): has skinny extents May 16 00:34:39.637439 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 16 00:34:39.639028 systemd[1]: Starting ignition-files.service... 
May 16 00:34:39.654319 ignition[856]: INFO : Ignition 2.14.0 May 16 00:34:39.654319 ignition[856]: INFO : Stage: files May 16 00:34:39.655984 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:34:39.655984 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:34:39.655984 ignition[856]: DEBUG : files: compiled without relabeling support, skipping May 16 00:34:39.659897 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:34:39.659897 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:34:39.662751 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:34:39.662751 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:34:39.665489 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:34:39.665489 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" May 16 00:34:39.665489 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:34:39.665489 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:34:39.665489 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:34:39.665489 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:34:39.665489 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> 
"/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:34:39.665489 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:34:39.665489 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 16 00:34:39.663014 unknown[856]: wrote ssh authorized keys file for user: core May 16 00:34:40.204059 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK May 16 00:34:40.496083 systemd-networkd[739]: eth0: Gained IPv6LL May 16 00:34:40.571812 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:34:40.571812 ignition[856]: INFO : files: op(7): [started] processing unit "coreos-metadata.service" May 16 00:34:40.575473 ignition[856]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:34:40.575473 ignition[856]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:34:40.575473 ignition[856]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service" May 16 00:34:40.575473 ignition[856]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:34:40.575473 ignition[856]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:34:40.610384 ignition[856]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:34:40.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" 
exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.613294 ignition[856]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:34:40.613294 ignition[856]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:34:40.613294 ignition[856]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:34:40.613294 ignition[856]: INFO : files: files passed May 16 00:34:40.613294 ignition[856]: INFO : Ignition finished successfully May 16 00:34:40.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.619000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.612199 systemd[1]: Finished ignition-files.service. May 16 00:34:40.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.613968 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 16 00:34:40.615712 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 16 00:34:40.627046 initrd-setup-root-after-ignition[882]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 16 00:34:40.616334 systemd[1]: Starting ignition-quench.service... 
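The files-stage ops above show Ignition staging a systemd-sysext image: it writes the .raw payload under /opt/extensions and then points a symlink at it from /etc/extensions, which is where systemd-sysext discovers extension images. Replayed by hand in a scratch directory standing in for the sysroot:

```shell
# Scratch directory as a stand-in for the Ignition sysroot.
root="$(mktemp -d)"
mkdir -p "$root/etc/extensions" "$root/opt/extensions/kubernetes"
# Empty placeholder for the downloaded sysext image.
: > "$root/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
# systemd-sysext activates any image (or symlink to one) under /etc/extensions,
# so the link's name ("kubernetes.raw") is what selects the extension.
ln -s /opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw \
      "$root/etc/extensions/kubernetes.raw"
readlink "$root/etc/extensions/kubernetes.raw"
```

Keeping the versioned file under /opt and a stable-named symlink under /etc/extensions lets an update swap the link target without touching the activation path.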
May 16 00:34:40.629386 initrd-setup-root-after-ignition[884]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:34:40.619838 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:34:40.619930 systemd[1]: Finished ignition-quench.service. May 16 00:34:40.621349 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 16 00:34:40.622601 systemd[1]: Reached target ignition-complete.target. May 16 00:34:40.624538 systemd[1]: Starting initrd-parse-etc.service... May 16 00:34:40.636490 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:34:40.636572 systemd[1]: Finished initrd-parse-etc.service. May 16 00:34:40.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.638000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.638331 systemd[1]: Reached target initrd-fs.target. May 16 00:34:40.639640 systemd[1]: Reached target initrd.target. May 16 00:34:40.641149 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 16 00:34:40.641904 systemd[1]: Starting dracut-pre-pivot.service... May 16 00:34:40.651931 systemd[1]: Finished dracut-pre-pivot.service. May 16 00:34:40.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.653406 systemd[1]: Starting initrd-cleanup.service... May 16 00:34:40.661145 systemd[1]: Stopped target nss-lookup.target. May 16 00:34:40.662018 systemd[1]: Stopped target remote-cryptsetup.target. 
May 16 00:34:40.663437 systemd[1]: Stopped target timers.target. May 16 00:34:40.664762 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:34:40.665000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.664864 systemd[1]: Stopped dracut-pre-pivot.service. May 16 00:34:40.666148 systemd[1]: Stopped target initrd.target. May 16 00:34:40.667601 systemd[1]: Stopped target basic.target. May 16 00:34:40.668863 systemd[1]: Stopped target ignition-complete.target. May 16 00:34:40.670222 systemd[1]: Stopped target ignition-diskful.target. May 16 00:34:40.671582 systemd[1]: Stopped target initrd-root-device.target. May 16 00:34:40.673028 systemd[1]: Stopped target remote-fs.target. May 16 00:34:40.674941 systemd[1]: Stopped target remote-fs-pre.target. May 16 00:34:40.676760 systemd[1]: Stopped target sysinit.target. May 16 00:34:40.679167 systemd[1]: Stopped target local-fs.target. May 16 00:34:40.680616 systemd[1]: Stopped target local-fs-pre.target. May 16 00:34:40.682530 systemd[1]: Stopped target swap.target. May 16 00:34:40.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.684481 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:34:40.684596 systemd[1]: Stopped dracut-pre-mount.service. May 16 00:34:40.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.685995 systemd[1]: Stopped target cryptsetup.target. 
May 16 00:34:40.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.687165 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:34:40.687265 systemd[1]: Stopped dracut-initqueue.service. May 16 00:34:40.689502 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:34:40.689596 systemd[1]: Stopped ignition-fetch-offline.service. May 16 00:34:40.690932 systemd[1]: Stopped target paths.target. May 16 00:34:40.692182 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:34:40.695932 systemd[1]: Stopped systemd-ask-password-console.path. May 16 00:34:40.697166 systemd[1]: Stopped target slices.target. May 16 00:34:40.698625 systemd[1]: Stopped target sockets.target. May 16 00:34:40.701000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.699963 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:34:40.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.700068 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 16 00:34:40.706404 iscsid[746]: iscsid shutting down. May 16 00:34:40.701383 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:34:40.701473 systemd[1]: Stopped ignition-files.service. May 16 00:34:40.703484 systemd[1]: Stopping ignition-mount.service... May 16 00:34:40.704328 systemd[1]: Stopping iscsid.service... May 16 00:34:40.710469 systemd[1]: kmod-static-nodes.service: Deactivated successfully. 
May 16 00:34:40.711353 ignition[897]: INFO : Ignition 2.14.0 May 16 00:34:40.711353 ignition[897]: INFO : Stage: umount May 16 00:34:40.711353 ignition[897]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:34:40.711353 ignition[897]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:34:40.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.715413 ignition[897]: INFO : umount: umount passed May 16 00:34:40.715413 ignition[897]: INFO : Ignition finished successfully May 16 00:34:40.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.712036 systemd[1]: Stopped kmod-static-nodes.service. May 16 00:34:40.717000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.714435 systemd[1]: Stopping sysroot-boot.service... May 16 00:34:40.715949 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:34:40.720000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.716081 systemd[1]: Stopped systemd-udev-trigger.service. May 16 00:34:40.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.717485 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:34:40.717569 systemd[1]: Stopped dracut-pre-trigger.service. 
May 16 00:34:40.720208 systemd[1]: iscsid.service: Deactivated successfully. May 16 00:34:40.724000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.720297 systemd[1]: Stopped iscsid.service. May 16 00:34:40.727000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.721458 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:34:40.721529 systemd[1]: Stopped ignition-mount.service. May 16 00:34:40.723124 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:34:40.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.723192 systemd[1]: Closed iscsid.socket. May 16 00:34:40.724042 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:34:40.724080 systemd[1]: Stopped ignition-disks.service. May 16 00:34:40.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.736000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:34:40.725860 systemd[1]: ignition-kargs.service: Deactivated successfully. 
May 16 00:34:40.725978 systemd[1]: Stopped ignition-kargs.service.
May 16 00:34:40.727464 systemd[1]: ignition-setup.service: Deactivated successfully.
May 16 00:34:40.727504 systemd[1]: Stopped ignition-setup.service.
May 16 00:34:40.728985 systemd[1]: Stopping iscsiuio.service...
May 16 00:34:40.731672 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 16 00:34:40.732109 systemd[1]: iscsiuio.service: Deactivated successfully.
May 16 00:34:40.732197 systemd[1]: Stopped iscsiuio.service.
May 16 00:34:40.733494 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 16 00:34:40.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.733579 systemd[1]: Finished initrd-cleanup.service.
May 16 00:34:40.737075 systemd[1]: Stopped target network.target.
May 16 00:34:40.739221 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 16 00:34:40.739256 systemd[1]: Closed iscsiuio.socket.
May 16 00:34:40.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.740031 systemd[1]: Stopping systemd-networkd.service...
May 16 00:34:40.757000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.741273 systemd[1]: Stopping systemd-resolved.service...
May 16 00:34:40.759000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.745908 systemd-networkd[739]: eth0: DHCPv6 lease lost
May 16 00:34:40.746849 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 16 00:34:40.746953 systemd[1]: Stopped systemd-networkd.service.
May 16 00:34:40.749706 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 16 00:34:40.764000 audit: BPF prog-id=9 op=UNLOAD
May 16 00:34:40.749733 systemd[1]: Closed systemd-networkd.socket.
May 16 00:34:40.753802 systemd[1]: Stopping network-cleanup.service...
May 16 00:34:40.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.755062 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 16 00:34:40.755120 systemd[1]: Stopped parse-ip-for-networkd.service.
May 16 00:34:40.768000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.756686 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 16 00:34:40.770000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.756724 systemd[1]: Stopped systemd-sysctl.service.
May 16 00:34:40.758428 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 16 00:34:40.758466 systemd[1]: Stopped systemd-modules-load.service.
May 16 00:34:40.774000 audit: BPF prog-id=6 op=UNLOAD
May 16 00:34:40.759382 systemd[1]: Stopping systemd-udevd.service...
May 16 00:34:40.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.765229 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 16 00:34:40.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.765694 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 16 00:34:40.779000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.765777 systemd[1]: Stopped systemd-resolved.service.
May 16 00:34:40.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.767266 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 16 00:34:40.767341 systemd[1]: Stopped sysroot-boot.service.
May 16 00:34:40.769223 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 16 00:34:40.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.769330 systemd[1]: Stopped systemd-udevd.service.
May 16 00:34:40.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.771354 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 16 00:34:40.771415 systemd[1]: Closed systemd-udevd-control.socket.
May 16 00:34:40.772746 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 16 00:34:40.772777 systemd[1]: Closed systemd-udevd-kernel.socket.
May 16 00:34:40.774553 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 16 00:34:40.774603 systemd[1]: Stopped dracut-pre-udev.service.
May 16 00:34:40.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.776123 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 16 00:34:40.776162 systemd[1]: Stopped dracut-cmdline.service.
May 16 00:34:40.777432 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 16 00:34:40.777472 systemd[1]: Stopped dracut-cmdline-ask.service.
May 16 00:34:40.779159 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 16 00:34:40.779203 systemd[1]: Stopped initrd-setup-root.service.
May 16 00:34:40.781426 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
May 16 00:34:40.782847 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 16 00:34:40.782921 systemd[1]: Stopped systemd-vconsole-setup.service.
May 16 00:34:40.784581 systemd[1]: network-cleanup.service: Deactivated successfully.
May 16 00:34:40.784671 systemd[1]: Stopped network-cleanup.service.
May 16 00:34:40.790012 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 16 00:34:40.790095 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
May 16 00:34:40.791673 systemd[1]: Reached target initrd-switch-root.target.
May 16 00:34:40.793614 systemd[1]: Starting initrd-switch-root.service...
May 16 00:34:40.800265 systemd[1]: Switching root.
May 16 00:34:40.818473 systemd-journald[289]: Journal stopped
May 16 00:34:42.861627 systemd-journald[289]: Received SIGTERM from PID 1 (systemd).
May 16 00:34:42.861687 kernel: SELinux: Class mctp_socket not defined in policy.
May 16 00:34:42.861700 kernel: SELinux: Class anon_inode not defined in policy.
May 16 00:34:42.861710 kernel: SELinux: the above unknown classes and permissions will be allowed
May 16 00:34:42.861721 kernel: SELinux: policy capability network_peer_controls=1
May 16 00:34:42.861730 kernel: SELinux: policy capability open_perms=1
May 16 00:34:42.861740 kernel: SELinux: policy capability extended_socket_class=1
May 16 00:34:42.861749 kernel: SELinux: policy capability always_check_network=0
May 16 00:34:42.861759 kernel: SELinux: policy capability cgroup_seclabel=1
May 16 00:34:42.861769 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 16 00:34:42.861779 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 16 00:34:42.861789 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 16 00:34:42.861799 systemd[1]: Successfully loaded SELinux policy in 39.048ms.
May 16 00:34:42.861819 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.191ms.
May 16 00:34:42.861830 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 16 00:34:42.861841 systemd[1]: Detected virtualization kvm.
May 16 00:34:42.861851 systemd[1]: Detected architecture arm64.
May 16 00:34:42.861863 systemd[1]: Detected first boot.
May 16 00:34:42.861888 systemd[1]: Initializing machine ID from VM UUID.
May 16 00:34:42.861900 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
May 16 00:34:42.861915 systemd[1]: Populated /etc with preset unit settings.
May 16 00:34:42.861926 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 16 00:34:42.861937 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 16 00:34:42.861953 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:34:42.861967 kernel: kauditd_printk_skb: 79 callbacks suppressed
May 16 00:34:42.861978 kernel: audit: type=1334 audit(1747355682.675:83): prog-id=12 op=LOAD
May 16 00:34:42.861988 kernel: audit: type=1334 audit(1747355682.675:84): prog-id=3 op=UNLOAD
May 16 00:34:42.861998 kernel: audit: type=1334 audit(1747355682.675:85): prog-id=13 op=LOAD
May 16 00:34:42.862007 kernel: audit: type=1334 audit(1747355682.675:86): prog-id=14 op=LOAD
May 16 00:34:42.862017 kernel: audit: type=1334 audit(1747355682.675:87): prog-id=4 op=UNLOAD
May 16 00:34:42.862027 kernel: audit: type=1334 audit(1747355682.675:88): prog-id=5 op=UNLOAD
May 16 00:34:42.862037 kernel: audit: type=1334 audit(1747355682.676:89): prog-id=15 op=LOAD
May 16 00:34:42.862047 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 16 00:34:42.862056 kernel: audit: type=1334 audit(1747355682.676:90): prog-id=12 op=UNLOAD
May 16 00:34:42.862069 systemd[1]: Stopped initrd-switch-root.service.
May 16 00:34:42.862081 kernel: audit: type=1334 audit(1747355682.677:91): prog-id=16 op=LOAD
May 16 00:34:42.862092 kernel: audit: type=1334 audit(1747355682.678:92): prog-id=17 op=LOAD
May 16 00:34:42.862102 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 16 00:34:42.862112 systemd[1]: Created slice system-addon\x2dconfig.slice.
May 16 00:34:42.862123 systemd[1]: Created slice system-addon\x2drun.slice.
May 16 00:34:42.862135 systemd[1]: Created slice system-getty.slice.
May 16 00:34:42.862145 systemd[1]: Created slice system-modprobe.slice.
May 16 00:34:42.862157 systemd[1]: Created slice system-serial\x2dgetty.slice.
May 16 00:34:42.862168 systemd[1]: Created slice system-system\x2dcloudinit.slice.
May 16 00:34:42.862179 systemd[1]: Created slice system-systemd\x2dfsck.slice.
May 16 00:34:42.862189 systemd[1]: Created slice user.slice.
May 16 00:34:42.862200 systemd[1]: Started systemd-ask-password-console.path.
May 16 00:34:42.862210 systemd[1]: Started systemd-ask-password-wall.path.
May 16 00:34:42.862221 systemd[1]: Set up automount boot.automount.
May 16 00:34:42.862231 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
May 16 00:34:42.862242 systemd[1]: Stopped target initrd-switch-root.target.
May 16 00:34:42.862253 systemd[1]: Stopped target initrd-fs.target.
May 16 00:34:42.862263 systemd[1]: Stopped target initrd-root-fs.target.
May 16 00:34:42.862275 systemd[1]: Reached target integritysetup.target.
May 16 00:34:42.862285 systemd[1]: Reached target remote-cryptsetup.target.
May 16 00:34:42.862296 systemd[1]: Reached target remote-fs.target.
May 16 00:34:42.862306 systemd[1]: Reached target slices.target.
May 16 00:34:42.862317 systemd[1]: Reached target swap.target.
May 16 00:34:42.862330 systemd[1]: Reached target torcx.target.
May 16 00:34:42.862341 systemd[1]: Reached target veritysetup.target.
May 16 00:34:42.862351 systemd[1]: Listening on systemd-coredump.socket.
May 16 00:34:42.862361 systemd[1]: Listening on systemd-initctl.socket.
May 16 00:34:42.862372 systemd[1]: Listening on systemd-networkd.socket.
May 16 00:34:42.862388 systemd[1]: Listening on systemd-udevd-control.socket.
May 16 00:34:42.862401 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 16 00:34:42.862411 systemd[1]: Listening on systemd-userdbd.socket.
May 16 00:34:42.862422 systemd[1]: Mounting dev-hugepages.mount...
May 16 00:34:42.862434 systemd[1]: Mounting dev-mqueue.mount...
May 16 00:34:42.862445 systemd[1]: Mounting media.mount...
May 16 00:34:42.862455 systemd[1]: Mounting sys-kernel-debug.mount...
May 16 00:34:42.862465 systemd[1]: Mounting sys-kernel-tracing.mount...
May 16 00:34:42.862476 systemd[1]: Mounting tmp.mount...
May 16 00:34:42.862486 systemd[1]: Starting flatcar-tmpfiles.service...
May 16 00:34:42.862497 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:34:42.862507 systemd[1]: Starting kmod-static-nodes.service...
May 16 00:34:42.862518 systemd[1]: Starting modprobe@configfs.service...
May 16 00:34:42.862530 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:34:42.862540 systemd[1]: Starting modprobe@drm.service...
May 16 00:34:42.862551 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:34:42.862563 systemd[1]: Starting modprobe@fuse.service...
May 16 00:34:42.862574 systemd[1]: Starting modprobe@loop.service...
May 16 00:34:42.862586 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 16 00:34:42.862598 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 16 00:34:42.862608 systemd[1]: Stopped systemd-fsck-root.service.
May 16 00:34:42.862620 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 16 00:34:42.862630 systemd[1]: Stopped systemd-fsck-usr.service.
May 16 00:34:42.862641 systemd[1]: Stopped systemd-journald.service.
May 16 00:34:42.862651 systemd[1]: Starting systemd-journald.service...
May 16 00:34:42.862661 kernel: loop: module loaded
May 16 00:34:42.862671 systemd[1]: Starting systemd-modules-load.service...
May 16 00:34:42.862682 systemd[1]: Starting systemd-network-generator.service...
May 16 00:34:42.862692 systemd[1]: Starting systemd-remount-fs.service...
May 16 00:34:42.862703 systemd[1]: Starting systemd-udev-trigger.service...
May 16 00:34:42.862713 systemd[1]: verity-setup.service: Deactivated successfully.
May 16 00:34:42.862725 systemd[1]: Stopped verity-setup.service.
May 16 00:34:42.862735 kernel: fuse: init (API version 7.34)
May 16 00:34:42.862745 systemd[1]: Mounted dev-hugepages.mount.
May 16 00:34:42.862755 systemd[1]: Mounted dev-mqueue.mount.
May 16 00:34:42.862765 systemd[1]: Mounted media.mount.
May 16 00:34:42.862776 systemd[1]: Mounted sys-kernel-debug.mount.
May 16 00:34:42.862787 systemd[1]: Mounted sys-kernel-tracing.mount.
May 16 00:34:42.862797 systemd[1]: Mounted tmp.mount.
May 16 00:34:42.862808 systemd[1]: Finished kmod-static-nodes.service.
May 16 00:34:42.862820 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 16 00:34:42.862830 systemd[1]: Finished modprobe@configfs.service.
May 16 00:34:42.862841 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:34:42.862851 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:34:42.862862 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:34:42.862880 systemd[1]: Finished modprobe@drm.service.
May 16 00:34:42.862893 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:34:42.862903 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:34:42.862914 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 16 00:34:42.862924 systemd[1]: Finished modprobe@fuse.service.
May 16 00:34:42.862936 systemd-journald[987]: Journal started
May 16 00:34:42.862977 systemd-journald[987]: Runtime Journal (/run/log/journal/84b10aa9238a414fbbe24a68fa2e1b8e) is 6.0M, max 48.7M, 42.6M free.
May 16 00:34:40.884000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
May 16 00:34:40.958000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 16 00:34:40.958000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
May 16 00:34:40.958000 audit: BPF prog-id=10 op=LOAD
May 16 00:34:40.958000 audit: BPF prog-id=10 op=UNLOAD
May 16 00:34:40.958000 audit: BPF prog-id=11 op=LOAD
May 16 00:34:40.958000 audit: BPF prog-id=11 op=UNLOAD
May 16 00:34:40.998000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
May 16 00:34:40.998000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001220cc a1=400011c0a8 a2=4000120140 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:34:40.998000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 16 00:34:40.999000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
May 16 00:34:40.999000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001221a5 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:34:40.999000 audit: CWD cwd="/"
May 16 00:34:40.999000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:34:40.999000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 16 00:34:40.999000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
May 16 00:34:42.675000 audit: BPF prog-id=12 op=LOAD
May 16 00:34:42.675000 audit: BPF prog-id=3 op=UNLOAD
May 16 00:34:42.675000 audit: BPF prog-id=13 op=LOAD
May 16 00:34:42.675000 audit: BPF prog-id=14 op=LOAD
May 16 00:34:42.675000 audit: BPF prog-id=4 op=UNLOAD
May 16 00:34:42.675000 audit: BPF prog-id=5 op=UNLOAD
May 16 00:34:42.676000 audit: BPF prog-id=15 op=LOAD
May 16 00:34:42.676000 audit: BPF prog-id=12 op=UNLOAD
May 16 00:34:42.677000 audit: BPF prog-id=16 op=LOAD
May 16 00:34:42.678000 audit: BPF prog-id=17 op=LOAD
May 16 00:34:42.678000 audit: BPF prog-id=13 op=UNLOAD
May 16 00:34:42.678000 audit: BPF prog-id=14 op=UNLOAD
May 16 00:34:42.679000 audit: BPF prog-id=18 op=LOAD
May 16 00:34:42.679000 audit: BPF prog-id=15 op=UNLOAD
May 16 00:34:42.680000 audit: BPF prog-id=19 op=LOAD
May 16 00:34:42.680000 audit: BPF prog-id=20 op=LOAD
May 16 00:34:42.680000 audit: BPF prog-id=16 op=UNLOAD
May 16 00:34:42.680000 audit: BPF prog-id=17 op=UNLOAD
May 16 00:34:42.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.687000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.687000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.697000 audit: BPF prog-id=18 op=UNLOAD
May 16 00:34:42.797000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.802000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.806000 audit: BPF prog-id=21 op=LOAD
May 16 00:34:42.806000 audit: BPF prog-id=22 op=LOAD
May 16 00:34:42.806000 audit: BPF prog-id=23 op=LOAD
May 16 00:34:42.806000 audit: BPF prog-id=19 op=UNLOAD
May 16 00:34:42.806000 audit: BPF prog-id=20 op=UNLOAD
May 16 00:34:42.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.853000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.853000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.855000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
May 16 00:34:42.855000 audit[987]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd2d8e520 a2=4000 a3=1 items=0 ppid=1 pid=987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:34:42.855000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
May 16 00:34:42.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:40.997607 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 16 00:34:42.674620 systemd[1]: Queued start job for default target multi-user.target.
May 16 00:34:40.997855 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 16 00:34:42.674632 systemd[1]: Unnecessary job was removed for dev-vda6.device.
May 16 00:34:40.997883 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 16 00:34:42.682006 systemd[1]: systemd-journald.service: Deactivated successfully.
May 16 00:34:42.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.864892 systemd[1]: Started systemd-journald.service.
May 16 00:34:40.997913 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
May 16 00:34:40.997922 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="skipped missing lower profile" missing profile=oem
May 16 00:34:40.997949 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
May 16 00:34:42.865000 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:34:40.997960 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
May 16 00:34:42.865130 systemd[1]: Finished modprobe@loop.service.
May 16 00:34:40.998142 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
May 16 00:34:40.998175 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
May 16 00:34:40.998186 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
May 16 00:34:40.998626 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
May 16 00:34:40.998660 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
May 16 00:34:40.998677 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7
May 16 00:34:40.998691 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
May 16 00:34:40.998707 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7
May 16 00:34:40.998719 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
May 16 00:34:42.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.426689 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:42Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 16 00:34:42.426963 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:42Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 16 00:34:42.427071 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:42Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 16 00:34:42.427280 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:42Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
May 16 00:34:42.427334 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:42Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
May 16 00:34:42.427400 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-16T00:34:42Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
May 16 00:34:42.866491 systemd[1]: Finished systemd-modules-load.service.
May 16 00:34:42.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.867673 systemd[1]: Finished systemd-network-generator.service.
May 16 00:34:42.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.868845 systemd[1]: Finished systemd-remount-fs.service.
May 16 00:34:42.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.870146 systemd[1]: Reached target network-pre.target.
May 16 00:34:42.872215 systemd[1]: Mounting sys-fs-fuse-connections.mount...
May 16 00:34:42.878122 systemd[1]: Mounting sys-kernel-config.mount...
May 16 00:34:42.879116 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 16 00:34:42.881654 systemd[1]: Starting systemd-hwdb-update.service...
May 16 00:34:42.883639 systemd[1]: Starting systemd-journal-flush.service...
May 16 00:34:42.884738 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:34:42.888466 systemd[1]: Starting systemd-random-seed.service...
May 16 00:34:42.889324 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:34:42.890449 systemd[1]: Starting systemd-sysctl.service...
May 16 00:34:42.893392 systemd[1]: Mounted sys-fs-fuse-connections.mount.
May 16 00:34:42.893707 systemd-journald[987]: Time spent on flushing to /var/log/journal/84b10aa9238a414fbbe24a68fa2e1b8e is 12.545ms for 983 entries.
May 16 00:34:42.893707 systemd-journald[987]: System Journal (/var/log/journal/84b10aa9238a414fbbe24a68fa2e1b8e) is 8.0M, max 195.6M, 187.6M free.
May 16 00:34:42.913251 systemd-journald[987]: Received client request to flush runtime journal.
May 16 00:34:42.903000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.895449 systemd[1]: Mounted sys-kernel-config.mount.
May 16 00:34:42.903672 systemd[1]: Finished systemd-random-seed.service.
May 16 00:34:42.904838 systemd[1]: Reached target first-boot-complete.target.
May 16 00:34:42.910744 systemd[1]: Finished systemd-sysctl.service.
May 16 00:34:42.914186 systemd[1]: Finished systemd-journal-flush.service.
May 16 00:34:42.914000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.925088 systemd[1]: Finished flatcar-tmpfiles.service.
May 16 00:34:42.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.926223 systemd[1]: Finished systemd-udev-trigger.service.
May 16 00:34:42.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:42.928359 systemd[1]: Starting systemd-sysusers.service...
May 16 00:34:42.930153 systemd[1]: Starting systemd-udev-settle.service...
May 16 00:34:42.937453 udevadm[1032]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 16 00:34:42.944837 systemd[1]: Finished systemd-sysusers.service.
May 16 00:34:42.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.245686 systemd[1]: Finished systemd-hwdb-update.service.
May 16 00:34:43.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.246000 audit: BPF prog-id=24 op=LOAD
May 16 00:34:43.246000 audit: BPF prog-id=25 op=LOAD
May 16 00:34:43.246000 audit: BPF prog-id=7 op=UNLOAD
May 16 00:34:43.246000 audit: BPF prog-id=8 op=UNLOAD
May 16 00:34:43.247909 systemd[1]: Starting systemd-udevd.service...
May 16 00:34:43.266721 systemd-udevd[1033]: Using default interface naming scheme 'v252'.
May 16 00:34:43.279941 systemd[1]: Started systemd-udevd.service.
May 16 00:34:43.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.280000 audit: BPF prog-id=26 op=LOAD
May 16 00:34:43.282232 systemd[1]: Starting systemd-networkd.service...
May 16 00:34:43.288000 audit: BPF prog-id=27 op=LOAD
May 16 00:34:43.288000 audit: BPF prog-id=28 op=LOAD
May 16 00:34:43.288000 audit: BPF prog-id=29 op=LOAD
May 16 00:34:43.289689 systemd[1]: Starting systemd-userdbd.service...
May 16 00:34:43.303094 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped.
May 16 00:34:43.313826 systemd[1]: Started systemd-userdbd.service.
May 16 00:34:43.314000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.354560 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 16 00:34:43.368690 systemd-networkd[1040]: lo: Link UP
May 16 00:34:43.368916 systemd-networkd[1040]: lo: Gained carrier
May 16 00:34:43.369295 systemd-networkd[1040]: Enumeration completed
May 16 00:34:43.369484 systemd[1]: Started systemd-networkd.service.
May 16 00:34:43.369835 systemd-networkd[1040]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 16 00:34:43.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.371142 systemd-networkd[1040]: eth0: Link UP
May 16 00:34:43.371229 systemd-networkd[1040]: eth0: Gained carrier
May 16 00:34:43.373208 systemd[1]: Finished systemd-udev-settle.service.
May 16 00:34:43.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.375129 systemd[1]: Starting lvm2-activation-early.service...
May 16 00:34:43.382977 systemd-networkd[1040]: eth0: DHCPv4 address 10.0.0.35/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 16 00:34:43.386437 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:34:43.409646 systemd[1]: Finished lvm2-activation-early.service.
May 16 00:34:43.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.410654 systemd[1]: Reached target cryptsetup.target.
May 16 00:34:43.412500 systemd[1]: Starting lvm2-activation.service...
May 16 00:34:43.415774 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 16 00:34:43.448771 systemd[1]: Finished lvm2-activation.service.
May 16 00:34:43.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.449774 systemd[1]: Reached target local-fs-pre.target.
May 16 00:34:43.450650 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 16 00:34:43.450680 systemd[1]: Reached target local-fs.target.
May 16 00:34:43.451472 systemd[1]: Reached target machines.target.
May 16 00:34:43.453468 systemd[1]: Starting ldconfig.service...
May 16 00:34:43.454549 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:34:43.454605 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:34:43.455605 systemd[1]: Starting systemd-boot-update.service...
May 16 00:34:43.457655 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 16 00:34:43.459824 systemd[1]: Starting systemd-machine-id-commit.service...
May 16 00:34:43.461852 systemd[1]: Starting systemd-sysext.service...
May 16 00:34:43.463491 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl)
May 16 00:34:43.465083 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 16 00:34:43.469226 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 16 00:34:43.469000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.474859 systemd[1]: Unmounting usr-share-oem.mount...
May 16 00:34:43.478443 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 16 00:34:43.478620 systemd[1]: Unmounted usr-share-oem.mount.
May 16 00:34:43.500913 kernel: loop0: detected capacity change from 0 to 203944
May 16 00:34:43.538808 systemd[1]: Finished systemd-machine-id-commit.service.
May 16 00:34:43.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.544551 systemd-fsck[1078]: fsck.fat 4.2 (2021-01-31)
May 16 00:34:43.544551 systemd-fsck[1078]: /dev/vda1: 236 files, 117310/258078 clusters
May 16 00:34:43.546922 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 16 00:34:43.550461 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 16 00:34:43.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.565899 kernel: loop1: detected capacity change from 0 to 203944
May 16 00:34:43.570343 (sd-sysext)[1081]: Using extensions 'kubernetes'.
May 16 00:34:43.570664 (sd-sysext)[1081]: Merged extensions into '/usr'.
May 16 00:34:43.587761 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:34:43.589025 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:34:43.591075 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:34:43.593103 systemd[1]: Starting modprobe@loop.service...
May 16 00:34:43.594087 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:34:43.594212 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:34:43.594954 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:34:43.595075 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:34:43.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.595000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.596506 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:34:43.596615 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:34:43.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.597907 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:34:43.598034 systemd[1]: Finished modprobe@loop.service.
May 16 00:34:43.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.599342 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:34:43.599450 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:34:43.646112 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 16 00:34:43.650341 systemd[1]: Finished ldconfig.service.
May 16 00:34:43.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.835460 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 16 00:34:43.837354 systemd[1]: Mounting boot.mount...
May 16 00:34:43.839196 systemd[1]: Mounting usr-share-oem.mount...
May 16 00:34:43.843613 systemd[1]: Mounted usr-share-oem.mount.
May 16 00:34:43.845898 systemd[1]: Finished systemd-sysext.service.
May 16 00:34:43.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.846908 systemd[1]: Mounted boot.mount.
May 16 00:34:43.849574 systemd[1]: Starting ensure-sysext.service...
May 16 00:34:43.851661 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 16 00:34:43.856655 systemd[1]: Finished systemd-boot-update.service.
May 16 00:34:43.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:43.858035 systemd[1]: Reloading.
May 16 00:34:43.864892 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 16 00:34:43.867002 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 16 00:34:43.870118 systemd-tmpfiles[1089]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 16 00:34:43.902933 /usr/lib/systemd/system-generators/torcx-generator[1109]: time="2025-05-16T00:34:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 16 00:34:43.902962 /usr/lib/systemd/system-generators/torcx-generator[1109]: time="2025-05-16T00:34:43Z" level=info msg="torcx already run"
May 16 00:34:43.971196 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 16 00:34:43.971219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 16 00:34:43.991365 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:34:44.038000 audit: BPF prog-id=30 op=LOAD
May 16 00:34:44.038000 audit: BPF prog-id=26 op=UNLOAD
May 16 00:34:44.039000 audit: BPF prog-id=31 op=LOAD
May 16 00:34:44.039000 audit: BPF prog-id=27 op=UNLOAD
May 16 00:34:44.039000 audit: BPF prog-id=32 op=LOAD
May 16 00:34:44.039000 audit: BPF prog-id=33 op=LOAD
May 16 00:34:44.039000 audit: BPF prog-id=28 op=UNLOAD
May 16 00:34:44.039000 audit: BPF prog-id=29 op=UNLOAD
May 16 00:34:44.040000 audit: BPF prog-id=34 op=LOAD
May 16 00:34:44.040000 audit: BPF prog-id=35 op=LOAD
May 16 00:34:44.040000 audit: BPF prog-id=24 op=UNLOAD
May 16 00:34:44.040000 audit: BPF prog-id=25 op=UNLOAD
May 16 00:34:44.041000 audit: BPF prog-id=36 op=LOAD
May 16 00:34:44.041000 audit: BPF prog-id=21 op=UNLOAD
May 16 00:34:44.041000 audit: BPF prog-id=37 op=LOAD
May 16 00:34:44.041000 audit: BPF prog-id=38 op=LOAD
May 16 00:34:44.041000 audit: BPF prog-id=22 op=UNLOAD
May 16 00:34:44.041000 audit: BPF prog-id=23 op=UNLOAD
May 16 00:34:44.044604 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 16 00:34:44.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.049240 systemd[1]: Starting audit-rules.service...
May 16 00:34:44.051560 systemd[1]: Starting clean-ca-certificates.service...
May 16 00:34:44.054333 systemd[1]: Starting systemd-journal-catalog-update.service...
May 16 00:34:44.055000 audit: BPF prog-id=39 op=LOAD
May 16 00:34:44.063131 systemd[1]: Starting systemd-resolved.service...
May 16 00:34:44.066000 audit: BPF prog-id=40 op=LOAD
May 16 00:34:44.068689 systemd[1]: Starting systemd-timesyncd.service...
May 16 00:34:44.070913 systemd[1]: Starting systemd-update-utmp.service...
May 16 00:34:44.076148 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:34:44.075000 audit[1158]: SYSTEM_BOOT pid=1158 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.077616 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:34:44.079851 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:34:44.081981 systemd[1]: Starting modprobe@loop.service...
May 16 00:34:44.082865 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:34:44.083032 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:34:44.084135 systemd[1]: Finished clean-ca-certificates.service.
May 16 00:34:44.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.085641 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:34:44.085764 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:34:44.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.087097 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:34:44.087220 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:34:44.087000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.088573 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:34:44.088678 systemd[1]: Finished modprobe@loop.service.
May 16 00:34:44.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.090069 systemd[1]: Finished systemd-journal-catalog-update.service.
May 16 00:34:44.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.094851 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:34:44.096236 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:34:44.098235 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:34:44.100374 systemd[1]: Starting modprobe@loop.service...
May 16 00:34:44.101156 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:34:44.101345 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:34:44.102864 systemd[1]: Starting systemd-update-done.service...
May 16 00:34:44.105000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.107000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.103767 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:34:44.105184 systemd[1]: Finished systemd-update-utmp.service.
May 16 00:34:44.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.108000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.106522 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:34:44.106629 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:34:44.107910 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:34:44.108020 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:34:44.109334 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:34:44.109458 systemd[1]: Finished modprobe@loop.service.
May 16 00:34:44.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.110789 systemd[1]: Finished systemd-update-done.service.
May 16 00:34:44.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 16 00:34:44.115368 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 16 00:34:44.118446 systemd[1]: Starting modprobe@dm_mod.service...
May 16 00:34:44.120700 systemd[1]: Starting modprobe@drm.service...
May 16 00:34:44.123089 systemd[1]: Starting modprobe@efi_pstore.service...
May 16 00:34:44.123000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 16 00:34:44.123000 audit[1174]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc7e07590 a2=420 a3=0 items=0 ppid=1147 pid=1174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 16 00:34:44.123000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 16 00:34:44.124859 augenrules[1174]: No rules
May 16 00:34:44.125246 systemd[1]: Starting modprobe@loop.service...
May 16 00:34:44.126086 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 16 00:34:44.126251 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:34:44.127587 systemd[1]: Starting systemd-networkd-wait-online.service...
May 16 00:34:44.128682 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 16 00:34:44.129936 systemd[1]: Finished audit-rules.service.
May 16 00:34:44.131250 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 16 00:34:44.131376 systemd[1]: Finished modprobe@dm_mod.service.
May 16 00:34:44.132829 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 16 00:34:44.132962 systemd[1]: Finished modprobe@drm.service.
May 16 00:34:44.134151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 16 00:34:44.134266 systemd[1]: Finished modprobe@efi_pstore.service.
May 16 00:34:44.135621 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 16 00:34:44.135769 systemd[1]: Finished modprobe@loop.service.
May 16 00:34:44.137327 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 16 00:34:44.137433 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 16 00:34:44.138396 systemd[1]: Finished ensure-sysext.service.
May 16 00:34:44.145323 systemd-resolved[1151]: Positive Trust Anchors:
May 16 00:34:44.145573 systemd-resolved[1151]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 16 00:34:44.145653 systemd-resolved[1151]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 16 00:34:44.150706 systemd[1]: Started systemd-timesyncd.service.
May 16 00:34:44.151564 systemd-timesyncd[1157]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 16 00:34:44.151622 systemd-timesyncd[1157]: Initial clock synchronization to Fri 2025-05-16 00:34:43.829649 UTC.
May 16 00:34:44.152142 systemd[1]: Reached target time-set.target.
May 16 00:34:44.162268 systemd-resolved[1151]: Defaulting to hostname 'linux'.
May 16 00:34:44.163820 systemd[1]: Started systemd-resolved.service.
May 16 00:34:44.164773 systemd[1]: Reached target network.target.
May 16 00:34:44.165631 systemd[1]: Reached target nss-lookup.target.
May 16 00:34:44.166487 systemd[1]: Reached target sysinit.target.
May 16 00:34:44.167368 systemd[1]: Started motdgen.path.
May 16 00:34:44.168132 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 16 00:34:44.169419 systemd[1]: Started logrotate.timer.
May 16 00:34:44.170264 systemd[1]: Started mdadm.timer.
May 16 00:34:44.170977 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 16 00:34:44.171822 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 16 00:34:44.171858 systemd[1]: Reached target paths.target.
May 16 00:34:44.172615 systemd[1]: Reached target timers.target.
May 16 00:34:44.173767 systemd[1]: Listening on dbus.socket.
May 16 00:34:44.175646 systemd[1]: Starting docker.socket...
May 16 00:34:44.179208 systemd[1]: Listening on sshd.socket.
May 16 00:34:44.180149 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:34:44.180759 systemd[1]: Listening on docker.socket.
May 16 00:34:44.181681 systemd[1]: Reached target sockets.target.
May 16 00:34:44.182498 systemd[1]: Reached target basic.target.
May 16 00:34:44.183315 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 16 00:34:44.183348 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 16 00:34:44.184313 systemd[1]: Starting containerd.service...
May 16 00:34:44.186087 systemd[1]: Starting dbus.service...
May 16 00:34:44.187983 systemd[1]: Starting enable-oem-cloudinit.service...
May 16 00:34:44.190035 systemd[1]: Starting extend-filesystems.service...
May 16 00:34:44.191014 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 16 00:34:44.192378 systemd[1]: Starting motdgen.service...
May 16 00:34:44.194679 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 16 00:34:44.197559 systemd[1]: Starting sshd-keygen.service...
May 16 00:34:44.200180 jq[1189]: false
May 16 00:34:44.202985 systemd[1]: Starting systemd-logind.service...
May 16 00:34:44.203655 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 16 00:34:44.203732 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 16 00:34:44.204205 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 16 00:34:44.204947 systemd[1]: Starting update-engine.service...
May 16 00:34:44.206928 extend-filesystems[1190]: Found loop1
May 16 00:34:44.206928 extend-filesystems[1190]: Found vda
May 16 00:34:44.220705 extend-filesystems[1190]: Found vda1
May 16 00:34:44.220705 extend-filesystems[1190]: Found vda2
May 16 00:34:44.220705 extend-filesystems[1190]: Found vda3
May 16 00:34:44.220705 extend-filesystems[1190]: Found usr
May 16 00:34:44.220705 extend-filesystems[1190]: Found vda4
May 16 00:34:44.220705 extend-filesystems[1190]: Found vda6
May 16 00:34:44.220705 extend-filesystems[1190]: Found vda7
May 16 00:34:44.220705 extend-filesystems[1190]: Found vda9
May 16 00:34:44.220705 extend-filesystems[1190]: Checking size of /dev/vda9
May 16 00:34:44.207018 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 16 00:34:44.231137 jq[1203]: true
May 16 00:34:44.210618 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 16 00:34:44.210824 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 16 00:34:44.232471 jq[1208]: true
May 16 00:34:44.211210 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 16 00:34:44.233161 extend-filesystems[1190]: Resized partition /dev/vda9
May 16 00:34:44.211359 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 16 00:34:44.258940 extend-filesystems[1214]: resize2fs 1.46.5 (30-Dec-2021)
May 16 00:34:44.241982 systemd[1]: Started dbus.service.
May 16 00:34:44.241791 dbus-daemon[1188]: [system] SELinux support is enabled
May 16 00:34:44.244665 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 16 00:34:44.244697 systemd[1]: Reached target system-config.target.
May 16 00:34:44.245784 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 16 00:34:44.245806 systemd[1]: Reached target user-config.target.
May 16 00:34:44.262058 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 16 00:34:44.263529 systemd[1]: motdgen.service: Deactivated successfully.
May 16 00:34:44.263692 systemd[1]: Finished motdgen.service.
May 16 00:34:44.285185 systemd-logind[1199]: Watching system buttons on /dev/input/event0 (Power Button)
May 16 00:34:44.285671 systemd-logind[1199]: New seat seat0.
May 16 00:34:44.289121 systemd[1]: Started systemd-logind.service.
May 16 00:34:44.300897 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 16 00:34:44.316397 update_engine[1201]: I0516 00:34:44.316156 1201 main.cc:92] Flatcar Update Engine starting
May 16 00:34:44.321001 extend-filesystems[1214]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 16 00:34:44.321001 extend-filesystems[1214]: old_desc_blocks = 1, new_desc_blocks = 1
May 16 00:34:44.321001 extend-filesystems[1214]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 16 00:34:44.328009 extend-filesystems[1190]: Resized filesystem in /dev/vda9
May 16 00:34:44.328836 bash[1237]: Updated "/home/core/.ssh/authorized_keys"
May 16 00:34:44.324218 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 16 00:34:44.329015 update_engine[1201]: I0516 00:34:44.324296 1201 update_check_scheduler.cc:74] Next update check in 11m23s
May 16 00:34:44.324443 systemd[1]: Finished extend-filesystems.service.
May 16 00:34:44.325823 systemd[1]: Started update-engine.service.
May 16 00:34:44.328604 systemd[1]: Started locksmithd.service.
May 16 00:34:44.330148 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 16 00:34:44.331827 env[1210]: time="2025-05-16T00:34:44.331765480Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 16 00:34:44.348521 env[1210]: time="2025-05-16T00:34:44.348427840Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 16 00:34:44.348622 env[1210]: time="2025-05-16T00:34:44.348585720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 16 00:34:44.349838 env[1210]: time="2025-05-16T00:34:44.349800480Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 16 00:34:44.349892 env[1210]: time="2025-05-16T00:34:44.349837120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 16 00:34:44.350081 env[1210]: time="2025-05-16T00:34:44.350058200Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:34:44.350174 env[1210]: time="2025-05-16T00:34:44.350082520Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 16 00:34:44.350174 env[1210]: time="2025-05-16T00:34:44.350097560Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 16 00:34:44.350174 env[1210]: time="2025-05-16T00:34:44.350107600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 16 00:34:44.350237 env[1210]: time="2025-05-16T00:34:44.350188640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 16 00:34:44.350515 env[1210]: time="2025-05-16T00:34:44.350495640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 16 00:34:44.350641 env[1210]: time="2025-05-16T00:34:44.350622240Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 16 00:34:44.350744 env[1210]: time="2025-05-16T00:34:44.350641680Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 16 00:34:44.350744 env[1210]: time="2025-05-16T00:34:44.350695760Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 16 00:34:44.350744 env[1210]: time="2025-05-16T00:34:44.350706560Z" level=info msg="metadata content store policy set" policy=shared
May 16 00:34:44.355702 env[1210]: time="2025-05-16T00:34:44.355670000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 16 00:34:44.355753 env[1210]: time="2025-05-16T00:34:44.355704960Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 16 00:34:44.355753 env[1210]: time="2025-05-16T00:34:44.355721040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 16 00:34:44.355791 env[1210]: time="2025-05-16T00:34:44.355758080Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 16 00:34:44.355791 env[1210]: time="2025-05-16T00:34:44.355774400Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 16 00:34:44.355791 env[1210]: time="2025-05-16T00:34:44.355788040Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 16 00:34:44.355863 env[1210]: time="2025-05-16T00:34:44.355851800Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 16 00:34:44.356260 env[1210]: time="2025-05-16T00:34:44.356229480Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 16 00:34:44.356260 env[1210]: time="2025-05-16T00:34:44.356255360Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 16 00:34:44.356317 env[1210]: time="2025-05-16T00:34:44.356269080Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 16 00:34:44.356317 env[1210]: time="2025-05-16T00:34:44.356284440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 16 00:34:44.356317 env[1210]: time="2025-05-16T00:34:44.356297920Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 16 00:34:44.356453 env[1210]: time="2025-05-16T00:34:44.356427840Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 16 00:34:44.356554 env[1210]: time="2025-05-16T00:34:44.356531080Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 16 00:34:44.356763 env[1210]: time="2025-05-16T00:34:44.356747000Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 16 00:34:44.356802 env[1210]: time="2025-05-16T00:34:44.356774880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 16 00:34:44.356802 env[1210]: time="2025-05-16T00:34:44.356789280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 16 00:34:44.357115 env[1210]: time="2025-05-16T00:34:44.357048200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357115 env[1210]: time="2025-05-16T00:34:44.357080680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357115 env[1210]: time="2025-05-16T00:34:44.357094000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357115 env[1210]: time="2025-05-16T00:34:44.357107120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357115 env[1210]: time="2025-05-16T00:34:44.357118960Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357241 env[1210]: time="2025-05-16T00:34:44.357132000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357241 env[1210]: time="2025-05-16T00:34:44.357144280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357241 env[1210]: time="2025-05-16T00:34:44.357155680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357241 env[1210]: time="2025-05-16T00:34:44.357168920Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 16 00:34:44.357321 env[1210]: time="2025-05-16T00:34:44.357285120Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357321 env[1210]: time="2025-05-16T00:34:44.357312000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357363 env[1210]: time="2025-05-16T00:34:44.357324400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357363 env[1210]: time="2025-05-16T00:34:44.357336080Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 16 00:34:44.357363 env[1210]: time="2025-05-16T00:34:44.357350240Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 16 00:34:44.357363 env[1210]: time="2025-05-16T00:34:44.357360640Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 16 00:34:44.357456 env[1210]: time="2025-05-16T00:34:44.357382200Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 16 00:34:44.357456 env[1210]: time="2025-05-16T00:34:44.357423280Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 16 00:34:44.357659 env[1210]: time="2025-05-16T00:34:44.357609280Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.357666840Z" level=info msg="Connect containerd service"
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.357695280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.358536680Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.358897880Z" level=info msg="Start subscribing containerd event"
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.358950120Z" level=info msg="Start recovering state"
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.359005400Z" level=info msg="Start event monitor"
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.359023280Z" level=info msg="Start snapshots syncer"
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.359031520Z" level=info msg="Start cni network conf syncer for default"
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.359038360Z" level=info msg="Start streaming server"
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.360372600Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.360434040Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 16 00:34:44.361852 env[1210]: time="2025-05-16T00:34:44.360477400Z" level=info msg="containerd successfully booted in 0.052048s"
May 16 00:34:44.361646 systemd[1]: Started containerd.service.
May 16 00:34:44.382702 locksmithd[1240]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 16 00:34:44.976108 systemd-networkd[1040]: eth0: Gained IPv6LL
May 16 00:34:44.978200 systemd[1]: Finished systemd-networkd-wait-online.service.
May 16 00:34:44.979560 systemd[1]: Reached target network-online.target.
May 16 00:34:44.982142 systemd[1]: Starting kubelet.service...
May 16 00:34:45.540982 systemd[1]: Started kubelet.service.
May 16 00:34:45.968148 kubelet[1253]: E0516 00:34:45.968046 1253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 16 00:34:45.969978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 16 00:34:45.970099 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 16 00:34:46.783000 sshd_keygen[1202]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 16 00:34:46.800461 systemd[1]: Finished sshd-keygen.service.
May 16 00:34:46.802779 systemd[1]: Starting issuegen.service...
May 16 00:34:46.807197 systemd[1]: issuegen.service: Deactivated successfully.
May 16 00:34:46.807350 systemd[1]: Finished issuegen.service.
May 16 00:34:46.809556 systemd[1]: Starting systemd-user-sessions.service...
May 16 00:34:46.815606 systemd[1]: Finished systemd-user-sessions.service.
May 16 00:34:46.818044 systemd[1]: Started getty@tty1.service.
May 16 00:34:46.820181 systemd[1]: Started serial-getty@ttyAMA0.service.
May 16 00:34:46.821236 systemd[1]: Reached target getty.target.
May 16 00:34:46.822103 systemd[1]: Reached target multi-user.target.
May 16 00:34:46.824113 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 16 00:34:46.830532 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 16 00:34:46.830683 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 16 00:34:46.831820 systemd[1]: Startup finished in 582ms (kernel) + 4.274s (initrd) + 5.989s (userspace) = 10.846s.
May 16 00:34:48.849313 systemd[1]: Created slice system-sshd.slice.
May 16 00:34:48.850456 systemd[1]: Started sshd@0-10.0.0.35:22-10.0.0.1:45506.service.
May 16 00:34:48.891392 sshd[1275]: Accepted publickey for core from 10.0.0.1 port 45506 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4
May 16 00:34:48.893577 sshd[1275]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 16 00:34:48.902696 systemd-logind[1199]: New session 1 of user core.
May 16 00:34:48.903572 systemd[1]: Created slice user-500.slice.
May 16 00:34:48.904768 systemd[1]: Starting user-runtime-dir@500.service...
May 16 00:34:48.913220 systemd[1]: Finished user-runtime-dir@500.service.
May 16 00:34:48.914657 systemd[1]: Starting user@500.service...
May 16 00:34:48.917598 (systemd)[1278]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 16 00:34:48.976810 systemd[1278]: Queued start job for default target default.target.
May 16 00:34:48.977351 systemd[1278]: Reached target paths.target.
May 16 00:34:48.977383 systemd[1278]: Reached target sockets.target.
May 16 00:34:48.977394 systemd[1278]: Reached target timers.target.
May 16 00:34:48.977403 systemd[1278]: Reached target basic.target.
May 16 00:34:48.977448 systemd[1278]: Reached target default.target.
May 16 00:34:48.977472 systemd[1278]: Startup finished in 54ms.
May 16 00:34:48.977541 systemd[1]: Started user@500.service.
May 16 00:34:48.978486 systemd[1]: Started session-1.scope.
May 16 00:34:49.028683 systemd[1]: Started sshd@1-10.0.0.35:22-10.0.0.1:45514.service.
May 16 00:34:49.059398 sshd[1287]: Accepted publickey for core from 10.0.0.1 port 45514 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4
May 16 00:34:49.060887 sshd[1287]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 16 00:34:49.065312 systemd-logind[1199]: New session 2 of user core.
May 16 00:34:49.065647 systemd[1]: Started session-2.scope.
May 16 00:34:49.118180 sshd[1287]: pam_unix(sshd:session): session closed for user core
May 16 00:34:49.121730 systemd[1]: Started sshd@2-10.0.0.35:22-10.0.0.1:45520.service.
May 16 00:34:49.122231 systemd[1]: sshd@1-10.0.0.35:22-10.0.0.1:45514.service: Deactivated successfully.
May 16 00:34:49.122862 systemd[1]: session-2.scope: Deactivated successfully.
May 16 00:34:49.123369 systemd-logind[1199]: Session 2 logged out. Waiting for processes to exit.
May 16 00:34:49.124217 systemd-logind[1199]: Removed session 2.
May 16 00:34:49.152522 sshd[1292]: Accepted publickey for core from 10.0.0.1 port 45520 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4
May 16 00:34:49.153658 sshd[1292]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 16 00:34:49.157432 systemd[1]: Started session-3.scope.
May 16 00:34:49.157741 systemd-logind[1199]: New session 3 of user core.
May 16 00:34:49.206284 sshd[1292]: pam_unix(sshd:session): session closed for user core
May 16 00:34:49.208765 systemd[1]: sshd@2-10.0.0.35:22-10.0.0.1:45520.service: Deactivated successfully.
May 16 00:34:49.209302 systemd[1]: session-3.scope: Deactivated successfully.
May 16 00:34:49.209773 systemd-logind[1199]: Session 3 logged out. Waiting for processes to exit.
May 16 00:34:49.210735 systemd[1]: Started sshd@3-10.0.0.35:22-10.0.0.1:45522.service.
May 16 00:34:49.211368 systemd-logind[1199]: Removed session 3.
May 16 00:34:49.241857 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 45522 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4
May 16 00:34:49.243220 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 16 00:34:49.246401 systemd-logind[1199]: New session 4 of user core.
May 16 00:34:49.247197 systemd[1]: Started session-4.scope.
May 16 00:34:49.298997 sshd[1299]: pam_unix(sshd:session): session closed for user core
May 16 00:34:49.301782 systemd[1]: sshd@3-10.0.0.35:22-10.0.0.1:45522.service: Deactivated successfully.
May 16 00:34:49.302353 systemd[1]: session-4.scope: Deactivated successfully.
May 16 00:34:49.302859 systemd-logind[1199]: Session 4 logged out. Waiting for processes to exit.
May 16 00:34:49.303826 systemd[1]: Started sshd@4-10.0.0.35:22-10.0.0.1:45530.service.
May 16 00:34:49.304446 systemd-logind[1199]: Removed session 4.
May 16 00:34:49.335966 sshd[1305]: Accepted publickey for core from 10.0.0.1 port 45530 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4
May 16 00:34:49.337447 sshd[1305]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 16 00:34:49.340682 systemd-logind[1199]: New session 5 of user core.
May 16 00:34:49.341453 systemd[1]: Started session-5.scope.
May 16 00:34:49.399510 sudo[1308]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 16 00:34:49.399724 sudo[1308]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
May 16 00:34:49.411336 systemd[1]: Starting coreos-metadata.service...
May 16 00:34:49.417753 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 16 00:34:49.417921 systemd[1]: Finished coreos-metadata.service.
May 16 00:34:49.931446 systemd[1]: Stopped kubelet.service.
May 16 00:34:49.933913 systemd[1]: Starting kubelet.service...
May 16 00:34:49.958816 systemd[1]: Reloading.
May 16 00:34:50.024231 /usr/lib/systemd/system-generators/torcx-generator[1366]: time="2025-05-16T00:34:50Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 16 00:34:50.024262 /usr/lib/systemd/system-generators/torcx-generator[1366]: time="2025-05-16T00:34:50Z" level=info msg="torcx already run"
May 16 00:34:50.117169 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 16 00:34:50.117190 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 16 00:34:50.132885 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 16 00:34:50.205291 systemd[1]: Started kubelet.service.
May 16 00:34:50.208693 systemd[1]: Stopping kubelet.service...
May 16 00:34:50.209355 systemd[1]: kubelet.service: Deactivated successfully.
May 16 00:34:50.209533 systemd[1]: Stopped kubelet.service.
May 16 00:34:50.210992 systemd[1]: Starting kubelet.service...
May 16 00:34:50.304588 systemd[1]: Started kubelet.service.
May 16 00:34:50.342802 kubelet[1415]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 00:34:50.342802 kubelet[1415]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 16 00:34:50.342802 kubelet[1415]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 16 00:34:50.343190 kubelet[1415]: I0516 00:34:50.342861 1415 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 16 00:34:50.961987 kubelet[1415]: I0516 00:34:50.961943 1415 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 16 00:34:50.961987 kubelet[1415]: I0516 00:34:50.961979 1415 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 16 00:34:50.962242 kubelet[1415]: I0516 00:34:50.962215 1415 server.go:934] "Client rotation is on, will bootstrap in background"
May 16 00:34:51.019404 kubelet[1415]: I0516 00:34:51.019332 1415 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 16 00:34:51.027835 kubelet[1415]: E0516 00:34:51.027799 1415 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 16 00:34:51.027835 kubelet[1415]: I0516 00:34:51.027830 1415 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 16 00:34:51.031630 kubelet[1415]: I0516 00:34:51.031591 1415 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 16 00:34:51.034760 kubelet[1415]: I0516 00:34:51.034721 1415 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 16 00:34:51.035106 kubelet[1415]: I0516 00:34:51.035075 1415 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 16 00:34:51.035434 kubelet[1415]: I0516 00:34:51.035171 1415 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.35","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 16 00:34:51.035782 kubelet[1415]: I0516 00:34:51.035766 1415 topology_manager.go:138] "Creating topology manager with none policy"
May 16 00:34:51.035850 kubelet[1415]: I0516 00:34:51.035840 1415 container_manager_linux.go:300] "Creating device plugin manager"
May 16 00:34:51.036430 kubelet[1415]: I0516 00:34:51.036413 1415 state_mem.go:36] "Initialized new in-memory state store"
May 16 00:34:51.039522 kubelet[1415]: I0516 00:34:51.039499 1415 kubelet.go:408] "Attempting to sync node with API server"
May 16 00:34:51.039644 kubelet[1415]: I0516 00:34:51.039632 1415 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 16 00:34:51.039732 kubelet[1415]: I0516 00:34:51.039722 1415 kubelet.go:314] "Adding apiserver pod source"
May 16 00:34:51.039989 kubelet[1415]: I0516 00:34:51.039977 1415 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 16 00:34:51.040153 kubelet[1415]: E0516 00:34:51.039957 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:34:51.040153 kubelet[1415]: E0516 00:34:51.039945 1415 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 16 00:34:51.043981 kubelet[1415]: I0516 00:34:51.043962 1415 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
May 16 00:34:51.045336 kubelet[1415]: I0516 00:34:51.045315 1415 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 16 00:34:51.045886 kubelet[1415]: W0516 00:34:51.045863 1415 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 16 00:34:51.047699 kubelet[1415]: I0516 00:34:51.047679 1415 server.go:1274] "Started kubelet" May 16 00:34:51.059804 kubelet[1415]: I0516 00:34:51.059754 1415 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:34:51.061736 kubelet[1415]: I0516 00:34:51.061713 1415 server.go:449] "Adding debug handlers to kubelet server" May 16 00:34:51.062936 kubelet[1415]: I0516 00:34:51.062872 1415 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:34:51.063265 kubelet[1415]: I0516 00:34:51.063246 1415 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:34:51.065433 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). May 16 00:34:51.065635 kubelet[1415]: I0516 00:34:51.065615 1415 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:34:51.069792 kubelet[1415]: I0516 00:34:51.069759 1415 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:34:51.070689 kubelet[1415]: W0516 00:34:51.070663 1415 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 16 00:34:51.070868 kubelet[1415]: E0516 00:34:51.070840 1415 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 16 00:34:51.072207 kubelet[1415]: I0516 00:34:51.072180 1415 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:34:51.072294 kubelet[1415]: I0516 00:34:51.072278 1415 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:34:51.072363 kubelet[1415]: I0516 00:34:51.072349 1415 reconciler.go:26] "Reconciler: start to sync state" May 16 00:34:51.074359 kubelet[1415]: E0516 00:34:51.074316 1415 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.35\" not found" May 16 00:34:51.079743 kubelet[1415]: E0516 00:34:51.079714 1415 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:34:51.079830 kubelet[1415]: I0516 00:34:51.079791 1415 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:34:51.081769 kubelet[1415]: I0516 00:34:51.081727 1415 factory.go:221] Registration of the containerd container factory successfully May 16 00:34:51.081769 kubelet[1415]: I0516 00:34:51.081764 1415 factory.go:221] Registration of the systemd container factory successfully May 16 00:34:51.090026 kubelet[1415]: E0516 00:34:51.089983 1415 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.35\" not found" node="10.0.0.35" May 16 00:34:51.091313 kubelet[1415]: I0516 00:34:51.091292 1415 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 00:34:51.091411 kubelet[1415]: I0516 00:34:51.091398 1415 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:34:51.091470 kubelet[1415]: I0516 00:34:51.091460 1415 state_mem.go:36] "Initialized new in-memory state store" May 16 00:34:51.163847 kubelet[1415]: I0516 00:34:51.163816 1415 policy_none.go:49] "None policy: Start" May 16 00:34:51.164815 kubelet[1415]: I0516 00:34:51.164796 1415 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:34:51.164908 kubelet[1415]: I0516 00:34:51.164827 1415 
state_mem.go:35] "Initializing new in-memory state store" May 16 00:34:51.172854 systemd[1]: Created slice kubepods.slice. May 16 00:34:51.174485 kubelet[1415]: E0516 00:34:51.174454 1415 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.35\" not found" May 16 00:34:51.176580 systemd[1]: Created slice kubepods-burstable.slice. May 16 00:34:51.179111 systemd[1]: Created slice kubepods-besteffort.slice. May 16 00:34:51.188705 kubelet[1415]: I0516 00:34:51.188670 1415 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:34:51.188863 kubelet[1415]: I0516 00:34:51.188835 1415 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:34:51.188911 kubelet[1415]: I0516 00:34:51.188868 1415 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:34:51.189324 kubelet[1415]: I0516 00:34:51.189084 1415 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:34:51.190704 kubelet[1415]: E0516 00:34:51.190677 1415 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.35\" not found" May 16 00:34:51.240524 kubelet[1415]: I0516 00:34:51.240403 1415 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:34:51.241672 kubelet[1415]: I0516 00:34:51.241649 1415 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 00:34:51.241722 kubelet[1415]: I0516 00:34:51.241678 1415 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:34:51.241722 kubelet[1415]: I0516 00:34:51.241698 1415 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:34:51.241966 kubelet[1415]: E0516 00:34:51.241746 1415 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 16 00:34:51.290615 kubelet[1415]: I0516 00:34:51.290582 1415 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.35" May 16 00:34:51.295108 kubelet[1415]: I0516 00:34:51.295082 1415 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.35" May 16 00:34:51.295108 kubelet[1415]: E0516 00:34:51.295112 1415 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.35\": node \"10.0.0.35\" not found" May 16 00:34:51.336546 sudo[1308]: pam_unix(sudo:session): session closed for user root May 16 00:34:51.338414 sshd[1305]: pam_unix(sshd:session): session closed for user core May 16 00:34:51.340778 systemd[1]: sshd@4-10.0.0.35:22-10.0.0.1:45530.service: Deactivated successfully. May 16 00:34:51.341456 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:34:51.341939 systemd-logind[1199]: Session 5 logged out. Waiting for processes to exit. May 16 00:34:51.342605 systemd-logind[1199]: Removed session 5. May 16 00:34:51.407361 kubelet[1415]: I0516 00:34:51.407333 1415 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 16 00:34:51.407676 env[1210]: time="2025-05-16T00:34:51.407633104Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 16 00:34:51.407943 kubelet[1415]: I0516 00:34:51.407923 1415 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 16 00:34:51.964543 kubelet[1415]: I0516 00:34:51.964491 1415 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 16 00:34:51.964898 kubelet[1415]: W0516 00:34:51.964867 1415 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 00:34:51.964962 kubelet[1415]: W0516 00:34:51.964867 1415 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 16 00:34:52.040427 kubelet[1415]: I0516 00:34:52.040382 1415 apiserver.go:52] "Watching apiserver" May 16 00:34:52.040538 kubelet[1415]: E0516 00:34:52.040394 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:34:52.049074 systemd[1]: Created slice kubepods-besteffort-pod76252b00_88ca_48a9_89d4_dad174fb9ce3.slice. May 16 00:34:52.063698 systemd[1]: Created slice kubepods-burstable-podd16cbea0_5db6_4553_8710_487c5c45fcf3.slice. 
May 16 00:34:52.073515 kubelet[1415]: I0516 00:34:52.073476 1415 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:34:52.079684 kubelet[1415]: I0516 00:34:52.079649 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-hostproc\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.079759 kubelet[1415]: I0516 00:34:52.079701 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-cgroup\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.079759 kubelet[1415]: I0516 00:34:52.079738 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cni-path\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.079803 kubelet[1415]: I0516 00:34:52.079767 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-host-proc-sys-net\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.079837 kubelet[1415]: I0516 00:34:52.079817 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w24w7\" (UniqueName: \"kubernetes.io/projected/d16cbea0-5db6-4553-8710-487c5c45fcf3-kube-api-access-w24w7\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " 
pod="kube-system/cilium-r92wd" May 16 00:34:52.079864 kubelet[1415]: I0516 00:34:52.079842 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/76252b00-88ca-48a9-89d4-dad174fb9ce3-kube-proxy\") pod \"kube-proxy-j8f5k\" (UID: \"76252b00-88ca-48a9-89d4-dad174fb9ce3\") " pod="kube-system/kube-proxy-j8f5k" May 16 00:34:52.079864 kubelet[1415]: I0516 00:34:52.079858 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-bpf-maps\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.079940 kubelet[1415]: I0516 00:34:52.079889 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-xtables-lock\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.079940 kubelet[1415]: I0516 00:34:52.079908 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlzvd\" (UniqueName: \"kubernetes.io/projected/76252b00-88ca-48a9-89d4-dad174fb9ce3-kube-api-access-jlzvd\") pod \"kube-proxy-j8f5k\" (UID: \"76252b00-88ca-48a9-89d4-dad174fb9ce3\") " pod="kube-system/kube-proxy-j8f5k" May 16 00:34:52.079940 kubelet[1415]: I0516 00:34:52.079923 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-lib-modules\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.079940 kubelet[1415]: I0516 00:34:52.079939 1415 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d16cbea0-5db6-4553-8710-487c5c45fcf3-hubble-tls\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.080025 kubelet[1415]: I0516 00:34:52.079955 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/76252b00-88ca-48a9-89d4-dad174fb9ce3-lib-modules\") pod \"kube-proxy-j8f5k\" (UID: \"76252b00-88ca-48a9-89d4-dad174fb9ce3\") " pod="kube-system/kube-proxy-j8f5k" May 16 00:34:52.080025 kubelet[1415]: I0516 00:34:52.079969 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d16cbea0-5db6-4553-8710-487c5c45fcf3-clustermesh-secrets\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.080025 kubelet[1415]: I0516 00:34:52.079986 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-etc-cni-netd\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.080025 kubelet[1415]: I0516 00:34:52.080000 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-config-path\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.080025 kubelet[1415]: I0516 00:34:52.080024 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-host-proc-sys-kernel\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.080124 kubelet[1415]: I0516 00:34:52.080040 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/76252b00-88ca-48a9-89d4-dad174fb9ce3-xtables-lock\") pod \"kube-proxy-j8f5k\" (UID: \"76252b00-88ca-48a9-89d4-dad174fb9ce3\") " pod="kube-system/kube-proxy-j8f5k" May 16 00:34:52.080124 kubelet[1415]: I0516 00:34:52.080054 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-run\") pod \"cilium-r92wd\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " pod="kube-system/cilium-r92wd" May 16 00:34:52.182393 kubelet[1415]: I0516 00:34:52.182347 1415 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 16 00:34:52.362760 kubelet[1415]: E0516 00:34:52.362652 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:52.363983 env[1210]: time="2025-05-16T00:34:52.363926036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8f5k,Uid:76252b00-88ca-48a9-89d4-dad174fb9ce3,Namespace:kube-system,Attempt:0,}" May 16 00:34:52.375419 kubelet[1415]: E0516 00:34:52.375391 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:52.376076 env[1210]: time="2025-05-16T00:34:52.376023261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r92wd,Uid:d16cbea0-5db6-4553-8710-487c5c45fcf3,Namespace:kube-system,Attempt:0,}" May 16 00:34:53.022527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1686947106.mount: Deactivated successfully. 
May 16 00:34:53.028599 env[1210]: time="2025-05-16T00:34:53.028560470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:53.029370 env[1210]: time="2025-05-16T00:34:53.029349293Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:53.031549 env[1210]: time="2025-05-16T00:34:53.031518765Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:53.032776 env[1210]: time="2025-05-16T00:34:53.032750894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:53.033557 env[1210]: time="2025-05-16T00:34:53.033529895Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:53.035538 env[1210]: time="2025-05-16T00:34:53.035504143Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:53.036218 env[1210]: time="2025-05-16T00:34:53.036192810Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:53.039506 env[1210]: time="2025-05-16T00:34:53.039468181Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:53.041009 kubelet[1415]: E0516 00:34:53.040982 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:34:53.072626 env[1210]: time="2025-05-16T00:34:53.072554107Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:53.072626 env[1210]: time="2025-05-16T00:34:53.072598840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:53.072626 env[1210]: time="2025-05-16T00:34:53.072613120Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:53.073205 env[1210]: time="2025-05-16T00:34:53.073158752Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c pid=1478 runtime=io.containerd.runc.v2 May 16 00:34:53.073421 env[1210]: time="2025-05-16T00:34:53.073373029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:53.073468 env[1210]: time="2025-05-16T00:34:53.073408531Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:53.073468 env[1210]: time="2025-05-16T00:34:53.073439260Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:53.073622 env[1210]: time="2025-05-16T00:34:53.073570777Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f087d45667da259c2e00f79ecac8f08a6b95a57d1f383aa6f49ab85c3695ca94 pid=1479 runtime=io.containerd.runc.v2 May 16 00:34:53.100163 systemd[1]: Started cri-containerd-a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c.scope. May 16 00:34:53.101432 systemd[1]: Started cri-containerd-f087d45667da259c2e00f79ecac8f08a6b95a57d1f383aa6f49ab85c3695ca94.scope. May 16 00:34:53.139580 env[1210]: time="2025-05-16T00:34:53.139528590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-r92wd,Uid:d16cbea0-5db6-4553-8710-487c5c45fcf3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\"" May 16 00:34:53.140889 kubelet[1415]: E0516 00:34:53.140838 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:53.142215 env[1210]: time="2025-05-16T00:34:53.142164642Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:34:53.144440 env[1210]: time="2025-05-16T00:34:53.144400029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-j8f5k,Uid:76252b00-88ca-48a9-89d4-dad174fb9ce3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f087d45667da259c2e00f79ecac8f08a6b95a57d1f383aa6f49ab85c3695ca94\"" May 16 00:34:53.145416 kubelet[1415]: E0516 00:34:53.145394 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:54.042039 kubelet[1415]: E0516 00:34:54.041994 1415 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:34:55.042917 kubelet[1415]: E0516 00:34:55.042842 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:34:56.043254 kubelet[1415]: E0516 00:34:56.043202 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:34:57.044142 kubelet[1415]: E0516 00:34:57.044089 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:34:58.044363 kubelet[1415]: E0516 00:34:58.044319 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:34:58.336543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031933023.mount: Deactivated successfully. May 16 00:34:59.044542 kubelet[1415]: E0516 00:34:59.044499 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:00.045413 kubelet[1415]: E0516 00:35:00.045364 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:00.709021 env[1210]: time="2025-05-16T00:35:00.708966977Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:00.710434 env[1210]: time="2025-05-16T00:35:00.710395858Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:00.713828 env[1210]: time="2025-05-16T00:35:00.713774737Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:00.714632 env[1210]: time="2025-05-16T00:35:00.714593986Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 16 00:35:00.716650 env[1210]: time="2025-05-16T00:35:00.716566779Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 16 00:35:00.717494 env[1210]: time="2025-05-16T00:35:00.717453738Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:35:00.729263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1194190903.mount: Deactivated successfully. May 16 00:35:00.737481 env[1210]: time="2025-05-16T00:35:00.737413640Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\"" May 16 00:35:00.739263 env[1210]: time="2025-05-16T00:35:00.739194680Z" level=info msg="StartContainer for \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\"" May 16 00:35:00.760928 systemd[1]: Started cri-containerd-68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0.scope. 
May 16 00:35:00.803544 env[1210]: time="2025-05-16T00:35:00.803497089Z" level=info msg="StartContainer for \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\" returns successfully" May 16 00:35:00.838965 systemd[1]: cri-containerd-68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0.scope: Deactivated successfully. May 16 00:35:00.956715 env[1210]: time="2025-05-16T00:35:00.956656612Z" level=info msg="shim disconnected" id=68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0 May 16 00:35:00.956715 env[1210]: time="2025-05-16T00:35:00.956703476Z" level=warning msg="cleaning up after shim disconnected" id=68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0 namespace=k8s.io May 16 00:35:00.956715 env[1210]: time="2025-05-16T00:35:00.956714019Z" level=info msg="cleaning up dead shim" May 16 00:35:00.964668 env[1210]: time="2025-05-16T00:35:00.964522840Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1595 runtime=io.containerd.runc.v2\n" May 16 00:35:01.045592 kubelet[1415]: E0516 00:35:01.045542 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:01.262685 kubelet[1415]: E0516 00:35:01.262578 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:01.265468 env[1210]: time="2025-05-16T00:35:01.265418268Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:35:01.276889 env[1210]: time="2025-05-16T00:35:01.276823558Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\"" May 16 00:35:01.277352 env[1210]: time="2025-05-16T00:35:01.277287543Z" level=info msg="StartContainer for \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\"" May 16 00:35:01.293377 systemd[1]: Started cri-containerd-292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08.scope. May 16 00:35:01.327859 env[1210]: time="2025-05-16T00:35:01.327805813Z" level=info msg="StartContainer for \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\" returns successfully" May 16 00:35:01.342769 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:35:01.342992 systemd[1]: Stopped systemd-sysctl.service. May 16 00:35:01.343274 systemd[1]: Stopping systemd-sysctl.service... May 16 00:35:01.344913 systemd[1]: Starting systemd-sysctl.service... May 16 00:35:01.348493 systemd[1]: cri-containerd-292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08.scope: Deactivated successfully. May 16 00:35:01.352580 systemd[1]: Finished systemd-sysctl.service. 
May 16 00:35:01.375608 env[1210]: time="2025-05-16T00:35:01.375560751Z" level=info msg="shim disconnected" id=292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08 May 16 00:35:01.375948 env[1210]: time="2025-05-16T00:35:01.375925769Z" level=warning msg="cleaning up after shim disconnected" id=292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08 namespace=k8s.io May 16 00:35:01.376036 env[1210]: time="2025-05-16T00:35:01.376022388Z" level=info msg="cleaning up dead shim" May 16 00:35:01.382995 env[1210]: time="2025-05-16T00:35:01.382950723Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1660 runtime=io.containerd.runc.v2\n" May 16 00:35:01.726400 systemd[1]: run-containerd-runc-k8s.io-68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0-runc.aizBge.mount: Deactivated successfully. May 16 00:35:01.726494 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0-rootfs.mount: Deactivated successfully. May 16 00:35:02.035465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3996345270.mount: Deactivated successfully. 
May 16 00:35:02.046349 kubelet[1415]: E0516 00:35:02.046311 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:02.265406 kubelet[1415]: E0516 00:35:02.265358 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:02.267283 env[1210]: time="2025-05-16T00:35:02.267234225Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:35:02.292805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362422817.mount: Deactivated successfully. May 16 00:35:02.300698 env[1210]: time="2025-05-16T00:35:02.300641207Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\"" May 16 00:35:02.301228 env[1210]: time="2025-05-16T00:35:02.301168726Z" level=info msg="StartContainer for \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\"" May 16 00:35:02.316721 systemd[1]: Started cri-containerd-c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875.scope. May 16 00:35:02.363702 env[1210]: time="2025-05-16T00:35:02.363652243Z" level=info msg="StartContainer for \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\" returns successfully" May 16 00:35:02.381516 systemd[1]: cri-containerd-c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875.scope: Deactivated successfully. 
May 16 00:35:02.506691 env[1210]: time="2025-05-16T00:35:02.506642024Z" level=info msg="shim disconnected" id=c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875 May 16 00:35:02.506691 env[1210]: time="2025-05-16T00:35:02.506687354Z" level=warning msg="cleaning up after shim disconnected" id=c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875 namespace=k8s.io May 16 00:35:02.506691 env[1210]: time="2025-05-16T00:35:02.506708466Z" level=info msg="cleaning up dead shim" May 16 00:35:02.514700 env[1210]: time="2025-05-16T00:35:02.514644865Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1717 runtime=io.containerd.runc.v2\n" May 16 00:35:02.515299 env[1210]: time="2025-05-16T00:35:02.515270415Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:02.517621 env[1210]: time="2025-05-16T00:35:02.517587345Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:02.519354 env[1210]: time="2025-05-16T00:35:02.519323380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:02.520964 env[1210]: time="2025-05-16T00:35:02.520933461Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:02.521387 env[1210]: time="2025-05-16T00:35:02.521357093Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference 
\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 16 00:35:02.523636 env[1210]: time="2025-05-16T00:35:02.523591807Z" level=info msg="CreateContainer within sandbox \"f087d45667da259c2e00f79ecac8f08a6b95a57d1f383aa6f49ab85c3695ca94\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:35:02.534682 env[1210]: time="2025-05-16T00:35:02.534629663Z" level=info msg="CreateContainer within sandbox \"f087d45667da259c2e00f79ecac8f08a6b95a57d1f383aa6f49ab85c3695ca94\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f32a71db47704ce58a5e81661ef4bca87881596779f3139ce19b19726b759808\"" May 16 00:35:02.535351 env[1210]: time="2025-05-16T00:35:02.535320380Z" level=info msg="StartContainer for \"f32a71db47704ce58a5e81661ef4bca87881596779f3139ce19b19726b759808\"" May 16 00:35:02.550559 systemd[1]: Started cri-containerd-f32a71db47704ce58a5e81661ef4bca87881596779f3139ce19b19726b759808.scope. May 16 00:35:02.590043 env[1210]: time="2025-05-16T00:35:02.589982498Z" level=info msg="StartContainer for \"f32a71db47704ce58a5e81661ef4bca87881596779f3139ce19b19726b759808\" returns successfully" May 16 00:35:02.726216 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3655288770.mount: Deactivated successfully. 
May 16 00:35:03.047082 kubelet[1415]: E0516 00:35:03.047030 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:03.269295 kubelet[1415]: E0516 00:35:03.269135 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:03.271394 kubelet[1415]: E0516 00:35:03.271369 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:03.271737 env[1210]: time="2025-05-16T00:35:03.271695576Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:35:03.300870 env[1210]: time="2025-05-16T00:35:03.300557956Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\"" May 16 00:35:03.301395 env[1210]: time="2025-05-16T00:35:03.301349268Z" level=info msg="StartContainer for \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\"" May 16 00:35:03.323055 systemd[1]: Started cri-containerd-1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9.scope. May 16 00:35:03.365512 systemd[1]: cri-containerd-1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9.scope: Deactivated successfully. 
May 16 00:35:03.371438 env[1210]: time="2025-05-16T00:35:03.371375371Z" level=info msg="StartContainer for \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\" returns successfully" May 16 00:35:03.407446 env[1210]: time="2025-05-16T00:35:03.407399891Z" level=info msg="shim disconnected" id=1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9 May 16 00:35:03.407756 env[1210]: time="2025-05-16T00:35:03.407734032Z" level=warning msg="cleaning up after shim disconnected" id=1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9 namespace=k8s.io May 16 00:35:03.407840 env[1210]: time="2025-05-16T00:35:03.407825020Z" level=info msg="cleaning up dead shim" May 16 00:35:03.414885 env[1210]: time="2025-05-16T00:35:03.414831089Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1943 runtime=io.containerd.runc.v2\n" May 16 00:35:03.725607 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9-rootfs.mount: Deactivated successfully. 
May 16 00:35:04.047575 kubelet[1415]: E0516 00:35:04.047519 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:04.274654 kubelet[1415]: E0516 00:35:04.274599 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:04.275603 kubelet[1415]: E0516 00:35:04.275313 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:04.277667 env[1210]: time="2025-05-16T00:35:04.277620394Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:35:04.292376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1119794425.mount: Deactivated successfully. 
May 16 00:35:04.297012 kubelet[1415]: I0516 00:35:04.296899 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-j8f5k" podStartSLOduration=3.920370781 podStartE2EDuration="13.296870865s" podCreationTimestamp="2025-05-16 00:34:51 +0000 UTC" firstStartedPulling="2025-05-16 00:34:53.145821544 +0000 UTC m=+2.838149721" lastFinishedPulling="2025-05-16 00:35:02.522321628 +0000 UTC m=+12.214649805" observedRunningTime="2025-05-16 00:35:03.302708507 +0000 UTC m=+12.995036684" watchObservedRunningTime="2025-05-16 00:35:04.296870865 +0000 UTC m=+13.989199042" May 16 00:35:04.298615 env[1210]: time="2025-05-16T00:35:04.298282838Z" level=info msg="CreateContainer within sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\"" May 16 00:35:04.299357 env[1210]: time="2025-05-16T00:35:04.299305175Z" level=info msg="StartContainer for \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\"" May 16 00:35:04.316490 systemd[1]: Started cri-containerd-087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3.scope. May 16 00:35:04.369674 env[1210]: time="2025-05-16T00:35:04.369627262Z" level=info msg="StartContainer for \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\" returns successfully" May 16 00:35:04.526675 kubelet[1415]: I0516 00:35:04.525998 1415 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 00:35:04.879918 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 16 00:35:05.047944 kubelet[1415]: E0516 00:35:05.047899 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:05.145906 kernel: Initializing XFRM netlink socket May 16 00:35:05.149906 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 16 00:35:05.279248 kubelet[1415]: E0516 00:35:05.279206 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:06.048100 kubelet[1415]: E0516 00:35:06.048053 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:06.280563 kubelet[1415]: E0516 00:35:06.280534 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:06.765737 systemd-networkd[1040]: cilium_host: Link UP May 16 00:35:06.766307 systemd-networkd[1040]: cilium_net: Link UP May 16 00:35:06.768360 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 16 00:35:06.768445 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 16 00:35:06.768861 systemd-networkd[1040]: cilium_net: Gained carrier May 16 00:35:06.769082 systemd-networkd[1040]: cilium_host: Gained carrier May 16 00:35:06.856402 systemd-networkd[1040]: cilium_vxlan: Link UP May 16 00:35:06.856417 systemd-networkd[1040]: cilium_vxlan: Gained carrier May 16 00:35:07.049200 kubelet[1415]: E0516 00:35:07.049134 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:07.172983 kubelet[1415]: I0516 00:35:07.172915 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-r92wd" 
podStartSLOduration=8.598829275 podStartE2EDuration="16.172894368s" podCreationTimestamp="2025-05-16 00:34:51 +0000 UTC" firstStartedPulling="2025-05-16 00:34:53.141617432 +0000 UTC m=+2.833945608" lastFinishedPulling="2025-05-16 00:35:00.715682564 +0000 UTC m=+10.408010701" observedRunningTime="2025-05-16 00:35:05.295826393 +0000 UTC m=+14.988154570" watchObservedRunningTime="2025-05-16 00:35:07.172894368 +0000 UTC m=+16.865222545" May 16 00:35:07.178163 systemd[1]: Created slice kubepods-besteffort-pod6030f231_5245_4c8a_b213_8ff6ce882304.slice. May 16 00:35:07.179964 kernel: NET: Registered PF_ALG protocol family May 16 00:35:07.279771 kubelet[1415]: I0516 00:35:07.279724 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pplb7\" (UniqueName: \"kubernetes.io/projected/6030f231-5245-4c8a-b213-8ff6ce882304-kube-api-access-pplb7\") pod \"nginx-deployment-8587fbcb89-k5xn4\" (UID: \"6030f231-5245-4c8a-b213-8ff6ce882304\") " pod="default/nginx-deployment-8587fbcb89-k5xn4" May 16 00:35:07.282319 kubelet[1415]: E0516 00:35:07.282287 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:07.481678 env[1210]: time="2025-05-16T00:35:07.481552299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-k5xn4,Uid:6030f231-5245-4c8a-b213-8ff6ce882304,Namespace:default,Attempt:0,}" May 16 00:35:07.568089 systemd-networkd[1040]: cilium_net: Gained IPv6LL May 16 00:35:07.810075 systemd-networkd[1040]: lxc_health: Link UP May 16 00:35:07.819908 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:35:07.821397 systemd-networkd[1040]: lxc_health: Gained carrier May 16 00:35:07.823970 systemd-networkd[1040]: cilium_host: Gained IPv6LL May 16 00:35:08.033632 systemd-networkd[1040]: lxc1f1acfa4ddef: Link UP May 16 00:35:08.047935 
kernel: eth0: renamed from tmpf86f3 May 16 00:35:08.050175 kubelet[1415]: E0516 00:35:08.050116 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:08.052530 systemd-networkd[1040]: lxc1f1acfa4ddef: Gained carrier May 16 00:35:08.052926 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1f1acfa4ddef: link becomes ready May 16 00:35:08.144310 systemd-networkd[1040]: cilium_vxlan: Gained IPv6LL May 16 00:35:08.284007 kubelet[1415]: E0516 00:35:08.283781 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:09.051262 kubelet[1415]: E0516 00:35:09.051206 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:09.285044 kubelet[1415]: E0516 00:35:09.284918 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:09.424303 systemd-networkd[1040]: lxc1f1acfa4ddef: Gained IPv6LL May 16 00:35:09.552245 systemd-networkd[1040]: lxc_health: Gained IPv6LL May 16 00:35:10.052296 kubelet[1415]: E0516 00:35:10.052257 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:10.286209 kubelet[1415]: E0516 00:35:10.286177 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:11.040089 kubelet[1415]: E0516 00:35:11.040039 1415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:11.053565 kubelet[1415]: E0516 00:35:11.053521 1415 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:11.803359 env[1210]: time="2025-05-16T00:35:11.803282057Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:35:11.803359 env[1210]: time="2025-05-16T00:35:11.803323006Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:35:11.803359 env[1210]: time="2025-05-16T00:35:11.803352889Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:35:11.803750 env[1210]: time="2025-05-16T00:35:11.803552239Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f86f320d0f82ac85c0af5a6924711198eac248f8658232695508d36bc09841ac pid=2489 runtime=io.containerd.runc.v2 May 16 00:35:11.816947 systemd[1]: Started cri-containerd-f86f320d0f82ac85c0af5a6924711198eac248f8658232695508d36bc09841ac.scope. 
May 16 00:35:11.880588 systemd-resolved[1151]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:35:11.898560 env[1210]: time="2025-05-16T00:35:11.898512156Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-k5xn4,Uid:6030f231-5245-4c8a-b213-8ff6ce882304,Namespace:default,Attempt:0,} returns sandbox id \"f86f320d0f82ac85c0af5a6924711198eac248f8658232695508d36bc09841ac\"" May 16 00:35:11.900176 env[1210]: time="2025-05-16T00:35:11.900145792Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 00:35:12.054733 kubelet[1415]: E0516 00:35:12.054586 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:13.054725 kubelet[1415]: E0516 00:35:13.054675 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:14.055227 kubelet[1415]: E0516 00:35:14.055173 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:14.137362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3346488113.mount: Deactivated successfully. 
May 16 00:35:15.055686 kubelet[1415]: E0516 00:35:15.055641 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:15.352109 env[1210]: time="2025-05-16T00:35:15.352010221Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:15.353475 env[1210]: time="2025-05-16T00:35:15.353425943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:15.355962 env[1210]: time="2025-05-16T00:35:15.355929828Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:15.357616 env[1210]: time="2025-05-16T00:35:15.357589132Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:15.359140 env[1210]: time="2025-05-16T00:35:15.359106420Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 16 00:35:15.360944 env[1210]: time="2025-05-16T00:35:15.360913255Z" level=info msg="CreateContainer within sandbox \"f86f320d0f82ac85c0af5a6924711198eac248f8658232695508d36bc09841ac\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 16 00:35:15.370233 env[1210]: time="2025-05-16T00:35:15.370188177Z" level=info msg="CreateContainer within sandbox \"f86f320d0f82ac85c0af5a6924711198eac248f8658232695508d36bc09841ac\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"5f9395a2992574e18d7331686fbd61447e4441fa034eafb3dadae9d5a22ebf47\"" May 16 00:35:15.370881 env[1210]: time="2025-05-16T00:35:15.370834184Z" level=info msg="StartContainer for \"5f9395a2992574e18d7331686fbd61447e4441fa034eafb3dadae9d5a22ebf47\"" May 16 00:35:15.387913 systemd[1]: Started cri-containerd-5f9395a2992574e18d7331686fbd61447e4441fa034eafb3dadae9d5a22ebf47.scope. May 16 00:35:15.421055 env[1210]: time="2025-05-16T00:35:15.421014043Z" level=info msg="StartContainer for \"5f9395a2992574e18d7331686fbd61447e4441fa034eafb3dadae9d5a22ebf47\" returns successfully" May 16 00:35:16.056044 kubelet[1415]: E0516 00:35:16.055996 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:16.305528 kubelet[1415]: I0516 00:35:16.305462 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-k5xn4" podStartSLOduration=5.8455073859999995 podStartE2EDuration="9.305447643s" podCreationTimestamp="2025-05-16 00:35:07 +0000 UTC" firstStartedPulling="2025-05-16 00:35:11.899890672 +0000 UTC m=+21.592218809" lastFinishedPulling="2025-05-16 00:35:15.359830889 +0000 UTC m=+25.052159066" observedRunningTime="2025-05-16 00:35:16.304784538 +0000 UTC m=+25.997112715" watchObservedRunningTime="2025-05-16 00:35:16.305447643 +0000 UTC m=+25.997775820" May 16 00:35:17.056475 kubelet[1415]: E0516 00:35:17.056432 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:18.056909 kubelet[1415]: E0516 00:35:18.056853 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:19.057275 kubelet[1415]: E0516 00:35:19.057229 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:19.388534 systemd[1]: Created slice 
kubepods-besteffort-pod866fffee_2805_4dca_ace0_7605bb50a2d4.slice. May 16 00:35:19.444596 kubelet[1415]: I0516 00:35:19.444534 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/866fffee-2805-4dca-ace0-7605bb50a2d4-data\") pod \"nfs-server-provisioner-0\" (UID: \"866fffee-2805-4dca-ace0-7605bb50a2d4\") " pod="default/nfs-server-provisioner-0" May 16 00:35:19.444596 kubelet[1415]: I0516 00:35:19.444594 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmmk5\" (UniqueName: \"kubernetes.io/projected/866fffee-2805-4dca-ace0-7605bb50a2d4-kube-api-access-xmmk5\") pod \"nfs-server-provisioner-0\" (UID: \"866fffee-2805-4dca-ace0-7605bb50a2d4\") " pod="default/nfs-server-provisioner-0" May 16 00:35:19.691192 env[1210]: time="2025-05-16T00:35:19.691085597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:866fffee-2805-4dca-ace0-7605bb50a2d4,Namespace:default,Attempt:0,}" May 16 00:35:19.722078 systemd-networkd[1040]: lxce645458b3a01: Link UP May 16 00:35:19.731946 kernel: eth0: renamed from tmpa94c0 May 16 00:35:19.740025 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:35:19.740122 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce645458b3a01: link becomes ready May 16 00:35:19.740216 systemd-networkd[1040]: lxce645458b3a01: Gained carrier May 16 00:35:19.878166 env[1210]: time="2025-05-16T00:35:19.878092291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:35:19.878166 env[1210]: time="2025-05-16T00:35:19.878132095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:35:19.878166 env[1210]: time="2025-05-16T00:35:19.878143377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:35:19.878366 env[1210]: time="2025-05-16T00:35:19.878322439Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a94c00a57d6eed3a1a464bdc6b35afe25c5e0f9694b1b5607748f6a2cc4ee8a1 pid=2621 runtime=io.containerd.runc.v2 May 16 00:35:19.892326 systemd[1]: Started cri-containerd-a94c00a57d6eed3a1a464bdc6b35afe25c5e0f9694b1b5607748f6a2cc4ee8a1.scope. May 16 00:35:19.919621 systemd-resolved[1151]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:35:19.935891 env[1210]: time="2025-05-16T00:35:19.935841213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:866fffee-2805-4dca-ace0-7605bb50a2d4,Namespace:default,Attempt:0,} returns sandbox id \"a94c00a57d6eed3a1a464bdc6b35afe25c5e0f9694b1b5607748f6a2cc4ee8a1\"" May 16 00:35:19.937520 env[1210]: time="2025-05-16T00:35:19.937486654Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 16 00:35:20.057502 kubelet[1415]: E0516 00:35:20.057448 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:21.057749 kubelet[1415]: E0516 00:35:21.057703 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:21.200828 systemd-networkd[1040]: lxce645458b3a01: Gained IPv6LL May 16 00:35:22.036055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3302944547.mount: Deactivated successfully. 
May 16 00:35:22.058621 kubelet[1415]: E0516 00:35:22.058584 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:23.059640 kubelet[1415]: E0516 00:35:23.059579 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:23.761380 env[1210]: time="2025-05-16T00:35:23.761328023Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:23.762819 env[1210]: time="2025-05-16T00:35:23.762782205Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:23.764522 env[1210]: time="2025-05-16T00:35:23.764499294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:23.766623 env[1210]: time="2025-05-16T00:35:23.766590899Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:23.767342 env[1210]: time="2025-05-16T00:35:23.767309450Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:5a42a519e0a8cf95c3c5f18f767c58c8c8b072aaea0a26e5e47a6f206c7df685\"" May 16 00:35:23.769816 env[1210]: time="2025-05-16T00:35:23.769784573Z" level=info msg="CreateContainer within sandbox \"a94c00a57d6eed3a1a464bdc6b35afe25c5e0f9694b1b5607748f6a2cc4ee8a1\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 
16 00:35:23.781213 env[1210]: time="2025-05-16T00:35:23.781168011Z" level=info msg="CreateContainer within sandbox \"a94c00a57d6eed3a1a464bdc6b35afe25c5e0f9694b1b5607748f6a2cc4ee8a1\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"a61348839825abb1160acc38650fe36733e47a39f1290d24d113da0d30c32d5b\"" May 16 00:35:23.781671 env[1210]: time="2025-05-16T00:35:23.781646698Z" level=info msg="StartContainer for \"a61348839825abb1160acc38650fe36733e47a39f1290d24d113da0d30c32d5b\"" May 16 00:35:23.804120 systemd[1]: Started cri-containerd-a61348839825abb1160acc38650fe36733e47a39f1290d24d113da0d30c32d5b.scope. May 16 00:35:23.854243 env[1210]: time="2025-05-16T00:35:23.854196623Z" level=info msg="StartContainer for \"a61348839825abb1160acc38650fe36733e47a39f1290d24d113da0d30c32d5b\" returns successfully" May 16 00:35:24.060706 kubelet[1415]: E0516 00:35:24.060120 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:24.322463 kubelet[1415]: I0516 00:35:24.322158 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.4907552370000001 podStartE2EDuration="5.322142995s" podCreationTimestamp="2025-05-16 00:35:19 +0000 UTC" firstStartedPulling="2025-05-16 00:35:19.937139292 +0000 UTC m=+29.629467429" lastFinishedPulling="2025-05-16 00:35:23.76852701 +0000 UTC m=+33.460855187" observedRunningTime="2025-05-16 00:35:24.321938856 +0000 UTC m=+34.014267073" watchObservedRunningTime="2025-05-16 00:35:24.322142995 +0000 UTC m=+34.014471172" May 16 00:35:25.060629 kubelet[1415]: E0516 00:35:25.060579 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:26.061014 kubelet[1415]: E0516 00:35:26.060971 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" May 16 00:35:27.061863 kubelet[1415]: E0516 00:35:27.061818 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:28.062155 kubelet[1415]: E0516 00:35:28.062112 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:29.063061 kubelet[1415]: E0516 00:35:29.063018 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:29.987086 update_engine[1201]: I0516 00:35:29.987035 1201 update_attempter.cc:509] Updating boot flags... May 16 00:35:30.063742 kubelet[1415]: E0516 00:35:30.063699 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:31.040025 kubelet[1415]: E0516 00:35:31.039966 1415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:31.064466 kubelet[1415]: E0516 00:35:31.064433 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:32.064973 kubelet[1415]: E0516 00:35:32.064922 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:33.066137 kubelet[1415]: E0516 00:35:33.066071 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:34.066255 kubelet[1415]: E0516 00:35:34.066195 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:34.188895 systemd[1]: Created slice kubepods-besteffort-pod80204237_fde3_4c00_a84c_3f9f6e59f188.slice. 
May 16 00:35:34.225324 kubelet[1415]: I0516 00:35:34.225289 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1e48aaea-3d47-44da-9bfe-661219efe6d4\" (UniqueName: \"kubernetes.io/nfs/80204237-fde3-4c00-a84c-3f9f6e59f188-pvc-1e48aaea-3d47-44da-9bfe-661219efe6d4\") pod \"test-pod-1\" (UID: \"80204237-fde3-4c00-a84c-3f9f6e59f188\") " pod="default/test-pod-1" May 16 00:35:34.225684 kubelet[1415]: I0516 00:35:34.225658 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z7c4m\" (UniqueName: \"kubernetes.io/projected/80204237-fde3-4c00-a84c-3f9f6e59f188-kube-api-access-z7c4m\") pod \"test-pod-1\" (UID: \"80204237-fde3-4c00-a84c-3f9f6e59f188\") " pod="default/test-pod-1" May 16 00:35:34.360909 kernel: FS-Cache: Loaded May 16 00:35:34.389170 kernel: RPC: Registered named UNIX socket transport module. May 16 00:35:34.389291 kernel: RPC: Registered udp transport module. May 16 00:35:34.389318 kernel: RPC: Registered tcp transport module. May 16 00:35:34.390548 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
May 16 00:35:34.431905 kernel: FS-Cache: Netfs 'nfs' registered for caching May 16 00:35:34.561385 kernel: NFS: Registering the id_resolver key type May 16 00:35:34.561514 kernel: Key type id_resolver registered May 16 00:35:34.562038 kernel: Key type id_legacy registered May 16 00:35:34.594505 nfsidmap[2752]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 16 00:35:34.598159 nfsidmap[2755]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' May 16 00:35:34.792234 env[1210]: time="2025-05-16T00:35:34.792181479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:80204237-fde3-4c00-a84c-3f9f6e59f188,Namespace:default,Attempt:0,}" May 16 00:35:34.813670 systemd-networkd[1040]: lxcf7b54cc94cb3: Link UP May 16 00:35:34.824972 kernel: eth0: renamed from tmp5dd88 May 16 00:35:34.831929 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:35:34.832029 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf7b54cc94cb3: link becomes ready May 16 00:35:34.831899 systemd-networkd[1040]: lxcf7b54cc94cb3: Gained carrier May 16 00:35:35.013620 env[1210]: time="2025-05-16T00:35:35.013535014Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:35:35.013620 env[1210]: time="2025-05-16T00:35:35.013589097Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:35:35.014580 env[1210]: time="2025-05-16T00:35:35.013599697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:35:35.014580 env[1210]: time="2025-05-16T00:35:35.013754746Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5dd889a322884a221ff0761384cce486469f95e7ee56f19986d9476981a6d6fb pid=2788 runtime=io.containerd.runc.v2 May 16 00:35:35.032552 systemd[1]: Started cri-containerd-5dd889a322884a221ff0761384cce486469f95e7ee56f19986d9476981a6d6fb.scope. May 16 00:35:35.067629 kubelet[1415]: E0516 00:35:35.067086 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:35.085775 systemd-resolved[1151]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:35:35.102542 env[1210]: time="2025-05-16T00:35:35.102487893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:80204237-fde3-4c00-a84c-3f9f6e59f188,Namespace:default,Attempt:0,} returns sandbox id \"5dd889a322884a221ff0761384cce486469f95e7ee56f19986d9476981a6d6fb\"" May 16 00:35:35.104163 env[1210]: time="2025-05-16T00:35:35.104133981Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 16 00:35:35.324646 env[1210]: time="2025-05-16T00:35:35.324281800Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:35.325889 env[1210]: time="2025-05-16T00:35:35.325838804Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:35.327641 env[1210]: time="2025-05-16T00:35:35.327617379Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 
00:35:35.330658 env[1210]: time="2025-05-16T00:35:35.330619019Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:35.331501 env[1210]: time="2025-05-16T00:35:35.331475825Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:e8b1cb61bd96acc3ff3c695318c9cc691213d532eee3731d038af92816fcb5f4\"" May 16 00:35:35.333810 env[1210]: time="2025-05-16T00:35:35.333779268Z" level=info msg="CreateContainer within sandbox \"5dd889a322884a221ff0761384cce486469f95e7ee56f19986d9476981a6d6fb\" for container &ContainerMetadata{Name:test,Attempt:0,}" May 16 00:35:35.348717 env[1210]: time="2025-05-16T00:35:35.348660185Z" level=info msg="CreateContainer within sandbox \"5dd889a322884a221ff0761384cce486469f95e7ee56f19986d9476981a6d6fb\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"ade7b2ae6a8a4dcde8e4e410bf0acecf7af9513553569cf7263a3408bb5b27cc\"" May 16 00:35:35.349448 env[1210]: time="2025-05-16T00:35:35.349416265Z" level=info msg="StartContainer for \"ade7b2ae6a8a4dcde8e4e410bf0acecf7af9513553569cf7263a3408bb5b27cc\"" May 16 00:35:35.366194 systemd[1]: Started cri-containerd-ade7b2ae6a8a4dcde8e4e410bf0acecf7af9513553569cf7263a3408bb5b27cc.scope. May 16 00:35:35.400944 env[1210]: time="2025-05-16T00:35:35.400899740Z" level=info msg="StartContainer for \"ade7b2ae6a8a4dcde8e4e410bf0acecf7af9513553569cf7263a3408bb5b27cc\" returns successfully" May 16 00:35:36.068253 kubelet[1415]: E0516 00:35:36.068192 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:36.337086 systemd[1]: run-containerd-runc-k8s.io-ade7b2ae6a8a4dcde8e4e410bf0acecf7af9513553569cf7263a3408bb5b27cc-runc.iOMghq.mount: Deactivated successfully. 
May 16 00:35:36.340803 kubelet[1415]: I0516 00:35:36.340742 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.111850129 podStartE2EDuration="17.340725935s" podCreationTimestamp="2025-05-16 00:35:19 +0000 UTC" firstStartedPulling="2025-05-16 00:35:35.103655716 +0000 UTC m=+44.795983893" lastFinishedPulling="2025-05-16 00:35:35.332531522 +0000 UTC m=+45.024859699" observedRunningTime="2025-05-16 00:35:36.340516444 +0000 UTC m=+46.032844621" watchObservedRunningTime="2025-05-16 00:35:36.340725935 +0000 UTC m=+46.033054112" May 16 00:35:36.816089 systemd-networkd[1040]: lxcf7b54cc94cb3: Gained IPv6LL May 16 00:35:37.069339 kubelet[1415]: E0516 00:35:37.069213 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:38.069946 kubelet[1415]: E0516 00:35:38.069905 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:39.071005 kubelet[1415]: E0516 00:35:39.070961 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:40.071584 kubelet[1415]: E0516 00:35:40.071540 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:41.072117 kubelet[1415]: E0516 00:35:41.072072 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:42.072436 kubelet[1415]: E0516 00:35:42.072390 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:42.437214 env[1210]: time="2025-05-16T00:35:42.436899463Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no 
network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:35:42.442121 env[1210]: time="2025-05-16T00:35:42.442084148Z" level=info msg="StopContainer for \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\" with timeout 2 (s)" May 16 00:35:42.443328 env[1210]: time="2025-05-16T00:35:42.443295076Z" level=info msg="Stop container \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\" with signal terminated" May 16 00:35:42.448946 systemd-networkd[1040]: lxc_health: Link DOWN May 16 00:35:42.448951 systemd-networkd[1040]: lxc_health: Lost carrier May 16 00:35:42.497248 systemd[1]: cri-containerd-087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3.scope: Deactivated successfully. May 16 00:35:42.497575 systemd[1]: cri-containerd-087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3.scope: Consumed 7.091s CPU time. May 16 00:35:42.514434 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3-rootfs.mount: Deactivated successfully. 
May 16 00:35:42.680998 env[1210]: time="2025-05-16T00:35:42.680952289Z" level=info msg="shim disconnected" id=087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3 May 16 00:35:42.681245 env[1210]: time="2025-05-16T00:35:42.681225620Z" level=warning msg="cleaning up after shim disconnected" id=087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3 namespace=k8s.io May 16 00:35:42.681323 env[1210]: time="2025-05-16T00:35:42.681310343Z" level=info msg="cleaning up dead shim" May 16 00:35:42.687581 env[1210]: time="2025-05-16T00:35:42.687483667Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2920 runtime=io.containerd.runc.v2\n" May 16 00:35:42.690932 env[1210]: time="2025-05-16T00:35:42.690897803Z" level=info msg="StopContainer for \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\" returns successfully" May 16 00:35:42.691615 env[1210]: time="2025-05-16T00:35:42.691590230Z" level=info msg="StopPodSandbox for \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\"" May 16 00:35:42.691673 env[1210]: time="2025-05-16T00:35:42.691647152Z" level=info msg="Container to stop \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.691673 env[1210]: time="2025-05-16T00:35:42.691663713Z" level=info msg="Container to stop \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.691724 env[1210]: time="2025-05-16T00:35:42.691675073Z" level=info msg="Container to stop \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.691724 env[1210]: time="2025-05-16T00:35:42.691686594Z" level=info msg="Container to stop 
\"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.691724 env[1210]: time="2025-05-16T00:35:42.691697634Z" level=info msg="Container to stop \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.693269 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c-shm.mount: Deactivated successfully. May 16 00:35:42.698448 systemd[1]: cri-containerd-a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c.scope: Deactivated successfully. May 16 00:35:42.720312 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c-rootfs.mount: Deactivated successfully. May 16 00:35:42.729011 env[1210]: time="2025-05-16T00:35:42.728956750Z" level=info msg="shim disconnected" id=a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c May 16 00:35:42.729011 env[1210]: time="2025-05-16T00:35:42.729010592Z" level=warning msg="cleaning up after shim disconnected" id=a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c namespace=k8s.io May 16 00:35:42.729259 env[1210]: time="2025-05-16T00:35:42.729020713Z" level=info msg="cleaning up dead shim" May 16 00:35:42.736204 env[1210]: time="2025-05-16T00:35:42.736163556Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2951 runtime=io.containerd.runc.v2\n" May 16 00:35:42.736505 env[1210]: time="2025-05-16T00:35:42.736483968Z" level=info msg="TearDown network for sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" successfully" May 16 00:35:42.736548 env[1210]: time="2025-05-16T00:35:42.736506809Z" level=info msg="StopPodSandbox for 
\"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" returns successfully" May 16 00:35:42.875546 kubelet[1415]: I0516 00:35:42.875501 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-lib-modules\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875546 kubelet[1415]: I0516 00:35:42.875542 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-run\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875758 kubelet[1415]: I0516 00:35:42.875561 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-etc-cni-netd\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875758 kubelet[1415]: I0516 00:35:42.875579 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-hostproc\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875758 kubelet[1415]: I0516 00:35:42.875593 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-host-proc-sys-net\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875758 kubelet[1415]: I0516 00:35:42.875615 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w24w7\" 
(UniqueName: \"kubernetes.io/projected/d16cbea0-5db6-4553-8710-487c5c45fcf3-kube-api-access-w24w7\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875758 kubelet[1415]: I0516 00:35:42.875631 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-bpf-maps\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875758 kubelet[1415]: I0516 00:35:42.875645 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-cgroup\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875942 kubelet[1415]: I0516 00:35:42.875663 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d16cbea0-5db6-4553-8710-487c5c45fcf3-clustermesh-secrets\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875942 kubelet[1415]: I0516 00:35:42.875679 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-config-path\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875942 kubelet[1415]: I0516 00:35:42.875693 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-host-proc-sys-kernel\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875942 
kubelet[1415]: I0516 00:35:42.875709 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d16cbea0-5db6-4553-8710-487c5c45fcf3-hubble-tls\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875942 kubelet[1415]: I0516 00:35:42.875725 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cni-path\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.875942 kubelet[1415]: I0516 00:35:42.875740 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-xtables-lock\") pod \"d16cbea0-5db6-4553-8710-487c5c45fcf3\" (UID: \"d16cbea0-5db6-4553-8710-487c5c45fcf3\") " May 16 00:35:42.876091 kubelet[1415]: I0516 00:35:42.875797 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.876091 kubelet[1415]: I0516 00:35:42.875830 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.876091 kubelet[1415]: I0516 00:35:42.875844 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.876091 kubelet[1415]: I0516 00:35:42.875859 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.876091 kubelet[1415]: I0516 00:35:42.875871 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-hostproc" (OuterVolumeSpecName: "hostproc") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.876214 kubelet[1415]: I0516 00:35:42.875915 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.876305 kubelet[1415]: I0516 00:35:42.876278 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.876339 kubelet[1415]: I0516 00:35:42.876318 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.877073 kubelet[1415]: I0516 00:35:42.877007 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cni-path" (OuterVolumeSpecName: "cni-path") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.877073 kubelet[1415]: I0516 00:35:42.877049 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.878275 kubelet[1415]: I0516 00:35:42.878149 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:35:42.880694 systemd[1]: var-lib-kubelet-pods-d16cbea0\x2d5db6\x2d4553\x2d8710\x2d487c5c45fcf3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:35:42.880786 systemd[1]: var-lib-kubelet-pods-d16cbea0\x2d5db6\x2d4553\x2d8710\x2d487c5c45fcf3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:35:42.881403 kubelet[1415]: I0516 00:35:42.881131 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d16cbea0-5db6-4553-8710-487c5c45fcf3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:35:42.881700 kubelet[1415]: I0516 00:35:42.881485 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16cbea0-5db6-4553-8710-487c5c45fcf3-kube-api-access-w24w7" (OuterVolumeSpecName: "kube-api-access-w24w7") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "kube-api-access-w24w7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:35:42.881700 kubelet[1415]: I0516 00:35:42.881618 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d16cbea0-5db6-4553-8710-487c5c45fcf3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d16cbea0-5db6-4553-8710-487c5c45fcf3" (UID: "d16cbea0-5db6-4553-8710-487c5c45fcf3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:35:42.977958 kubelet[1415]: I0516 00:35:42.976795 1415 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d16cbea0-5db6-4553-8710-487c5c45fcf3-hubble-tls\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.977958 kubelet[1415]: I0516 00:35:42.976826 1415 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cni-path\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.977958 kubelet[1415]: I0516 00:35:42.976835 1415 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-xtables-lock\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.977958 kubelet[1415]: I0516 00:35:42.976843 1415 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-lib-modules\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.977958 kubelet[1415]: I0516 00:35:42.976851 1415 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-run\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.977958 kubelet[1415]: I0516 00:35:42.976859 1415 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-bpf-maps\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.977958 kubelet[1415]: I0516 00:35:42.976867 1415 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-etc-cni-netd\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.977958 kubelet[1415]: I0516 00:35:42.976889 1415 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-hostproc\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.978266 kubelet[1415]: I0516 00:35:42.976897 1415 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-host-proc-sys-net\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.978266 kubelet[1415]: I0516 00:35:42.976910 1415 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w24w7\" (UniqueName: \"kubernetes.io/projected/d16cbea0-5db6-4553-8710-487c5c45fcf3-kube-api-access-w24w7\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.978266 kubelet[1415]: I0516 00:35:42.976919 1415 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-host-proc-sys-kernel\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.978266 kubelet[1415]: I0516 00:35:42.976927 1415 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-cgroup\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.978266 kubelet[1415]: I0516 00:35:42.976935 1415 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d16cbea0-5db6-4553-8710-487c5c45fcf3-clustermesh-secrets\") on 
node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:42.978266 kubelet[1415]: I0516 00:35:42.976944 1415 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d16cbea0-5db6-4553-8710-487c5c45fcf3-cilium-config-path\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:43.073687 kubelet[1415]: E0516 00:35:43.073648 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:43.248220 systemd[1]: Removed slice kubepods-burstable-podd16cbea0_5db6_4553_8710_487c5c45fcf3.slice. May 16 00:35:43.248319 systemd[1]: kubepods-burstable-podd16cbea0_5db6_4553_8710_487c5c45fcf3.slice: Consumed 7.306s CPU time. May 16 00:35:43.346250 kubelet[1415]: I0516 00:35:43.346198 1415 scope.go:117] "RemoveContainer" containerID="087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3" May 16 00:35:43.348701 env[1210]: time="2025-05-16T00:35:43.348656708Z" level=info msg="RemoveContainer for \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\"" May 16 00:35:43.352582 env[1210]: time="2025-05-16T00:35:43.352501295Z" level=info msg="RemoveContainer for \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\" returns successfully" May 16 00:35:43.352935 kubelet[1415]: I0516 00:35:43.352899 1415 scope.go:117] "RemoveContainer" containerID="1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9" May 16 00:35:43.354084 env[1210]: time="2025-05-16T00:35:43.354056234Z" level=info msg="RemoveContainer for \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\"" May 16 00:35:43.360818 env[1210]: time="2025-05-16T00:35:43.360714447Z" level=info msg="RemoveContainer for \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\" returns successfully" May 16 00:35:43.361051 kubelet[1415]: I0516 00:35:43.361025 1415 scope.go:117] "RemoveContainer" 
containerID="c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875" May 16 00:35:43.363325 env[1210]: time="2025-05-16T00:35:43.363283305Z" level=info msg="RemoveContainer for \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\"" May 16 00:35:43.365500 env[1210]: time="2025-05-16T00:35:43.365454468Z" level=info msg="RemoveContainer for \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\" returns successfully" May 16 00:35:43.365670 kubelet[1415]: I0516 00:35:43.365636 1415 scope.go:117] "RemoveContainer" containerID="292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08" May 16 00:35:43.366570 env[1210]: time="2025-05-16T00:35:43.366544509Z" level=info msg="RemoveContainer for \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\"" May 16 00:35:43.368511 env[1210]: time="2025-05-16T00:35:43.368462623Z" level=info msg="RemoveContainer for \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\" returns successfully" May 16 00:35:43.368637 kubelet[1415]: I0516 00:35:43.368601 1415 scope.go:117] "RemoveContainer" containerID="68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0" May 16 00:35:43.369544 env[1210]: time="2025-05-16T00:35:43.369515903Z" level=info msg="RemoveContainer for \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\"" May 16 00:35:43.371787 env[1210]: time="2025-05-16T00:35:43.371753748Z" level=info msg="RemoveContainer for \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\" returns successfully" May 16 00:35:43.372048 kubelet[1415]: I0516 00:35:43.372010 1415 scope.go:117] "RemoveContainer" containerID="087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3" May 16 00:35:43.372299 env[1210]: time="2025-05-16T00:35:43.372193605Z" level=error msg="ContainerStatus for \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find 
container \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\": not found" May 16 00:35:43.372410 kubelet[1415]: E0516 00:35:43.372389 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\": not found" containerID="087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3" May 16 00:35:43.372491 kubelet[1415]: I0516 00:35:43.372421 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3"} err="failed to get container status \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\": rpc error: code = NotFound desc = an error occurred when try to find container \"087759120a002be0612fd9b9907cc9b72a7f9b9f20c586e44c222882333d8bb3\": not found" May 16 00:35:43.372522 kubelet[1415]: I0516 00:35:43.372493 1415 scope.go:117] "RemoveContainer" containerID="1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9" May 16 00:35:43.372691 env[1210]: time="2025-05-16T00:35:43.372641902Z" level=error msg="ContainerStatus for \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\": not found" May 16 00:35:43.372775 kubelet[1415]: E0516 00:35:43.372755 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\": not found" containerID="1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9" May 16 00:35:43.372813 kubelet[1415]: I0516 00:35:43.372779 1415 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9"} err="failed to get container status \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\": rpc error: code = NotFound desc = an error occurred when try to find container \"1fdc2c798f035533e873f2790742a76918f46468161a94af67a04f2bddba5ab9\": not found" May 16 00:35:43.372813 kubelet[1415]: I0516 00:35:43.372793 1415 scope.go:117] "RemoveContainer" containerID="c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875" May 16 00:35:43.373008 env[1210]: time="2025-05-16T00:35:43.372964594Z" level=error msg="ContainerStatus for \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\": not found" May 16 00:35:43.373133 kubelet[1415]: E0516 00:35:43.373112 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\": not found" containerID="c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875" May 16 00:35:43.373170 kubelet[1415]: I0516 00:35:43.373139 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875"} err="failed to get container status \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\": rpc error: code = NotFound desc = an error occurred when try to find container \"c5c92afe3bbce92845755c7d822b6490582a10f972a488f6a0d3197ede773875\": not found" May 16 00:35:43.373170 kubelet[1415]: I0516 00:35:43.373158 1415 scope.go:117] "RemoveContainer" containerID="292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08" May 16 00:35:43.373446 env[1210]: 
time="2025-05-16T00:35:43.373396090Z" level=error msg="ContainerStatus for \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\": not found" May 16 00:35:43.373619 kubelet[1415]: E0516 00:35:43.373598 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\": not found" containerID="292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08" May 16 00:35:43.373662 kubelet[1415]: I0516 00:35:43.373620 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08"} err="failed to get container status \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\": rpc error: code = NotFound desc = an error occurred when try to find container \"292250957ff4dda62339693a8fcd697065a38dae3939b68305ce66e70965ce08\": not found" May 16 00:35:43.373662 kubelet[1415]: I0516 00:35:43.373634 1415 scope.go:117] "RemoveContainer" containerID="68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0" May 16 00:35:43.373831 env[1210]: time="2025-05-16T00:35:43.373784305Z" level=error msg="ContainerStatus for \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\": not found" May 16 00:35:43.373962 kubelet[1415]: E0516 00:35:43.373943 1415 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\": not found" 
containerID="68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0" May 16 00:35:43.374005 kubelet[1415]: I0516 00:35:43.373967 1415 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0"} err="failed to get container status \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\": rpc error: code = NotFound desc = an error occurred when try to find container \"68fea81b3bf70de052050108f0fdf599d441249d6113af94b5e8feb4388492b0\": not found" May 16 00:35:43.398863 systemd[1]: var-lib-kubelet-pods-d16cbea0\x2d5db6\x2d4553\x2d8710\x2d487c5c45fcf3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw24w7.mount: Deactivated successfully. May 16 00:35:44.074041 kubelet[1415]: E0516 00:35:44.073994 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:45.074747 kubelet[1415]: E0516 00:35:45.074683 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:45.244846 kubelet[1415]: I0516 00:35:45.244792 1415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d16cbea0-5db6-4553-8710-487c5c45fcf3" path="/var/lib/kubelet/pods/d16cbea0-5db6-4553-8710-487c5c45fcf3/volumes" May 16 00:35:45.403090 kubelet[1415]: E0516 00:35:45.402921 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d16cbea0-5db6-4553-8710-487c5c45fcf3" containerName="mount-bpf-fs" May 16 00:35:45.403090 kubelet[1415]: E0516 00:35:45.402954 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d16cbea0-5db6-4553-8710-487c5c45fcf3" containerName="cilium-agent" May 16 00:35:45.403090 kubelet[1415]: E0516 00:35:45.402962 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d16cbea0-5db6-4553-8710-487c5c45fcf3" containerName="apply-sysctl-overwrites" 
May 16 00:35:45.403090 kubelet[1415]: E0516 00:35:45.402968 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d16cbea0-5db6-4553-8710-487c5c45fcf3" containerName="clean-cilium-state" May 16 00:35:45.403090 kubelet[1415]: E0516 00:35:45.402974 1415 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d16cbea0-5db6-4553-8710-487c5c45fcf3" containerName="mount-cgroup" May 16 00:35:45.403090 kubelet[1415]: I0516 00:35:45.402993 1415 memory_manager.go:354] "RemoveStaleState removing state" podUID="d16cbea0-5db6-4553-8710-487c5c45fcf3" containerName="cilium-agent" May 16 00:35:45.407448 systemd[1]: Created slice kubepods-besteffort-pode7e6ff28_adf2_46cb_ab47_29a0b84f46a2.slice. May 16 00:35:45.412252 systemd[1]: Created slice kubepods-burstable-pod7c22b18c_aa70_4154_a611_b3f533b3ba5b.slice. May 16 00:35:45.581165 kubelet[1415]: E0516 00:35:45.581114 1415 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-85qrz lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-7fg4p" podUID="7c22b18c-aa70-4154-a611-b3f533b3ba5b" May 16 00:35:45.593770 kubelet[1415]: I0516 00:35:45.593724 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-ipsec-secrets\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.593920 kubelet[1415]: I0516 00:35:45.593776 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-host-proc-sys-kernel\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.593920 kubelet[1415]: I0516 00:35:45.593799 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85qrz\" (UniqueName: \"kubernetes.io/projected/7c22b18c-aa70-4154-a611-b3f533b3ba5b-kube-api-access-85qrz\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.593920 kubelet[1415]: I0516 00:35:45.593829 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e7e6ff28-adf2-46cb-ab47-29a0b84f46a2-cilium-config-path\") pod \"cilium-operator-5d85765b45-mt7j9\" (UID: \"e7e6ff28-adf2-46cb-ab47-29a0b84f46a2\") " pod="kube-system/cilium-operator-5d85765b45-mt7j9" May 16 00:35:45.593920 kubelet[1415]: I0516 00:35:45.593847 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-hostproc\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.593920 kubelet[1415]: I0516 00:35:45.593891 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cni-path\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594045 kubelet[1415]: I0516 00:35:45.593912 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-host-proc-sys-net\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594045 kubelet[1415]: I0516 00:35:45.593932 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-etc-cni-netd\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594045 kubelet[1415]: I0516 00:35:45.593949 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-config-path\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594045 kubelet[1415]: I0516 00:35:45.593974 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-run\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594045 kubelet[1415]: I0516 00:35:45.593991 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-cgroup\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594045 kubelet[1415]: I0516 00:35:45.594008 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-xtables-lock\") pod \"cilium-7fg4p\" (UID: 
\"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594174 kubelet[1415]: I0516 00:35:45.594024 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c22b18c-aa70-4154-a611-b3f533b3ba5b-clustermesh-secrets\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594174 kubelet[1415]: I0516 00:35:45.594049 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-bpf-maps\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594174 kubelet[1415]: I0516 00:35:45.594067 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-lib-modules\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594174 kubelet[1415]: I0516 00:35:45.594082 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c22b18c-aa70-4154-a611-b3f533b3ba5b-hubble-tls\") pod \"cilium-7fg4p\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " pod="kube-system/cilium-7fg4p" May 16 00:35:45.594174 kubelet[1415]: I0516 00:35:45.594097 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65frm\" (UniqueName: \"kubernetes.io/projected/e7e6ff28-adf2-46cb-ab47-29a0b84f46a2-kube-api-access-65frm\") pod \"cilium-operator-5d85765b45-mt7j9\" (UID: \"e7e6ff28-adf2-46cb-ab47-29a0b84f46a2\") " pod="kube-system/cilium-operator-5d85765b45-mt7j9" May 16 
00:35:46.010117 kubelet[1415]: E0516 00:35:46.010081 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:46.010591 env[1210]: time="2025-05-16T00:35:46.010550803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mt7j9,Uid:e7e6ff28-adf2-46cb-ab47-29a0b84f46a2,Namespace:kube-system,Attempt:0,}" May 16 00:35:46.022251 env[1210]: time="2025-05-16T00:35:46.022170079Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:35:46.022251 env[1210]: time="2025-05-16T00:35:46.022222480Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:35:46.022251 env[1210]: time="2025-05-16T00:35:46.022233561Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:35:46.022653 env[1210]: time="2025-05-16T00:35:46.022611254Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/29d08d9697c0043318f489319a28f0919c2e3a1db35c2c81664a610c67679062 pid=2981 runtime=io.containerd.runc.v2 May 16 00:35:46.033213 systemd[1]: Started cri-containerd-29d08d9697c0043318f489319a28f0919c2e3a1db35c2c81664a610c67679062.scope. 
May 16 00:35:46.075685 kubelet[1415]: E0516 00:35:46.075635 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:46.086200 env[1210]: time="2025-05-16T00:35:46.086153179Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-mt7j9,Uid:e7e6ff28-adf2-46cb-ab47-29a0b84f46a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"29d08d9697c0043318f489319a28f0919c2e3a1db35c2c81664a610c67679062\"" May 16 00:35:46.086811 kubelet[1415]: E0516 00:35:46.086786 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:46.087808 env[1210]: time="2025-05-16T00:35:46.087781074Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:35:46.201148 kubelet[1415]: E0516 00:35:46.201096 1415 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:35:46.499347 kubelet[1415]: I0516 00:35:46.499297 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-config-path\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499347 kubelet[1415]: I0516 00:35:46.499350 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-host-proc-sys-kernel\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499554 kubelet[1415]: I0516 00:35:46.499374 1415 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-host-proc-sys-net\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499554 kubelet[1415]: I0516 00:35:46.499398 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-run\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499554 kubelet[1415]: I0516 00:35:46.499416 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c22b18c-aa70-4154-a611-b3f533b3ba5b-clustermesh-secrets\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499554 kubelet[1415]: I0516 00:35:46.499430 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-lib-modules\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499554 kubelet[1415]: I0516 00:35:46.499446 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c22b18c-aa70-4154-a611-b3f533b3ba5b-hubble-tls\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499554 kubelet[1415]: I0516 00:35:46.499462 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-85qrz\" (UniqueName: \"kubernetes.io/projected/7c22b18c-aa70-4154-a611-b3f533b3ba5b-kube-api-access-85qrz\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" 
(UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499687 kubelet[1415]: I0516 00:35:46.499478 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-hostproc\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499687 kubelet[1415]: I0516 00:35:46.499491 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cni-path\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499687 kubelet[1415]: I0516 00:35:46.499506 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-etc-cni-netd\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499687 kubelet[1415]: I0516 00:35:46.499519 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-cgroup\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499687 kubelet[1415]: I0516 00:35:46.499532 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-xtables-lock\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499687 kubelet[1415]: I0516 00:35:46.499546 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: 
\"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-bpf-maps\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499814 kubelet[1415]: I0516 00:35:46.499562 1415 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-ipsec-secrets\") pod \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\" (UID: \"7c22b18c-aa70-4154-a611-b3f533b3ba5b\") " May 16 00:35:46.499939 kubelet[1415]: I0516 00:35:46.499914 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.500018 kubelet[1415]: I0516 00:35:46.499926 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-hostproc" (OuterVolumeSpecName: "hostproc") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.500118 kubelet[1415]: I0516 00:35:46.500101 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cni-path" (OuterVolumeSpecName: "cni-path") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.500210 kubelet[1415]: I0516 00:35:46.500197 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.500299 kubelet[1415]: I0516 00:35:46.500287 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.500384 kubelet[1415]: I0516 00:35:46.500327 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.500455 kubelet[1415]: I0516 00:35:46.500347 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.500537 kubelet[1415]: I0516 00:35:46.500522 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.502099 kubelet[1415]: I0516 00:35:46.502064 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:35:46.502176 kubelet[1415]: I0516 00:35:46.502114 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.502176 kubelet[1415]: I0516 00:35:46.502132 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.503521 kubelet[1415]: I0516 00:35:46.503487 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c22b18c-aa70-4154-a611-b3f533b3ba5b-kube-api-access-85qrz" (OuterVolumeSpecName: "kube-api-access-85qrz") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "kube-api-access-85qrz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:35:46.503590 kubelet[1415]: I0516 00:35:46.503559 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c22b18c-aa70-4154-a611-b3f533b3ba5b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:35:46.503923 kubelet[1415]: I0516 00:35:46.503897 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:35:46.505218 kubelet[1415]: I0516 00:35:46.505193 1415 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c22b18c-aa70-4154-a611-b3f533b3ba5b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7c22b18c-aa70-4154-a611-b3f533b3ba5b" (UID: "7c22b18c-aa70-4154-a611-b3f533b3ba5b"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:35:46.599737 kubelet[1415]: I0516 00:35:46.599694 1415 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-config-path\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.599737 kubelet[1415]: I0516 00:35:46.599730 1415 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-host-proc-sys-kernel\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.599737 kubelet[1415]: I0516 00:35:46.599742 1415 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c22b18c-aa70-4154-a611-b3f533b3ba5b-hubble-tls\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.599934 kubelet[1415]: I0516 00:35:46.599750 1415 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-85qrz\" (UniqueName: \"kubernetes.io/projected/7c22b18c-aa70-4154-a611-b3f533b3ba5b-kube-api-access-85qrz\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.599934 kubelet[1415]: I0516 00:35:46.599759 1415 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-host-proc-sys-net\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.599934 kubelet[1415]: I0516 00:35:46.599768 1415 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-run\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.599934 kubelet[1415]: I0516 00:35:46.599775 1415 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c22b18c-aa70-4154-a611-b3f533b3ba5b-clustermesh-secrets\") on node \"10.0.0.35\" DevicePath \"\"" May 16 
00:35:46.599934 kubelet[1415]: I0516 00:35:46.599783 1415 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-lib-modules\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.599934 kubelet[1415]: I0516 00:35:46.599791 1415 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-xtables-lock\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.599934 kubelet[1415]: I0516 00:35:46.599799 1415 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-bpf-maps\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.599934 kubelet[1415]: I0516 00:35:46.599808 1415 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-ipsec-secrets\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.600120 kubelet[1415]: I0516 00:35:46.599815 1415 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-hostproc\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.600120 kubelet[1415]: I0516 00:35:46.599822 1415 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cni-path\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.600120 kubelet[1415]: I0516 00:35:46.599830 1415 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-etc-cni-netd\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.600120 kubelet[1415]: I0516 00:35:46.599838 1415 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/7c22b18c-aa70-4154-a611-b3f533b3ba5b-cilium-cgroup\") on node \"10.0.0.35\" DevicePath \"\"" May 16 00:35:46.700695 systemd[1]: var-lib-kubelet-pods-7c22b18c\x2daa70\x2d4154\x2da611\x2db3f533b3ba5b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d85qrz.mount: Deactivated successfully. May 16 00:35:46.700787 systemd[1]: var-lib-kubelet-pods-7c22b18c\x2daa70\x2d4154\x2da611\x2db3f533b3ba5b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 16 00:35:46.700842 systemd[1]: var-lib-kubelet-pods-7c22b18c\x2daa70\x2d4154\x2da611\x2db3f533b3ba5b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:35:46.700908 systemd[1]: var-lib-kubelet-pods-7c22b18c\x2daa70\x2d4154\x2da611\x2db3f533b3ba5b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 16 00:35:47.076083 kubelet[1415]: E0516 00:35:47.076046 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:47.253314 systemd[1]: Removed slice kubepods-burstable-pod7c22b18c_aa70_4154_a611_b3f533b3ba5b.slice. May 16 00:35:47.402119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2439919287.mount: Deactivated successfully. May 16 00:35:47.409132 systemd[1]: Created slice kubepods-burstable-podc6b61e3a_26d6_43b8_b744_5cb229958e88.slice. 
May 16 00:35:47.504068 kubelet[1415]: I0516 00:35:47.504023 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-cni-path\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504285 kubelet[1415]: I0516 00:35:47.504263 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-xtables-lock\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504376 kubelet[1415]: I0516 00:35:47.504361 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-cilium-run\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504512 kubelet[1415]: I0516 00:35:47.504464 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-cilium-cgroup\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504564 kubelet[1415]: I0516 00:35:47.504521 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-hostproc\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504564 kubelet[1415]: I0516 00:35:47.504541 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-etc-cni-netd\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504564 kubelet[1415]: I0516 00:35:47.504555 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6b61e3a-26d6-43b8-b744-5cb229958e88-cilium-config-path\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504641 kubelet[1415]: I0516 00:35:47.504572 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-host-proc-sys-kernel\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504641 kubelet[1415]: I0516 00:35:47.504596 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6b61e3a-26d6-43b8-b744-5cb229958e88-hubble-tls\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504641 kubelet[1415]: I0516 00:35:47.504613 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-lib-modules\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504641 kubelet[1415]: I0516 00:35:47.504634 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6b61e3a-26d6-43b8-b744-5cb229958e88-clustermesh-secrets\") 
pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504730 kubelet[1415]: I0516 00:35:47.504652 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-host-proc-sys-net\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504730 kubelet[1415]: I0516 00:35:47.504668 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lldlt\" (UniqueName: \"kubernetes.io/projected/c6b61e3a-26d6-43b8-b744-5cb229958e88-kube-api-access-lldlt\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504730 kubelet[1415]: I0516 00:35:47.504684 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6b61e3a-26d6-43b8-b744-5cb229958e88-bpf-maps\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.504730 kubelet[1415]: I0516 00:35:47.504701 1415 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6b61e3a-26d6-43b8-b744-5cb229958e88-cilium-ipsec-secrets\") pod \"cilium-xmncp\" (UID: \"c6b61e3a-26d6-43b8-b744-5cb229958e88\") " pod="kube-system/cilium-xmncp" May 16 00:35:47.717565 kubelet[1415]: E0516 00:35:47.717460 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:47.717984 env[1210]: time="2025-05-16T00:35:47.717947132Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-xmncp,Uid:c6b61e3a-26d6-43b8-b744-5cb229958e88,Namespace:kube-system,Attempt:0,}" May 16 00:35:47.734232 env[1210]: time="2025-05-16T00:35:47.734162065Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:35:47.734232 env[1210]: time="2025-05-16T00:35:47.734203506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:35:47.734232 env[1210]: time="2025-05-16T00:35:47.734213947Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:35:47.734414 env[1210]: time="2025-05-16T00:35:47.734368632Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353 pid=3031 runtime=io.containerd.runc.v2 May 16 00:35:47.752487 systemd[1]: Started cri-containerd-1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353.scope. May 16 00:35:47.753953 systemd[1]: run-containerd-runc-k8s.io-1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353-runc.OubAmz.mount: Deactivated successfully. 
May 16 00:35:47.795863 env[1210]: time="2025-05-16T00:35:47.795695609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-xmncp,Uid:c6b61e3a-26d6-43b8-b744-5cb229958e88,Namespace:kube-system,Attempt:0,} returns sandbox id \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\"" May 16 00:35:47.796489 kubelet[1415]: E0516 00:35:47.796464 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:47.798586 env[1210]: time="2025-05-16T00:35:47.798548423Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:35:47.847528 env[1210]: time="2025-05-16T00:35:47.847478153Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2b4d1d300c18585f9aac4c45a4d808d78ff35078f6cfe5cfd879a63b90114568\"" May 16 00:35:47.848365 env[1210]: time="2025-05-16T00:35:47.848327261Z" level=info msg="StartContainer for \"2b4d1d300c18585f9aac4c45a4d808d78ff35078f6cfe5cfd879a63b90114568\"" May 16 00:35:47.861577 systemd[1]: Started cri-containerd-2b4d1d300c18585f9aac4c45a4d808d78ff35078f6cfe5cfd879a63b90114568.scope. 
May 16 00:35:47.908597 env[1210]: time="2025-05-16T00:35:47.908554762Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:47.910197 env[1210]: time="2025-05-16T00:35:47.910159935Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:47.911791 env[1210]: time="2025-05-16T00:35:47.911749627Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:35:47.912446 env[1210]: time="2025-05-16T00:35:47.912408528Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 16 00:35:47.915065 env[1210]: time="2025-05-16T00:35:47.915029335Z" level=info msg="CreateContainer within sandbox \"29d08d9697c0043318f489319a28f0919c2e3a1db35c2c81664a610c67679062\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:35:47.922143 env[1210]: time="2025-05-16T00:35:47.922105447Z" level=info msg="StartContainer for \"2b4d1d300c18585f9aac4c45a4d808d78ff35078f6cfe5cfd879a63b90114568\" returns successfully" May 16 00:35:47.932901 env[1210]: time="2025-05-16T00:35:47.931217067Z" level=info msg="CreateContainer within sandbox \"29d08d9697c0043318f489319a28f0919c2e3a1db35c2c81664a610c67679062\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id 
\"252e1f2bff986bbfacf240ce98f01cc6e4dc8137bb90376719061b77bb9141fb\"" May 16 00:35:47.933198 env[1210]: time="2025-05-16T00:35:47.933150291Z" level=info msg="StartContainer for \"252e1f2bff986bbfacf240ce98f01cc6e4dc8137bb90376719061b77bb9141fb\"" May 16 00:35:47.937199 systemd[1]: cri-containerd-2b4d1d300c18585f9aac4c45a4d808d78ff35078f6cfe5cfd879a63b90114568.scope: Deactivated successfully. May 16 00:35:47.950794 systemd[1]: Started cri-containerd-252e1f2bff986bbfacf240ce98f01cc6e4dc8137bb90376719061b77bb9141fb.scope. May 16 00:35:47.966042 env[1210]: time="2025-05-16T00:35:47.965975171Z" level=info msg="shim disconnected" id=2b4d1d300c18585f9aac4c45a4d808d78ff35078f6cfe5cfd879a63b90114568 May 16 00:35:47.966042 env[1210]: time="2025-05-16T00:35:47.966028172Z" level=warning msg="cleaning up after shim disconnected" id=2b4d1d300c18585f9aac4c45a4d808d78ff35078f6cfe5cfd879a63b90114568 namespace=k8s.io May 16 00:35:47.966042 env[1210]: time="2025-05-16T00:35:47.966039173Z" level=info msg="cleaning up dead shim" May 16 00:35:47.983859 env[1210]: time="2025-05-16T00:35:47.980378964Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3132 runtime=io.containerd.runc.v2\n" May 16 00:35:48.012469 env[1210]: time="2025-05-16T00:35:48.012364044Z" level=info msg="StartContainer for \"252e1f2bff986bbfacf240ce98f01cc6e4dc8137bb90376719061b77bb9141fb\" returns successfully" May 16 00:35:48.077190 kubelet[1415]: E0516 00:35:48.077129 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:48.358718 kubelet[1415]: E0516 00:35:48.358684 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:48.359654 kubelet[1415]: E0516 00:35:48.359624 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:48.361268 env[1210]: time="2025-05-16T00:35:48.361188094Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:35:48.366848 kubelet[1415]: I0516 00:35:48.366788 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-mt7j9" podStartSLOduration=1.540729651 podStartE2EDuration="3.366774312s" podCreationTimestamp="2025-05-16 00:35:45 +0000 UTC" firstStartedPulling="2025-05-16 00:35:46.087514345 +0000 UTC m=+55.779842482" lastFinishedPulling="2025-05-16 00:35:47.913558966 +0000 UTC m=+57.605887143" observedRunningTime="2025-05-16 00:35:48.366168613 +0000 UTC m=+58.058496790" watchObservedRunningTime="2025-05-16 00:35:48.366774312 +0000 UTC m=+58.059102489" May 16 00:35:48.371754 env[1210]: time="2025-05-16T00:35:48.371689748Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1f0b05e9b727b7675abb519b89aff235db62b0a6fc862437d2d058b739361fdd\"" May 16 00:35:48.374115 env[1210]: time="2025-05-16T00:35:48.374068584Z" level=info msg="StartContainer for \"1f0b05e9b727b7675abb519b89aff235db62b0a6fc862437d2d058b739361fdd\"" May 16 00:35:48.392973 systemd[1]: Started cri-containerd-1f0b05e9b727b7675abb519b89aff235db62b0a6fc862437d2d058b739361fdd.scope. May 16 00:35:48.425127 env[1210]: time="2025-05-16T00:35:48.425066285Z" level=info msg="StartContainer for \"1f0b05e9b727b7675abb519b89aff235db62b0a6fc862437d2d058b739361fdd\" returns successfully" May 16 00:35:48.448758 systemd[1]: cri-containerd-1f0b05e9b727b7675abb519b89aff235db62b0a6fc862437d2d058b739361fdd.scope: Deactivated successfully. 
May 16 00:35:48.467872 env[1210]: time="2025-05-16T00:35:48.467824525Z" level=info msg="shim disconnected" id=1f0b05e9b727b7675abb519b89aff235db62b0a6fc862437d2d058b739361fdd May 16 00:35:48.467872 env[1210]: time="2025-05-16T00:35:48.467867886Z" level=warning msg="cleaning up after shim disconnected" id=1f0b05e9b727b7675abb519b89aff235db62b0a6fc862437d2d058b739361fdd namespace=k8s.io May 16 00:35:48.467872 env[1210]: time="2025-05-16T00:35:48.467887407Z" level=info msg="cleaning up dead shim" May 16 00:35:48.474827 env[1210]: time="2025-05-16T00:35:48.474778226Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3216 runtime=io.containerd.runc.v2\n" May 16 00:35:49.077847 kubelet[1415]: E0516 00:35:49.077794 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:49.244321 kubelet[1415]: I0516 00:35:49.244270 1415 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7c22b18c-aa70-4154-a611-b3f533b3ba5b" path="/var/lib/kubelet/pods/7c22b18c-aa70-4154-a611-b3f533b3ba5b/volumes" May 16 00:35:49.362841 kubelet[1415]: E0516 00:35:49.362734 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:49.363477 kubelet[1415]: E0516 00:35:49.363457 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:49.365575 env[1210]: time="2025-05-16T00:35:49.365533050Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:35:49.379899 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3043002097.mount: 
Deactivated successfully. May 16 00:35:49.385664 env[1210]: time="2025-05-16T00:35:49.385618508Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"59f00922d948c89b582359c7d8b418058671559e5ba822221ff48dbd35cbc359\"" May 16 00:35:49.386146 env[1210]: time="2025-05-16T00:35:49.386092842Z" level=info msg="StartContainer for \"59f00922d948c89b582359c7d8b418058671559e5ba822221ff48dbd35cbc359\"" May 16 00:35:49.412419 systemd[1]: Started cri-containerd-59f00922d948c89b582359c7d8b418058671559e5ba822221ff48dbd35cbc359.scope. May 16 00:35:49.444911 env[1210]: time="2025-05-16T00:35:49.444856530Z" level=info msg="StartContainer for \"59f00922d948c89b582359c7d8b418058671559e5ba822221ff48dbd35cbc359\" returns successfully" May 16 00:35:49.447717 systemd[1]: cri-containerd-59f00922d948c89b582359c7d8b418058671559e5ba822221ff48dbd35cbc359.scope: Deactivated successfully. May 16 00:35:49.468421 env[1210]: time="2025-05-16T00:35:49.468357173Z" level=info msg="shim disconnected" id=59f00922d948c89b582359c7d8b418058671559e5ba822221ff48dbd35cbc359 May 16 00:35:49.468421 env[1210]: time="2025-05-16T00:35:49.468399814Z" level=warning msg="cleaning up after shim disconnected" id=59f00922d948c89b582359c7d8b418058671559e5ba822221ff48dbd35cbc359 namespace=k8s.io May 16 00:35:49.468421 env[1210]: time="2025-05-16T00:35:49.468410894Z" level=info msg="cleaning up dead shim" May 16 00:35:49.474671 env[1210]: time="2025-05-16T00:35:49.474623686Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3272 runtime=io.containerd.runc.v2\n" May 16 00:35:49.699828 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59f00922d948c89b582359c7d8b418058671559e5ba822221ff48dbd35cbc359-rootfs.mount: Deactivated successfully. 
May 16 00:35:50.077993 kubelet[1415]: E0516 00:35:50.077947 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:50.368769 kubelet[1415]: E0516 00:35:50.366959 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:50.370994 env[1210]: time="2025-05-16T00:35:50.370574208Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:35:50.383608 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount899822863.mount: Deactivated successfully. May 16 00:35:50.389376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1903284611.mount: Deactivated successfully. May 16 00:35:50.392535 env[1210]: time="2025-05-16T00:35:50.392488461Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"fd8f944bb1cf6859969ec4a1e66549fd52264b88fcc9ce24933cb7ca6e5dffa8\"" May 16 00:35:50.393301 env[1210]: time="2025-05-16T00:35:50.393276205Z" level=info msg="StartContainer for \"fd8f944bb1cf6859969ec4a1e66549fd52264b88fcc9ce24933cb7ca6e5dffa8\"" May 16 00:35:50.415996 systemd[1]: Started cri-containerd-fd8f944bb1cf6859969ec4a1e66549fd52264b88fcc9ce24933cb7ca6e5dffa8.scope. May 16 00:35:50.442556 systemd[1]: cri-containerd-fd8f944bb1cf6859969ec4a1e66549fd52264b88fcc9ce24933cb7ca6e5dffa8.scope: Deactivated successfully. 
May 16 00:35:50.444847 env[1210]: time="2025-05-16T00:35:50.444749698Z" level=info msg="StartContainer for \"fd8f944bb1cf6859969ec4a1e66549fd52264b88fcc9ce24933cb7ca6e5dffa8\" returns successfully" May 16 00:35:50.445213 env[1210]: time="2025-05-16T00:35:50.444982985Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6b61e3a_26d6_43b8_b744_5cb229958e88.slice/cri-containerd-fd8f944bb1cf6859969ec4a1e66549fd52264b88fcc9ce24933cb7ca6e5dffa8.scope/memory.events\": no such file or directory" May 16 00:35:50.463988 env[1210]: time="2025-05-16T00:35:50.463935030Z" level=info msg="shim disconnected" id=fd8f944bb1cf6859969ec4a1e66549fd52264b88fcc9ce24933cb7ca6e5dffa8 May 16 00:35:50.463988 env[1210]: time="2025-05-16T00:35:50.463986311Z" level=warning msg="cleaning up after shim disconnected" id=fd8f944bb1cf6859969ec4a1e66549fd52264b88fcc9ce24933cb7ca6e5dffa8 namespace=k8s.io May 16 00:35:50.463988 env[1210]: time="2025-05-16T00:35:50.463995312Z" level=info msg="cleaning up dead shim" May 16 00:35:50.470670 env[1210]: time="2025-05-16T00:35:50.470631149Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3327 runtime=io.containerd.runc.v2\n" May 16 00:35:51.040370 kubelet[1415]: E0516 00:35:51.040321 1415 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:51.078565 kubelet[1415]: E0516 00:35:51.078519 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:51.080965 env[1210]: time="2025-05-16T00:35:51.080924433Z" level=info msg="StopPodSandbox for \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\"" May 16 00:35:51.081056 env[1210]: time="2025-05-16T00:35:51.081009916Z" level=info msg="TearDown network for 
sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" successfully" May 16 00:35:51.081056 env[1210]: time="2025-05-16T00:35:51.081047157Z" level=info msg="StopPodSandbox for \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" returns successfully" May 16 00:35:51.081420 env[1210]: time="2025-05-16T00:35:51.081381167Z" level=info msg="RemovePodSandbox for \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\"" May 16 00:35:51.081451 env[1210]: time="2025-05-16T00:35:51.081420328Z" level=info msg="Forcibly stopping sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\"" May 16 00:35:51.081503 env[1210]: time="2025-05-16T00:35:51.081488410Z" level=info msg="TearDown network for sandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" successfully" May 16 00:35:51.085142 env[1210]: time="2025-05-16T00:35:51.085114475Z" level=info msg="RemovePodSandbox \"a8a60a29bfbf91b1f494b2e176307e469e6f84425590bdbf21e7f7b4b1856d9c\" returns successfully" May 16 00:35:51.201714 kubelet[1415]: E0516 00:35:51.201671 1415 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:35:51.371505 kubelet[1415]: E0516 00:35:51.371150 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:51.373630 env[1210]: time="2025-05-16T00:35:51.373587247Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:35:51.386657 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3436775151.mount: Deactivated successfully. 
May 16 00:35:51.388454 env[1210]: time="2025-05-16T00:35:51.388235110Z" level=info msg="CreateContainer within sandbox \"1dc37d1c8de4564f56c0fc4affd9c403dd2e94bbc8a4a0f072754939796a7353\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2d3c20a186caa62a71067a2ec30821c014e20c079b9c46bae15456fb9c32e5d6\"" May 16 00:35:51.388735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2790126267.mount: Deactivated successfully. May 16 00:35:51.390176 env[1210]: time="2025-05-16T00:35:51.390137045Z" level=info msg="StartContainer for \"2d3c20a186caa62a71067a2ec30821c014e20c079b9c46bae15456fb9c32e5d6\"" May 16 00:35:51.403546 systemd[1]: Started cri-containerd-2d3c20a186caa62a71067a2ec30821c014e20c079b9c46bae15456fb9c32e5d6.scope. May 16 00:35:51.441703 env[1210]: time="2025-05-16T00:35:51.441642013Z" level=info msg="StartContainer for \"2d3c20a186caa62a71067a2ec30821c014e20c079b9c46bae15456fb9c32e5d6\" returns successfully" May 16 00:35:51.697899 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 16 00:35:52.079370 kubelet[1415]: E0516 00:35:52.079317 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:52.375547 kubelet[1415]: E0516 00:35:52.375358 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:52.390478 kubelet[1415]: I0516 00:35:52.390419 1415 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-xmncp" podStartSLOduration=5.390404168 podStartE2EDuration="5.390404168s" podCreationTimestamp="2025-05-16 00:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:35:52.389832752 +0000 UTC m=+62.082160929" watchObservedRunningTime="2025-05-16 
00:35:52.390404168 +0000 UTC m=+62.082732305" May 16 00:35:52.705436 kubelet[1415]: I0516 00:35:52.705121 1415 setters.go:600] "Node became not ready" node="10.0.0.35" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T00:35:52Z","lastTransitionTime":"2025-05-16T00:35:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 16 00:35:53.080455 kubelet[1415]: E0516 00:35:53.080408 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:53.718775 kubelet[1415]: E0516 00:35:53.718739 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:54.081215 kubelet[1415]: E0516 00:35:54.081163 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:54.538623 systemd-networkd[1040]: lxc_health: Link UP May 16 00:35:54.539754 systemd-networkd[1040]: lxc_health: Gained carrier May 16 00:35:54.539903 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:35:55.081740 kubelet[1415]: E0516 00:35:55.081680 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:55.568003 systemd-networkd[1040]: lxc_health: Gained IPv6LL May 16 00:35:55.719821 kubelet[1415]: E0516 00:35:55.719738 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:56.082826 kubelet[1415]: E0516 00:35:56.082769 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 
00:35:56.382982 kubelet[1415]: E0516 00:35:56.382656 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:57.083286 kubelet[1415]: E0516 00:35:57.083244 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:57.384700 kubelet[1415]: E0516 00:35:57.384395 1415 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:58.083431 kubelet[1415]: E0516 00:35:58.083396 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:35:59.084746 kubelet[1415]: E0516 00:35:59.084684 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:36:00.085862 kubelet[1415]: E0516 00:36:00.085802 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:36:00.275216 systemd[1]: run-containerd-runc-k8s.io-2d3c20a186caa62a71067a2ec30821c014e20c079b9c46bae15456fb9c32e5d6-runc.tdk1pF.mount: Deactivated successfully. May 16 00:36:01.087011 kubelet[1415]: E0516 00:36:01.086952 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 16 00:36:02.087908 kubelet[1415]: E0516 00:36:02.087837 1415 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"