May 8 00:49:44.721969 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 8 00:49:44.721987 kernel: Linux version 5.15.180-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Wed May 7 23:24:31 -00 2025
May 8 00:49:44.721995 kernel: efi: EFI v2.70 by EDK II
May 8 00:49:44.722001 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
May 8 00:49:44.722006 kernel: random: crng init done
May 8 00:49:44.722011 kernel: ACPI: Early table checksum verification disabled
May 8 00:49:44.722018 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
May 8 00:49:44.722024 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
May 8 00:49:44.722030 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:49:44.722035 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:49:44.722040 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:49:44.722045 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:49:44.722051 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:49:44.722056 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:49:44.722064 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:49:44.722070 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:49:44.722076 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 8 00:49:44.722081 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 8 00:49:44.722087 kernel: NUMA: Failed to initialise from firmware
May 8 00:49:44.722093 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:49:44.722098 kernel: NUMA: NODE_DATA [mem 0xdcb09900-0xdcb0efff]
May 8 00:49:44.722104 kernel: Zone ranges:
May 8 00:49:44.722109 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:49:44.722116 kernel: DMA32 empty
May 8 00:49:44.722122 kernel: Normal empty
May 8 00:49:44.722127 kernel: Movable zone start for each node
May 8 00:49:44.722133 kernel: Early memory node ranges
May 8 00:49:44.722138 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
May 8 00:49:44.722144 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
May 8 00:49:44.722150 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
May 8 00:49:44.722155 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
May 8 00:49:44.722161 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
May 8 00:49:44.722166 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
May 8 00:49:44.722172 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
May 8 00:49:44.722177 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 8 00:49:44.722184 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 8 00:49:44.722190 kernel: psci: probing for conduit method from ACPI.
May 8 00:49:44.722195 kernel: psci: PSCIv1.1 detected in firmware.
May 8 00:49:44.722201 kernel: psci: Using standard PSCI v0.2 function IDs
May 8 00:49:44.722206 kernel: psci: Trusted OS migration not required
May 8 00:49:44.722214 kernel: psci: SMC Calling Convention v1.1
May 8 00:49:44.722220 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 8 00:49:44.722228 kernel: ACPI: SRAT not present
May 8 00:49:44.722234 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880
May 8 00:49:44.722240 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096
May 8 00:49:44.722246 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 8 00:49:44.722252 kernel: Detected PIPT I-cache on CPU0
May 8 00:49:44.722258 kernel: CPU features: detected: GIC system register CPU interface
May 8 00:49:44.722264 kernel: CPU features: detected: Hardware dirty bit management
May 8 00:49:44.722270 kernel: CPU features: detected: Spectre-v4
May 8 00:49:44.722276 kernel: CPU features: detected: Spectre-BHB
May 8 00:49:44.722283 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 8 00:49:44.722289 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 8 00:49:44.722295 kernel: CPU features: detected: ARM erratum 1418040
May 8 00:49:44.722301 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 8 00:49:44.722307 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 8 00:49:44.722313 kernel: Policy zone: DMA
May 8 00:49:44.722320 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3816e7a7ab4f80032c381006006d7d5ba477c6a86a1527e782723d869b29d497
May 8 00:49:44.722335 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 8 00:49:44.722342 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 8 00:49:44.722348 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 8 00:49:44.722354 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 8 00:49:44.722362 kernel: Memory: 2457396K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36416K init, 777K bss, 114892K reserved, 0K cma-reserved)
May 8 00:49:44.722368 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 8 00:49:44.722374 kernel: trace event string verifier disabled
May 8 00:49:44.722380 kernel: rcu: Preemptible hierarchical RCU implementation.
May 8 00:49:44.722387 kernel: rcu: RCU event tracing is enabled.
May 8 00:49:44.722393 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 8 00:49:44.722400 kernel: Trampoline variant of Tasks RCU enabled.
May 8 00:49:44.722406 kernel: Tracing variant of Tasks RCU enabled.
May 8 00:49:44.722412 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 8 00:49:44.722418 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 8 00:49:44.722424 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 8 00:49:44.722447 kernel: GICv3: 256 SPIs implemented
May 8 00:49:44.722454 kernel: GICv3: 0 Extended SPIs implemented
May 8 00:49:44.722460 kernel: GICv3: Distributor has no Range Selector support
May 8 00:49:44.722466 kernel: Root IRQ handler: gic_handle_irq
May 8 00:49:44.722472 kernel: GICv3: 16 PPIs implemented
May 8 00:49:44.722478 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 8 00:49:44.722484 kernel: ACPI: SRAT not present
May 8 00:49:44.722490 kernel: ITS [mem 0x08080000-0x0809ffff]
May 8 00:49:44.722496 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
May 8 00:49:44.722503 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
May 8 00:49:44.722509 kernel: GICv3: using LPI property table @0x00000000400d0000
May 8 00:49:44.722515 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
May 8 00:49:44.722522 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:49:44.722528 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 8 00:49:44.722535 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 8 00:49:44.722541 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 8 00:49:44.722547 kernel: arm-pv: using stolen time PV
May 8 00:49:44.722554 kernel: Console: colour dummy device 80x25
May 8 00:49:44.722560 kernel: ACPI: Core revision 20210730
May 8 00:49:44.722566 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 8 00:49:44.722572 kernel: pid_max: default: 32768 minimum: 301
May 8 00:49:44.722578 kernel: LSM: Security Framework initializing
May 8 00:49:44.722585 kernel: SELinux: Initializing.
May 8 00:49:44.722592 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:49:44.722598 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 8 00:49:44.722604 kernel: rcu: Hierarchical SRCU implementation.
May 8 00:49:44.722610 kernel: Platform MSI: ITS@0x8080000 domain created
May 8 00:49:44.722616 kernel: PCI/MSI: ITS@0x8080000 domain created
May 8 00:49:44.722622 kernel: Remapping and enabling EFI services.
May 8 00:49:44.722629 kernel: smp: Bringing up secondary CPUs ...
May 8 00:49:44.722635 kernel: Detected PIPT I-cache on CPU1
May 8 00:49:44.722642 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 8 00:49:44.722648 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
May 8 00:49:44.722655 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:49:44.722661 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 8 00:49:44.722667 kernel: Detected PIPT I-cache on CPU2
May 8 00:49:44.722673 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 8 00:49:44.722680 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
May 8 00:49:44.722686 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:49:44.722692 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 8 00:49:44.722698 kernel: Detected PIPT I-cache on CPU3
May 8 00:49:44.722705 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 8 00:49:44.722711 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
May 8 00:49:44.722717 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 8 00:49:44.722723 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 8 00:49:44.722734 kernel: smp: Brought up 1 node, 4 CPUs
May 8 00:49:44.722741 kernel: SMP: Total of 4 processors activated.
May 8 00:49:44.722748 kernel: CPU features: detected: 32-bit EL0 Support
May 8 00:49:44.722754 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 8 00:49:44.722760 kernel: CPU features: detected: Common not Private translations
May 8 00:49:44.722767 kernel: CPU features: detected: CRC32 instructions
May 8 00:49:44.722773 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 8 00:49:44.722780 kernel: CPU features: detected: LSE atomic instructions
May 8 00:49:44.722787 kernel: CPU features: detected: Privileged Access Never
May 8 00:49:44.722794 kernel: CPU features: detected: RAS Extension Support
May 8 00:49:44.722800 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 8 00:49:44.722806 kernel: CPU: All CPU(s) started at EL1
May 8 00:49:44.722813 kernel: alternatives: patching kernel code
May 8 00:49:44.722821 kernel: devtmpfs: initialized
May 8 00:49:44.722827 kernel: KASLR enabled
May 8 00:49:44.722834 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 8 00:49:44.722840 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 8 00:49:44.722847 kernel: pinctrl core: initialized pinctrl subsystem
May 8 00:49:44.722853 kernel: SMBIOS 3.0.0 present.
May 8 00:49:44.722859 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
May 8 00:49:44.722866 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 8 00:49:44.722872 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 8 00:49:44.722880 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 8 00:49:44.722887 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 8 00:49:44.722894 kernel: audit: initializing netlink subsys (disabled)
May 8 00:49:44.722900 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
May 8 00:49:44.722907 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 8 00:49:44.722913 kernel: cpuidle: using governor menu
May 8 00:49:44.722919 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 8 00:49:44.722926 kernel: ASID allocator initialised with 32768 entries
May 8 00:49:44.722932 kernel: ACPI: bus type PCI registered
May 8 00:49:44.722940 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 8 00:49:44.722947 kernel: Serial: AMBA PL011 UART driver
May 8 00:49:44.722953 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 8 00:49:44.722960 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
May 8 00:49:44.722966 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 8 00:49:44.722973 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
May 8 00:49:44.722979 kernel: cryptd: max_cpu_qlen set to 1000
May 8 00:49:44.722986 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 8 00:49:44.722992 kernel: ACPI: Added _OSI(Module Device)
May 8 00:49:44.723000 kernel: ACPI: Added _OSI(Processor Device)
May 8 00:49:44.723006 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 8 00:49:44.723013 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 8 00:49:44.723019 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 8 00:49:44.723025 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 8 00:49:44.723032 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 8 00:49:44.723038 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 8 00:49:44.723045 kernel: ACPI: Interpreter enabled
May 8 00:49:44.723051 kernel: ACPI: Using GIC for interrupt routing
May 8 00:49:44.723062 kernel: ACPI: MCFG table detected, 1 entries
May 8 00:49:44.723069 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 8 00:49:44.723075 kernel: printk: console [ttyAMA0] enabled
May 8 00:49:44.723082 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 8 00:49:44.723210 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 8 00:49:44.725764 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 8 00:49:44.725847 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 8 00:49:44.725911 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 8 00:49:44.725968 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 8 00:49:44.725977 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 8 00:49:44.725983 kernel: PCI host bridge to bus 0000:00
May 8 00:49:44.726051 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 8 00:49:44.726106 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 8 00:49:44.726158 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 8 00:49:44.726212 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 8 00:49:44.726284 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 8 00:49:44.726369 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 8 00:49:44.726467 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 8 00:49:44.726533 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 8 00:49:44.726594 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:49:44.726653 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 8 00:49:44.726717 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 8 00:49:44.726777 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 8 00:49:44.726834 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 8 00:49:44.726888 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 8 00:49:44.726949 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 8 00:49:44.726958 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 8 00:49:44.726972 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 8 00:49:44.726979 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 8 00:49:44.726988 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 8 00:49:44.726995 kernel: iommu: Default domain type: Translated
May 8 00:49:44.727002 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 8 00:49:44.727008 kernel: vgaarb: loaded
May 8 00:49:44.727015 kernel: pps_core: LinuxPPS API ver. 1 registered
May 8 00:49:44.727021 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 8 00:49:44.727028 kernel: PTP clock support registered
May 8 00:49:44.727035 kernel: Registered efivars operations
May 8 00:49:44.727041 kernel: clocksource: Switched to clocksource arch_sys_counter
May 8 00:49:44.727049 kernel: VFS: Disk quotas dquot_6.6.0
May 8 00:49:44.727056 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 8 00:49:44.727063 kernel: pnp: PnP ACPI init
May 8 00:49:44.727132 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 8 00:49:44.727142 kernel: pnp: PnP ACPI: found 1 devices
May 8 00:49:44.727149 kernel: NET: Registered PF_INET protocol family
May 8 00:49:44.727156 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 8 00:49:44.727163 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 8 00:49:44.727171 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 8 00:49:44.727178 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 8 00:49:44.727185 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
May 8 00:49:44.727191 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 8 00:49:44.727198 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:49:44.727205 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 8 00:49:44.727211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 8 00:49:44.727218 kernel: PCI: CLS 0 bytes, default 64
May 8 00:49:44.727225 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 8 00:49:44.727233 kernel: kvm [1]: HYP mode not available
May 8 00:49:44.727239 kernel: Initialise system trusted keyrings
May 8 00:49:44.727246 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 8 00:49:44.727253 kernel: Key type asymmetric registered
May 8 00:49:44.727259 kernel: Asymmetric key parser 'x509' registered
May 8 00:49:44.727266 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 8 00:49:44.727273 kernel: io scheduler mq-deadline registered
May 8 00:49:44.727280 kernel: io scheduler kyber registered
May 8 00:49:44.727286 kernel: io scheduler bfq registered
May 8 00:49:44.727294 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 8 00:49:44.727301 kernel: ACPI: button: Power Button [PWRB]
May 8 00:49:44.727308 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 8 00:49:44.727379 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 8 00:49:44.727389 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 8 00:49:44.727396 kernel: thunder_xcv, ver 1.0
May 8 00:49:44.727403 kernel: thunder_bgx, ver 1.0
May 8 00:49:44.727409 kernel: nicpf, ver 1.0
May 8 00:49:44.727416 kernel: nicvf, ver 1.0
May 8 00:49:44.727497 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 8 00:49:44.727556 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-08T00:49:44 UTC (1746665384)
May 8 00:49:44.727566 kernel: hid: raw HID events driver (C) Jiri Kosina
May 8 00:49:44.727572 kernel: NET: Registered PF_INET6 protocol family
May 8 00:49:44.727579 kernel: Segment Routing with IPv6
May 8 00:49:44.727586 kernel: In-situ OAM (IOAM) with IPv6
May 8 00:49:44.727592 kernel: NET: Registered PF_PACKET protocol family
May 8 00:49:44.727599 kernel: Key type dns_resolver registered
May 8 00:49:44.727607 kernel: registered taskstats version 1
May 8 00:49:44.727614 kernel: Loading compiled-in X.509 certificates
May 8 00:49:44.727621 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.180-flatcar: 47302b466ab2df930dd804d2ee9c8ab44de4e2dc'
May 8 00:49:44.727627 kernel: Key type .fscrypt registered
May 8 00:49:44.727634 kernel: Key type fscrypt-provisioning registered
May 8 00:49:44.727641 kernel: ima: No TPM chip found, activating TPM-bypass!
May 8 00:49:44.727648 kernel: ima: Allocated hash algorithm: sha1
May 8 00:49:44.727654 kernel: ima: No architecture policies found
May 8 00:49:44.727661 kernel: clk: Disabling unused clocks
May 8 00:49:44.727669 kernel: Freeing unused kernel memory: 36416K
May 8 00:49:44.727676 kernel: Run /init as init process
May 8 00:49:44.727682 kernel: with arguments:
May 8 00:49:44.727689 kernel: /init
May 8 00:49:44.727696 kernel: with environment:
May 8 00:49:44.727702 kernel: HOME=/
May 8 00:49:44.727708 kernel: TERM=linux
May 8 00:49:44.727715 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 8 00:49:44.727723 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 8 00:49:44.727733 systemd[1]: Detected virtualization kvm.
May 8 00:49:44.727741 systemd[1]: Detected architecture arm64.
May 8 00:49:44.727748 systemd[1]: Running in initrd.
May 8 00:49:44.727755 systemd[1]: No hostname configured, using default hostname.
May 8 00:49:44.727762 systemd[1]: Hostname set to .
May 8 00:49:44.727769 systemd[1]: Initializing machine ID from VM UUID.
May 8 00:49:44.727776 systemd[1]: Queued start job for default target initrd.target.
May 8 00:49:44.727784 systemd[1]: Started systemd-ask-password-console.path.
May 8 00:49:44.727791 systemd[1]: Reached target cryptsetup.target.
May 8 00:49:44.727798 systemd[1]: Reached target paths.target.
May 8 00:49:44.727805 systemd[1]: Reached target slices.target.
May 8 00:49:44.727813 systemd[1]: Reached target swap.target.
May 8 00:49:44.727819 systemd[1]: Reached target timers.target.
May 8 00:49:44.727827 systemd[1]: Listening on iscsid.socket.
May 8 00:49:44.727835 systemd[1]: Listening on iscsiuio.socket.
May 8 00:49:44.727842 systemd[1]: Listening on systemd-journald-audit.socket.
May 8 00:49:44.727849 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 8 00:49:44.727856 systemd[1]: Listening on systemd-journald.socket.
May 8 00:49:44.727863 systemd[1]: Listening on systemd-networkd.socket.
May 8 00:49:44.727870 systemd[1]: Listening on systemd-udevd-control.socket.
May 8 00:49:44.727877 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 8 00:49:44.727885 systemd[1]: Reached target sockets.target.
May 8 00:49:44.727892 systemd[1]: Starting kmod-static-nodes.service...
May 8 00:49:44.727900 systemd[1]: Finished network-cleanup.service.
May 8 00:49:44.727907 systemd[1]: Starting systemd-fsck-usr.service...
May 8 00:49:44.727914 systemd[1]: Starting systemd-journald.service...
May 8 00:49:44.727921 systemd[1]: Starting systemd-modules-load.service...
May 8 00:49:44.727928 systemd[1]: Starting systemd-resolved.service...
May 8 00:49:44.727935 systemd[1]: Starting systemd-vconsole-setup.service...
May 8 00:49:44.727942 systemd[1]: Finished kmod-static-nodes.service.
May 8 00:49:44.727949 systemd[1]: Finished systemd-fsck-usr.service.
May 8 00:49:44.727956 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 8 00:49:44.727965 systemd[1]: Finished systemd-vconsole-setup.service.
May 8 00:49:44.727973 kernel: audit: type=1130 audit(1746665384.723:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.727980 systemd[1]: Starting dracut-cmdline-ask.service...
May 8 00:49:44.727990 systemd-journald[290]: Journal started
May 8 00:49:44.728030 systemd-journald[290]: Runtime Journal (/run/log/journal/a993dbbd2e404dd9bab1b0b5030a067f) is 6.0M, max 48.7M, 42.6M free.
May 8 00:49:44.723000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.717676 systemd-modules-load[291]: Inserted module 'overlay'
May 8 00:49:44.729670 systemd[1]: Started systemd-journald.service.
May 8 00:49:44.729000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.735565 kernel: audit: type=1130 audit(1746665384.729:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.735595 kernel: audit: type=1130 audit(1746665384.733:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.733120 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 8 00:49:44.736444 systemd-resolved[292]: Positive Trust Anchors:
May 8 00:49:44.736453 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 8 00:49:44.736481 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 8 00:49:44.740602 systemd-resolved[292]: Defaulting to hostname 'linux'.
May 8 00:49:44.747406 systemd[1]: Started systemd-resolved.service.
May 8 00:49:44.753343 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 8 00:49:44.753361 kernel: audit: type=1130 audit(1746665384.748:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.748821 systemd[1]: Reached target nss-lookup.target.
May 8 00:49:44.755549 systemd-modules-load[291]: Inserted module 'br_netfilter'
May 8 00:49:44.756939 kernel: Bridge firewalling registered
May 8 00:49:44.757117 systemd[1]: Finished dracut-cmdline-ask.service.
May 8 00:49:44.757000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.761470 kernel: audit: type=1130 audit(1746665384.757:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.761651 systemd[1]: Starting dracut-cmdline.service...
May 8 00:49:44.766675 kernel: SCSI subsystem initialized
May 8 00:49:44.770394 dracut-cmdline[308]: dracut-dracut-053
May 8 00:49:44.772568 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=3816e7a7ab4f80032c381006006d7d5ba477c6a86a1527e782723d869b29d497
May 8 00:49:44.777927 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 8 00:49:44.777944 kernel: device-mapper: uevent: version 1.0.3
May 8 00:49:44.777952 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
May 8 00:49:44.781138 systemd-modules-load[291]: Inserted module 'dm_multipath'
May 8 00:49:44.781887 systemd[1]: Finished systemd-modules-load.service.
May 8 00:49:44.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.786453 kernel: audit: type=1130 audit(1746665384.782:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.786509 systemd[1]: Starting systemd-sysctl.service...
May 8 00:49:44.793181 systemd[1]: Finished systemd-sysctl.service.
May 8 00:49:44.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.797459 kernel: audit: type=1130 audit(1746665384.793:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.840445 kernel: Loading iSCSI transport class v2.0-870.
May 8 00:49:44.852454 kernel: iscsi: registered transport (tcp)
May 8 00:49:44.867454 kernel: iscsi: registered transport (qla4xxx)
May 8 00:49:44.867473 kernel: QLogic iSCSI HBA Driver
May 8 00:49:44.900038 systemd[1]: Finished dracut-cmdline.service.
May 8 00:49:44.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.901581 systemd[1]: Starting dracut-pre-udev.service...
May 8 00:49:44.905090 kernel: audit: type=1130 audit(1746665384.900:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:44.944460 kernel: raid6: neonx8 gen() 13816 MB/s
May 8 00:49:44.961458 kernel: raid6: neonx8 xor() 10833 MB/s
May 8 00:49:44.978451 kernel: raid6: neonx4 gen() 13567 MB/s
May 8 00:49:44.995461 kernel: raid6: neonx4 xor() 11289 MB/s
May 8 00:49:45.012450 kernel: raid6: neonx2 gen() 12971 MB/s
May 8 00:49:45.029451 kernel: raid6: neonx2 xor() 10600 MB/s
May 8 00:49:45.046451 kernel: raid6: neonx1 gen() 10545 MB/s
May 8 00:49:45.063462 kernel: raid6: neonx1 xor() 8762 MB/s
May 8 00:49:45.080449 kernel: raid6: int64x8 gen() 6276 MB/s
May 8 00:49:45.097458 kernel: raid6: int64x8 xor() 3545 MB/s
May 8 00:49:45.114450 kernel: raid6: int64x4 gen() 7214 MB/s
May 8 00:49:45.131450 kernel: raid6: int64x4 xor() 3854 MB/s
May 8 00:49:45.148448 kernel: raid6: int64x2 gen() 6153 MB/s
May 8 00:49:45.165461 kernel: raid6: int64x2 xor() 3317 MB/s
May 8 00:49:45.182461 kernel: raid6: int64x1 gen() 5043 MB/s
May 8 00:49:45.199519 kernel: raid6: int64x1 xor() 2646 MB/s
May 8 00:49:45.199531 kernel: raid6: using algorithm neonx8 gen() 13816 MB/s
May 8 00:49:45.199539 kernel: raid6: .... xor() 10833 MB/s, rmw enabled
May 8 00:49:45.200596 kernel: raid6: using neon recovery algorithm
May 8 00:49:45.212945 kernel: xor: measuring software checksum speed
May 8 00:49:45.212970 kernel: 8regs : 16797 MB/sec
May 8 00:49:45.213600 kernel: 32regs : 20697 MB/sec
May 8 00:49:45.214825 kernel: arm64_neon : 27832 MB/sec
May 8 00:49:45.214839 kernel: xor: using function: arm64_neon (27832 MB/sec)
May 8 00:49:45.268467 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
May 8 00:49:45.278649 systemd[1]: Finished dracut-pre-udev.service.
May 8 00:49:45.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:45.282458 kernel: audit: type=1130 audit(1746665385.278:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:45.281000 audit: BPF prog-id=7 op=LOAD
May 8 00:49:45.281000 audit: BPF prog-id=8 op=LOAD
May 8 00:49:45.282779 systemd[1]: Starting systemd-udevd.service...
May 8 00:49:45.295115 systemd-udevd[492]: Using default interface naming scheme 'v252'.
May 8 00:49:45.298414 systemd[1]: Started systemd-udevd.service.
May 8 00:49:45.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:45.300990 systemd[1]: Starting dracut-pre-trigger.service...
May 8 00:49:45.312438 dracut-pre-trigger[503]: rd.md=0: removing MD RAID activation
May 8 00:49:45.337017 systemd[1]: Finished dracut-pre-trigger.service.
May 8 00:49:45.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:45.338495 systemd[1]: Starting systemd-udev-trigger.service...
May 8 00:49:45.370489 systemd[1]: Finished systemd-udev-trigger.service.
May 8 00:49:45.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:45.411403 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 8 00:49:45.416799 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 8 00:49:45.416813 kernel: GPT:9289727 != 19775487
May 8 00:49:45.416822 kernel: GPT:Alternate GPT header not at the end of the disk.
May 8 00:49:45.416830 kernel: GPT:9289727 != 19775487
May 8 00:49:45.416838 kernel: GPT: Use GNU Parted to correct GPT errors.
May 8 00:49:45.416846 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:49:45.427144 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
May 8 00:49:45.428194 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
May 8 00:49:45.433453 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (551)
May 8 00:49:45.436289 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
May 8 00:49:45.441577 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
May 8 00:49:45.444963 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 8 00:49:45.446627 systemd[1]: Starting disk-uuid.service...
May 8 00:49:45.452358 disk-uuid[562]: Primary Header is updated.
May 8 00:49:45.452358 disk-uuid[562]: Secondary Entries is updated.
May 8 00:49:45.452358 disk-uuid[562]: Secondary Header is updated.
May 8 00:49:45.456447 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:49:45.463451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:49:45.465451 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:49:46.466047 disk-uuid[563]: The operation has completed successfully.
May 8 00:49:46.467335 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 8 00:49:46.491781 systemd[1]: disk-uuid.service: Deactivated successfully.
May 8 00:49:46.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.491877 systemd[1]: Finished disk-uuid.service.
May 8 00:49:46.493451 systemd[1]: Starting verity-setup.service...
May 8 00:49:46.507463 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
May 8 00:49:46.530525 systemd[1]: Found device dev-mapper-usr.device.
May 8 00:49:46.532103 systemd[1]: Mounting sysusr-usr.mount...
May 8 00:49:46.533073 systemd[1]: Finished verity-setup.service.
May 8 00:49:46.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.580451 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
May 8 00:49:46.580886 systemd[1]: Mounted sysusr-usr.mount.
May 8 00:49:46.581789 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
May 8 00:49:46.582496 systemd[1]: Starting ignition-setup.service...
May 8 00:49:46.584955 systemd[1]: Starting parse-ip-for-networkd.service...
May 8 00:49:46.590992 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:49:46.591023 kernel: BTRFS info (device vda6): using free space tree
May 8 00:49:46.591038 kernel: BTRFS info (device vda6): has skinny extents
May 8 00:49:46.599171 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 8 00:49:46.605148 systemd[1]: Finished ignition-setup.service.
May 8 00:49:46.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.606704 systemd[1]: Starting ignition-fetch-offline.service...
May 8 00:49:46.666845 systemd[1]: Finished parse-ip-for-networkd.service.
May 8 00:49:46.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.668000 audit: BPF prog-id=9 op=LOAD
May 8 00:49:46.669985 systemd[1]: Starting systemd-networkd.service...
May 8 00:49:46.685838 ignition[646]: Ignition 2.14.0
May 8 00:49:46.685849 ignition[646]: Stage: fetch-offline
May 8 00:49:46.685895 ignition[646]: no configs at "/usr/lib/ignition/base.d"
May 8 00:49:46.685904 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:49:46.686033 ignition[646]: parsed url from cmdline: ""
May 8 00:49:46.686037 ignition[646]: no config URL provided
May 8 00:49:46.686041 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
May 8 00:49:46.686048 ignition[646]: no config at "/usr/lib/ignition/user.ign"
May 8 00:49:46.686067 ignition[646]: op(1): [started] loading QEMU firmware config module
May 8 00:49:46.692478 systemd-networkd[739]: lo: Link UP
May 8 00:49:46.694000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.686071 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 8 00:49:46.692482 systemd-networkd[739]: lo: Gained carrier
May 8 00:49:46.692829 systemd-networkd[739]: Enumeration completed
May 8 00:49:46.692954 systemd[1]: Started systemd-networkd.service.
May 8 00:49:46.693003 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 8 00:49:46.694038 systemd-networkd[739]: eth0: Link UP
May 8 00:49:46.697787 ignition[646]: op(1): [finished] loading QEMU firmware config module
May 8 00:49:46.694042 systemd-networkd[739]: eth0: Gained carrier
May 8 00:49:46.694580 systemd[1]: Reached target network.target.
May 8 00:49:46.698761 systemd[1]: Starting iscsiuio.service...
May 8 00:49:46.707423 systemd[1]: Started iscsiuio.service.
May 8 00:49:46.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.709190 systemd[1]: Starting iscsid.service...
May 8 00:49:46.712330 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
May 8 00:49:46.712330 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
May 8 00:49:46.712330 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
May 8 00:49:46.712330 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
May 8 00:49:46.712330 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
May 8 00:49:46.720000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.724636 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
May 8 00:49:46.715204 systemd[1]: Started iscsid.service.
May 8 00:49:46.719515 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 8 00:49:46.720885 systemd[1]: Starting dracut-initqueue.service...
May 8 00:49:46.730561 systemd[1]: Finished dracut-initqueue.service.
May 8 00:49:46.731000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.731601 systemd[1]: Reached target remote-fs-pre.target.
May 8 00:49:46.733132 systemd[1]: Reached target remote-cryptsetup.target.
May 8 00:49:46.734840 systemd[1]: Reached target remote-fs.target.
May 8 00:49:46.737070 systemd[1]: Starting dracut-pre-mount.service...
May 8 00:49:46.744712 systemd[1]: Finished dracut-pre-mount.service.
May 8 00:49:46.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.760082 ignition[646]: parsing config with SHA512: d9e143c5c21204e452a82638e6ffcffb3479ba2f0cb5fc832e50121329b85055998469d4e0a4177579ec8ab043b628e3e58744442e36ccf12ff16d342650812d
May 8 00:49:46.766928 unknown[646]: fetched base config from "system"
May 8 00:49:46.766940 unknown[646]: fetched user config from "qemu"
May 8 00:49:46.767476 ignition[646]: fetch-offline: fetch-offline passed
May 8 00:49:46.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.768513 systemd[1]: Finished ignition-fetch-offline.service.
May 8 00:49:46.767532 ignition[646]: Ignition finished successfully
May 8 00:49:46.770078 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 8 00:49:46.770833 systemd[1]: Starting ignition-kargs.service...
May 8 00:49:46.779663 ignition[760]: Ignition 2.14.0
May 8 00:49:46.779673 ignition[760]: Stage: kargs
May 8 00:49:46.779759 ignition[760]: no configs at "/usr/lib/ignition/base.d"
May 8 00:49:46.781929 systemd[1]: Finished ignition-kargs.service.
May 8 00:49:46.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.779769 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:49:46.780614 ignition[760]: kargs: kargs passed
May 8 00:49:46.784217 systemd[1]: Starting ignition-disks.service...
May 8 00:49:46.780658 ignition[760]: Ignition finished successfully
May 8 00:49:46.790562 ignition[766]: Ignition 2.14.0
May 8 00:49:46.790571 ignition[766]: Stage: disks
May 8 00:49:46.790660 ignition[766]: no configs at "/usr/lib/ignition/base.d"
May 8 00:49:46.792589 systemd[1]: Finished ignition-disks.service.
May 8 00:49:46.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.790669 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:49:46.794139 systemd[1]: Reached target initrd-root-device.target.
May 8 00:49:46.791549 ignition[766]: disks: disks passed
May 8 00:49:46.795445 systemd[1]: Reached target local-fs-pre.target.
May 8 00:49:46.791591 ignition[766]: Ignition finished successfully
May 8 00:49:46.797083 systemd[1]: Reached target local-fs.target.
May 8 00:49:46.798467 systemd[1]: Reached target sysinit.target.
May 8 00:49:46.799691 systemd[1]: Reached target basic.target.
May 8 00:49:46.801815 systemd[1]: Starting systemd-fsck-root.service...
May 8 00:49:46.811994 systemd-fsck[774]: ROOT: clean, 623/553520 files, 56022/553472 blocks
May 8 00:49:46.815695 systemd[1]: Finished systemd-fsck-root.service.
May 8 00:49:46.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.818183 systemd[1]: Mounting sysroot.mount...
May 8 00:49:46.824452 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
May 8 00:49:46.824860 systemd[1]: Mounted sysroot.mount.
May 8 00:49:46.825651 systemd[1]: Reached target initrd-root-fs.target.
May 8 00:49:46.827856 systemd[1]: Mounting sysroot-usr.mount...
May 8 00:49:46.828736 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
May 8 00:49:46.828771 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 8 00:49:46.828795 systemd[1]: Reached target ignition-diskful.target.
May 8 00:49:46.830679 systemd[1]: Mounted sysroot-usr.mount.
May 8 00:49:46.832500 systemd[1]: Starting initrd-setup-root.service...
May 8 00:49:46.836920 initrd-setup-root[784]: cut: /sysroot/etc/passwd: No such file or directory
May 8 00:49:46.840754 initrd-setup-root[792]: cut: /sysroot/etc/group: No such file or directory
May 8 00:49:46.844507 initrd-setup-root[800]: cut: /sysroot/etc/shadow: No such file or directory
May 8 00:49:46.848452 initrd-setup-root[808]: cut: /sysroot/etc/gshadow: No such file or directory
May 8 00:49:46.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.872850 systemd[1]: Finished initrd-setup-root.service.
May 8 00:49:46.874397 systemd[1]: Starting ignition-mount.service...
May 8 00:49:46.875786 systemd[1]: Starting sysroot-boot.service...
May 8 00:49:46.880707 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
May 8 00:49:46.888196 ignition[828]: INFO : Ignition 2.14.0
May 8 00:49:46.888196 ignition[828]: INFO : Stage: mount
May 8 00:49:46.889840 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:49:46.889840 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:49:46.889840 ignition[828]: INFO : mount: mount passed
May 8 00:49:46.889840 ignition[828]: INFO : Ignition finished successfully
May 8 00:49:46.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:46.890001 systemd[1]: Finished ignition-mount.service.
May 8 00:49:46.896242 systemd[1]: Finished sysroot-boot.service.
May 8 00:49:46.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:47.540185 systemd[1]: Mounting sysroot-usr-share-oem.mount...
May 8 00:49:47.547205 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836)
May 8 00:49:47.547250 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 8 00:49:47.547269 kernel: BTRFS info (device vda6): using free space tree
May 8 00:49:47.547877 kernel: BTRFS info (device vda6): has skinny extents
May 8 00:49:47.551300 systemd[1]: Mounted sysroot-usr-share-oem.mount.
May 8 00:49:47.552904 systemd[1]: Starting ignition-files.service...
May 8 00:49:47.566130 ignition[856]: INFO : Ignition 2.14.0
May 8 00:49:47.566130 ignition[856]: INFO : Stage: files
May 8 00:49:47.567798 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d"
May 8 00:49:47.567798 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 8 00:49:47.567798 ignition[856]: DEBUG : files: compiled without relabeling support, skipping
May 8 00:49:47.571247 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 8 00:49:47.571247 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 8 00:49:47.571247 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 8 00:49:47.571247 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 8 00:49:47.571247 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 8 00:49:47.571247 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:49:47.571247 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 8 00:49:47.570602 unknown[856]: wrote ssh authorized keys file for user: core
May 8 00:49:48.121614 systemd-networkd[739]: eth0: Gained IPv6LL
May 8 00:49:48.564244 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 8 00:49:49.868371 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 8 00:49:49.870529 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:49:49.870529 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 8 00:49:50.219588 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 8 00:49:50.324030 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 00:49:50.325854 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 8 00:49:50.615953 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 8 00:49:51.219174 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 8 00:49:51.219174 ignition[856]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
May 8 00:49:51.222910 ignition[856]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:49:51.254628 ignition[856]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 8 00:49:51.257108 ignition[856]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
May 8 00:49:51.257108 ignition[856]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:49:51.257108 ignition[856]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 8 00:49:51.257108 ignition[856]: INFO : files: files passed
May 8 00:49:51.257108 ignition[856]: INFO : Ignition finished successfully
May 8 00:49:51.269070 kernel: kauditd_printk_skb: 23 callbacks suppressed
May 8 00:49:51.269092 kernel: audit: type=1130 audit(1746665391.259:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.257282 systemd[1]: Finished ignition-files.service.
May 8 00:49:51.260253 systemd[1]: Starting initrd-setup-root-after-ignition.service...
May 8 00:49:51.276181 kernel: audit: type=1130 audit(1746665391.270:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.276200 kernel: audit: type=1131 audit(1746665391.270:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.270000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.276326 initrd-setup-root-after-ignition[881]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
May 8 00:49:51.280748 kernel: audit: type=1130 audit(1746665391.276:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.276000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.264827 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
May 8 00:49:51.283200 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 8 00:49:51.265605 systemd[1]: Starting ignition-quench.service...
May 8 00:49:51.269338 systemd[1]: ignition-quench.service: Deactivated successfully.
May 8 00:49:51.269446 systemd[1]: Finished ignition-quench.service.
May 8 00:49:51.271263 systemd[1]: Finished initrd-setup-root-after-ignition.service.
May 8 00:49:51.277185 systemd[1]: Reached target ignition-complete.target.
May 8 00:49:51.282141 systemd[1]: Starting initrd-parse-etc.service...
May 8 00:49:51.294156 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 8 00:49:51.294246 systemd[1]: Finished initrd-parse-etc.service.
May 8 00:49:51.295000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.295000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.296042 systemd[1]: Reached target initrd-fs.target.
May 8 00:49:51.302796 kernel: audit: type=1130 audit(1746665391.295:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.302817 kernel: audit: type=1131 audit(1746665391.295:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.302128 systemd[1]: Reached target initrd.target.
May 8 00:49:51.303528 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
May 8 00:49:51.304351 systemd[1]: Starting dracut-pre-pivot.service...
May 8 00:49:51.314694 systemd[1]: Finished dracut-pre-pivot.service.
May 8 00:49:51.315000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.316396 systemd[1]: Starting initrd-cleanup.service...
May 8 00:49:51.319864 kernel: audit: type=1130 audit(1746665391.315:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.324587 systemd[1]: Stopped target nss-lookup.target.
May 8 00:49:51.325460 systemd[1]: Stopped target remote-cryptsetup.target.
May 8 00:49:51.326926 systemd[1]: Stopped target timers.target.
May 8 00:49:51.328294 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 8 00:49:51.329000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.328412 systemd[1]: Stopped dracut-pre-pivot.service.
May 8 00:49:51.333985 kernel: audit: type=1131 audit(1746665391.329:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.329736 systemd[1]: Stopped target initrd.target.
May 8 00:49:51.333417 systemd[1]: Stopped target basic.target.
May 8 00:49:51.334698 systemd[1]: Stopped target ignition-complete.target.
May 8 00:49:51.336134 systemd[1]: Stopped target ignition-diskful.target.
May 8 00:49:51.337552 systemd[1]: Stopped target initrd-root-device.target.
May 8 00:49:51.339024 systemd[1]: Stopped target remote-fs.target.
May 8 00:49:51.340354 systemd[1]: Stopped target remote-fs-pre.target.
May 8 00:49:51.341804 systemd[1]: Stopped target sysinit.target.
May 8 00:49:51.343085 systemd[1]: Stopped target local-fs.target.
May 8 00:49:51.344415 systemd[1]: Stopped target local-fs-pre.target.
May 8 00:49:51.345763 systemd[1]: Stopped target swap.target.
May 8 00:49:51.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 8 00:49:51.346992 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 8 00:49:51.352794 kernel: audit: type=1131 audit(1746665391.347:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.347104 systemd[1]: Stopped dracut-pre-mount.service. May 8 00:49:51.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.356479 kernel: audit: type=1131 audit(1746665391.353:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.348489 systemd[1]: Stopped target cryptsetup.target. May 8 00:49:51.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.352027 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 8 00:49:51.352131 systemd[1]: Stopped dracut-initqueue.service. May 8 00:49:51.353635 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 8 00:49:51.353735 systemd[1]: Stopped ignition-fetch-offline.service. May 8 00:49:51.357424 systemd[1]: Stopped target paths.target. May 8 00:49:51.358662 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 8 00:49:51.362475 systemd[1]: Stopped systemd-ask-password-console.path. May 8 00:49:51.363626 systemd[1]: Stopped target slices.target. May 8 00:49:51.365144 systemd[1]: Stopped target sockets.target. May 8 00:49:51.366540 systemd[1]: iscsid.socket: Deactivated successfully. May 8 00:49:51.369000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.366625 systemd[1]: Closed iscsid.socket. May 8 00:49:51.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.367787 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 8 00:49:51.367885 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 8 00:49:51.369281 systemd[1]: ignition-files.service: Deactivated successfully. May 8 00:49:51.369372 systemd[1]: Stopped ignition-files.service. May 8 00:49:51.371557 systemd[1]: Stopping ignition-mount.service... May 8 00:49:51.373064 systemd[1]: Stopping iscsiuio.service... May 8 00:49:51.378365 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 8 00:49:51.379395 ignition[896]: INFO : Ignition 2.14.0 May 8 00:49:51.379395 ignition[896]: INFO : Stage: umount May 8 00:49:51.379395 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" May 8 00:49:51.379395 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 8 00:49:51.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:51.384899 ignition[896]: INFO : umount: umount passed May 8 00:49:51.384899 ignition[896]: INFO : Ignition finished successfully May 8 00:49:51.385000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.379863 systemd[1]: Stopped kmod-static-nodes.service. May 8 00:49:51.382505 systemd[1]: Stopping sysroot-boot.service... May 8 00:49:51.383799 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 8 00:49:51.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.383929 systemd[1]: Stopped systemd-udev-trigger.service. May 8 00:49:51.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.385729 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 8 00:49:51.385827 systemd[1]: Stopped dracut-pre-trigger.service. May 8 00:49:51.388964 systemd[1]: iscsiuio.service: Deactivated successfully. May 8 00:49:51.389127 systemd[1]: Stopped iscsiuio.service. May 8 00:49:51.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.390851 systemd[1]: ignition-mount.service: Deactivated successfully. May 8 00:49:51.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.390935 systemd[1]: Stopped ignition-mount.service. May 8 00:49:51.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.393258 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 8 00:49:51.393824 systemd[1]: Stopped target network.target. May 8 00:49:51.394849 systemd[1]: iscsiuio.socket: Deactivated successfully. May 8 00:49:51.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.404000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.394884 systemd[1]: Closed iscsiuio.socket. May 8 00:49:51.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.396175 systemd[1]: ignition-disks.service: Deactivated successfully. May 8 00:49:51.396217 systemd[1]: Stopped ignition-disks.service. 
May 8 00:49:51.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.397829 systemd[1]: ignition-kargs.service: Deactivated successfully. May 8 00:49:51.397870 systemd[1]: Stopped ignition-kargs.service. May 8 00:49:51.399175 systemd[1]: ignition-setup.service: Deactivated successfully. May 8 00:49:51.413000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.399218 systemd[1]: Stopped ignition-setup.service. May 8 00:49:51.414000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.400886 systemd[1]: Stopping systemd-networkd.service... May 8 00:49:51.402412 systemd[1]: Stopping systemd-resolved.service... May 8 00:49:51.404113 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 8 00:49:51.404197 systemd[1]: Finished initrd-cleanup.service. May 8 00:49:51.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.405525 systemd[1]: sysroot-boot.service: Deactivated successfully. May 8 00:49:51.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.422000 audit: BPF prog-id=6 op=UNLOAD May 8 00:49:51.405606 systemd[1]: Stopped sysroot-boot.service. May 8 00:49:51.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.407622 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 8 00:49:51.407667 systemd[1]: Stopped initrd-setup-root.service. May 8 00:49:51.411044 systemd-networkd[739]: eth0: DHCPv6 lease lost May 8 00:49:51.429000 audit: BPF prog-id=9 op=UNLOAD May 8 00:49:51.412192 systemd[1]: systemd-networkd.service: Deactivated successfully. May 8 00:49:51.412287 systemd[1]: Stopped systemd-networkd.service. May 8 00:49:51.414042 systemd[1]: systemd-resolved.service: Deactivated successfully. May 8 00:49:51.414128 systemd[1]: Stopped systemd-resolved.service. May 8 00:49:51.415703 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 8 00:49:51.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.415732 systemd[1]: Closed systemd-networkd.socket. May 8 00:49:51.436000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.417599 systemd[1]: Stopping network-cleanup.service... May 8 00:49:51.419046 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 8 00:49:51.419117 systemd[1]: Stopped parse-ip-for-networkd.service. 
May 8 00:49:51.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.420889 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:49:51.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.420931 systemd[1]: Stopped systemd-sysctl.service. May 8 00:49:51.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.423208 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 8 00:49:51.423251 systemd[1]: Stopped systemd-modules-load.service. May 8 00:49:51.424340 systemd[1]: Stopping systemd-udevd.service... May 8 00:49:51.448000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.429768 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 8 00:49:51.433545 systemd[1]: network-cleanup.service: Deactivated successfully. May 8 00:49:51.433659 systemd[1]: Stopped network-cleanup.service. May 8 00:49:51.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.452000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:51.435402 systemd[1]: systemd-udevd.service: Deactivated successfully. May 8 00:49:51.435573 systemd[1]: Stopped systemd-udevd.service. May 8 00:49:51.436872 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 8 00:49:51.436902 systemd[1]: Closed systemd-udevd-control.socket. May 8 00:49:51.438054 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 8 00:49:51.438084 systemd[1]: Closed systemd-udevd-kernel.socket. May 8 00:49:51.439508 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 8 00:49:51.439554 systemd[1]: Stopped dracut-pre-udev.service. May 8 00:49:51.441206 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 8 00:49:51.441246 systemd[1]: Stopped dracut-cmdline.service. May 8 00:49:51.442728 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 8 00:49:51.442767 systemd[1]: Stopped dracut-cmdline-ask.service. May 8 00:49:51.445077 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 8 00:49:51.446984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 8 00:49:51.447051 systemd[1]: Stopped systemd-vconsole-setup.service. May 8 00:49:51.450757 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 8 00:49:51.450853 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 8 00:49:51.452566 systemd[1]: Reached target initrd-switch-root.target. May 8 00:49:51.454789 systemd[1]: Starting initrd-switch-root.service... May 8 00:49:51.461983 systemd[1]: Switching root. 
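Every unit transition in the teardown above is mirrored by a kernel audit record (the type=1130/1131 lines, i.e. SERVICE_START and SERVICE_STOP, all res=success here). A hedged sketch for tallying those records, again assuming one record per line; AUDIT_RE and tally are invented names:

    import re
    from collections import Counter

    # Hypothetical tally of systemd service audit records like those above.
    AUDIT_RE = re.compile(
        r"audit\[1\]: (SERVICE_START|SERVICE_STOP) .*?"
        r"msg='unit=([^ ]+) .*?res=(\w+)'"
    )

    def tally(journal_text):
        counts = Counter()
        for event, _unit, result in AUDIT_RE.findall(journal_text):
            counts[event, result] += 1
        return counts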
May 8 00:49:51.480005 iscsid[745]: iscsid shutting down. May 8 00:49:51.480721 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 8 00:49:51.480769 systemd-journald[290]: Journal stopped May 8 00:49:53.425975 kernel: SELinux: Class mctp_socket not defined in policy. May 8 00:49:53.427709 kernel: SELinux: Class anon_inode not defined in policy. May 8 00:49:53.427729 kernel: SELinux: the above unknown classes and permissions will be allowed May 8 00:49:53.427740 kernel: SELinux: policy capability network_peer_controls=1 May 8 00:49:53.427752 kernel: SELinux: policy capability open_perms=1 May 8 00:49:53.427761 kernel: SELinux: policy capability extended_socket_class=1 May 8 00:49:53.427775 kernel: SELinux: policy capability always_check_network=0 May 8 00:49:53.427785 kernel: SELinux: policy capability cgroup_seclabel=1 May 8 00:49:53.427795 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 8 00:49:53.427804 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 8 00:49:53.427827 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 8 00:49:53.427840 systemd[1]: Successfully loaded SELinux policy in 34.659ms. May 8 00:49:53.427857 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.174ms. May 8 00:49:53.427872 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 8 00:49:53.427884 systemd[1]: Detected virtualization kvm. May 8 00:49:53.427896 systemd[1]: Detected architecture arm64. May 8 00:49:53.427906 systemd[1]: Detected first boot. May 8 00:49:53.427917 systemd[1]: Initializing machine ID from VM UUID. May 8 00:49:53.427927 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 8 00:49:53.427937 systemd[1]: Populated /etc with preset unit settings. May 8 00:49:53.427948 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:49:53.427963 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:49:53.427978 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:49:53.427994 systemd[1]: iscsid.service: Deactivated successfully. May 8 00:49:53.428004 systemd[1]: Stopped iscsid.service. May 8 00:49:53.428015 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 8 00:49:53.428026 systemd[1]: Stopped initrd-switch-root.service. May 8 00:49:53.428036 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 8 00:49:53.428047 systemd[1]: Created slice system-addon\x2dconfig.slice. May 8 00:49:53.428058 systemd[1]: Created slice system-addon\x2drun.slice. May 8 00:49:53.428068 systemd[1]: Created slice system-getty.slice. May 8 00:49:53.428078 systemd[1]: Created slice system-modprobe.slice. May 8 00:49:53.428089 systemd[1]: Created slice system-serial\x2dgetty.slice. May 8 00:49:53.428099 systemd[1]: Created slice system-system\x2dcloudinit.slice. 
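The jump from the last initrd timestamp (00:49:51.480769, journal stopped) to the first post-pivot one (00:49:53.425975, the SELinux policy messages) puts the root switch plus policy load at roughly 1.9 s, of which the policy load itself is only the reported 34.659 ms. The deltas can be read straight off the stamps, for example:

    from datetime import datetime

    # Milliseconds between two journal timestamps (year not logged, same year assumed).
    FMT = "%b %d %H:%M:%S.%f"

    def delta_ms(a, b):
        return (datetime.strptime(b, FMT)
                - datetime.strptime(a, FMT)).total_seconds() * 1000

    print(delta_ms("May 8 00:49:51.480769", "May 8 00:49:53.425975"))  # ~1945.2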
May 8 00:49:53.428110 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 8 00:49:53.428122 systemd[1]: Created slice user.slice. May 8 00:49:53.428132 systemd[1]: Started systemd-ask-password-console.path. May 8 00:49:53.428143 systemd[1]: Started systemd-ask-password-wall.path. May 8 00:49:53.428153 systemd[1]: Set up automount boot.automount. May 8 00:49:53.428163 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 8 00:49:53.428174 systemd[1]: Stopped target initrd-switch-root.target. May 8 00:49:53.428184 systemd[1]: Stopped target initrd-fs.target. May 8 00:49:53.428196 systemd[1]: Stopped target initrd-root-fs.target. May 8 00:49:53.428207 systemd[1]: Reached target integritysetup.target. May 8 00:49:53.428217 systemd[1]: Reached target remote-cryptsetup.target. May 8 00:49:53.428227 systemd[1]: Reached target remote-fs.target. May 8 00:49:53.428238 systemd[1]: Reached target slices.target. May 8 00:49:53.428249 systemd[1]: Reached target swap.target. May 8 00:49:53.428259 systemd[1]: Reached target torcx.target. May 8 00:49:53.428269 systemd[1]: Reached target veritysetup.target. May 8 00:49:53.428284 systemd[1]: Listening on systemd-coredump.socket. May 8 00:49:53.428295 systemd[1]: Listening on systemd-initctl.socket. May 8 00:49:53.428307 systemd[1]: Listening on systemd-networkd.socket. May 8 00:49:53.428318 systemd[1]: Listening on systemd-udevd-control.socket. May 8 00:49:53.428328 systemd[1]: Listening on systemd-udevd-kernel.socket. May 8 00:49:53.428339 systemd[1]: Listening on systemd-userdbd.socket. May 8 00:49:53.428349 systemd[1]: Mounting dev-hugepages.mount... May 8 00:49:53.428360 systemd[1]: Mounting dev-mqueue.mount... May 8 00:49:53.428370 systemd[1]: Mounting media.mount... May 8 00:49:53.428380 systemd[1]: Mounting sys-kernel-debug.mount... May 8 00:49:53.428391 systemd[1]: Mounting sys-kernel-tracing.mount... May 8 00:49:53.428410 systemd[1]: Mounting tmp.mount... May 8 00:49:53.428422 systemd[1]: Starting flatcar-tmpfiles.service... May 8 00:49:53.428444 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:49:53.428455 systemd[1]: Starting kmod-static-nodes.service... May 8 00:49:53.428466 systemd[1]: Starting modprobe@configfs.service... May 8 00:49:53.428476 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:53.428487 systemd[1]: Starting modprobe@drm.service... May 8 00:49:53.428497 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:53.428508 systemd[1]: Starting modprobe@fuse.service... May 8 00:49:53.428520 systemd[1]: Starting modprobe@loop.service... May 8 00:49:53.428532 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 8 00:49:53.428543 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 8 00:49:53.428553 systemd[1]: Stopped systemd-fsck-root.service. May 8 00:49:53.428564 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 8 00:49:53.428575 systemd[1]: Stopped systemd-fsck-usr.service. May 8 00:49:53.428585 systemd[1]: Stopped systemd-journald.service. May 8 00:49:53.428596 kernel: fuse: init (API version 7.34) May 8 00:49:53.428606 systemd[1]: Starting systemd-journald.service... May 8 00:49:53.428618 systemd[1]: Starting systemd-modules-load.service... May 8 00:49:53.428629 kernel: loop: module loaded May 8 00:49:53.428639 systemd[1]: Starting systemd-network-generator.service... 
May 8 00:49:53.428649 systemd[1]: Starting systemd-remount-fs.service... May 8 00:49:53.428660 systemd[1]: Starting systemd-udev-trigger.service... May 8 00:49:53.428671 systemd[1]: verity-setup.service: Deactivated successfully. May 8 00:49:53.428681 systemd[1]: Stopped verity-setup.service. May 8 00:49:53.428691 systemd[1]: Mounted dev-hugepages.mount. May 8 00:49:53.428701 systemd[1]: Mounted dev-mqueue.mount. May 8 00:49:53.428712 systemd[1]: Mounted media.mount. May 8 00:49:53.428723 systemd[1]: Mounted sys-kernel-debug.mount. May 8 00:49:53.428733 systemd[1]: Mounted sys-kernel-tracing.mount. May 8 00:49:53.428746 systemd-journald[1000]: Journal started May 8 00:49:53.428793 systemd-journald[1000]: Runtime Journal (/run/log/journal/a993dbbd2e404dd9bab1b0b5030a067f) is 6.0M, max 48.7M, 42.6M free. May 8 00:49:51.550000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 8 00:49:51.614000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:49:51.614000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 8 00:49:51.614000 audit: BPF prog-id=10 op=LOAD May 8 00:49:51.614000 audit: BPF prog-id=10 op=UNLOAD May 8 00:49:51.615000 audit: BPF prog-id=11 op=LOAD May 8 00:49:51.615000 audit: BPF prog-id=11 op=UNLOAD May 8 00:49:51.652000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 8 00:49:51.652000 audit[930]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c58b4 a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:49:51.652000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 8 00:49:51.654000 audit[930]: AVC avc: denied { associate } for pid=930 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 8 00:49:51.654000 audit[930]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5989 a2=1ed a3=0 items=2 ppid=913 pid=930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:49:51.654000 audit: CWD cwd="/" May 8 00:49:51.654000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:51.654000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 
cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 8 00:49:51.654000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 8 00:49:53.301000 audit: BPF prog-id=12 op=LOAD May 8 00:49:53.301000 audit: BPF prog-id=3 op=UNLOAD May 8 00:49:53.301000 audit: BPF prog-id=13 op=LOAD May 8 00:49:53.301000 audit: BPF prog-id=14 op=LOAD May 8 00:49:53.301000 audit: BPF prog-id=4 op=UNLOAD May 8 00:49:53.301000 audit: BPF prog-id=5 op=UNLOAD May 8 00:49:53.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.312000 audit: BPF prog-id=12 op=UNLOAD May 8 00:49:53.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.404000 audit: BPF prog-id=15 op=LOAD May 8 00:49:53.404000 audit: BPF prog-id=16 op=LOAD May 8 00:49:53.404000 audit: BPF prog-id=17 op=LOAD May 8 00:49:53.404000 audit: BPF prog-id=13 op=UNLOAD May 8 00:49:53.404000 audit: BPF prog-id=14 op=UNLOAD May 8 00:49:53.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:53.424000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 8 00:49:53.424000 audit[1000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd6facad0 a2=4000 a3=1 items=0 ppid=1 pid=1000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:49:53.424000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 8 00:49:51.651814 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:49:53.300111 systemd[1]: Queued start job for default target multi-user.target. May 8 00:49:51.652076 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 8 00:49:53.300123 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 8 00:49:51.652103 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 8 00:49:53.303052 systemd[1]: systemd-journald.service: Deactivated successfully. May 8 00:49:51.652131 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 8 00:49:51.652140 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="skipped missing lower profile" missing profile=oem May 8 00:49:51.652169 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 8 00:49:51.652179 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 8 00:49:51.652364 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 8 00:49:51.652408 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 8 00:49:51.652420 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 8 00:49:51.652815 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 8 00:49:51.652848 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 8 00:49:51.652865 
/usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 8 00:49:51.652879 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 8 00:49:51.652894 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 8 00:49:51.652906 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:51Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 8 00:49:53.431447 systemd[1]: Started systemd-journald.service. May 8 00:49:53.063804 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:53Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:49:53.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.064058 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:53Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:49:53.431884 systemd[1]: Mounted tmp.mount. May 8 00:49:53.064159 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:53Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:49:53.432962 systemd[1]: Finished kmod-static-nodes.service. May 8 00:49:53.064314 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:53Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 8 00:49:53.064361 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:53Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 8 00:49:53.064426 /usr/lib/systemd/system-generators/torcx-generator[930]: time="2025-05-08T00:49:53Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 8 00:49:53.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:53.434067 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 8 00:49:53.434242 systemd[1]: Finished modprobe@configfs.service. May 8 00:49:53.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.434000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.435338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:53.435511 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:49:53.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.435000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.436610 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:49:53.436761 systemd[1]: Finished modprobe@drm.service. May 8 00:49:53.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.438000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.437743 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:49:53.437901 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:53.439052 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 8 00:49:53.439208 systemd[1]: Finished modprobe@fuse.service. May 8 00:49:53.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.440248 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:49:53.440414 systemd[1]: Finished modprobe@loop.service. May 8 00:49:53.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:53.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.441575 systemd[1]: Finished flatcar-tmpfiles.service. May 8 00:49:53.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.442774 systemd[1]: Finished systemd-modules-load.service. May 8 00:49:53.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.443892 systemd[1]: Finished systemd-network-generator.service. May 8 00:49:53.444000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.445128 systemd[1]: Finished systemd-remount-fs.service. May 8 00:49:53.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.446615 systemd[1]: Reached target network-pre.target. May 8 00:49:53.448606 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 8 00:49:53.450407 systemd[1]: Mounting sys-kernel-config.mount... May 8 00:49:53.451209 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 8 00:49:53.453623 systemd[1]: Starting systemd-hwdb-update.service... May 8 00:49:53.455527 systemd[1]: Starting systemd-journal-flush.service... May 8 00:49:53.456349 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:49:53.457336 systemd[1]: Starting systemd-random-seed.service... May 8 00:49:53.458221 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:49:53.459230 systemd[1]: Starting systemd-sysctl.service... May 8 00:49:53.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.462150 systemd[1]: Starting systemd-sysusers.service... May 8 00:49:53.464935 systemd[1]: Finished systemd-udev-trigger.service. May 8 00:49:53.466010 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 8 00:49:53.467095 systemd[1]: Mounted sys-kernel-config.mount. May 8 00:49:53.469903 systemd[1]: Starting systemd-udev-settle.service... May 8 00:49:53.474238 systemd[1]: Finished systemd-random-seed.service. May 8 00:49:53.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.476658 systemd-journald[1000]: Time spent on flushing to /var/log/journal/a993dbbd2e404dd9bab1b0b5030a067f is 12.303ms for 999 entries. 
May 8 00:49:53.476658 systemd-journald[1000]: System Journal (/var/log/journal/a993dbbd2e404dd9bab1b0b5030a067f) is 8.0M, max 195.6M, 187.6M free. May 8 00:49:53.497128 systemd-journald[1000]: Received client request to flush runtime journal. May 8 00:49:53.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.489000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.497469 udevadm[1030]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 8 00:49:53.476965 systemd[1]: Reached target first-boot-complete.target. May 8 00:49:53.487255 systemd[1]: Finished systemd-sysctl.service. May 8 00:49:53.488352 systemd[1]: Finished systemd-sysusers.service. May 8 00:49:53.498038 systemd[1]: Finished systemd-journal-flush.service. May 8 00:49:53.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.823507 systemd[1]: Finished systemd-hwdb-update.service. May 8 00:49:53.824000 audit: BPF prog-id=18 op=LOAD May 8 00:49:53.824000 audit: BPF prog-id=19 op=LOAD May 8 00:49:53.824000 audit: BPF prog-id=7 op=UNLOAD May 8 00:49:53.824000 audit: BPF prog-id=8 op=UNLOAD May 8 00:49:53.825748 systemd[1]: Starting systemd-udevd.service... May 8 00:49:53.846142 systemd-udevd[1033]: Using default interface naming scheme 'v252'. May 8 00:49:53.859022 systemd[1]: Started systemd-udevd.service. May 8 00:49:53.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.860000 audit: BPF prog-id=20 op=LOAD May 8 00:49:53.861830 systemd[1]: Starting systemd-networkd.service... May 8 00:49:53.869000 audit: BPF prog-id=21 op=LOAD May 8 00:49:53.870000 audit: BPF prog-id=22 op=LOAD May 8 00:49:53.870000 audit: BPF prog-id=23 op=LOAD May 8 00:49:53.871130 systemd[1]: Starting systemd-userdbd.service... May 8 00:49:53.882553 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 8 00:49:53.907627 systemd[1]: Started systemd-userdbd.service. May 8 00:49:53.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.926251 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 8 00:49:53.957912 systemd-networkd[1042]: lo: Link UP May 8 00:49:53.957924 systemd-networkd[1042]: lo: Gained carrier May 8 00:49:53.958260 systemd-networkd[1042]: Enumeration completed May 8 00:49:53.958349 systemd[1]: Started systemd-networkd.service. 
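The journald self-report above works out to about 12.3 µs per flushed entry:

    # From the journald report above: 12.303 ms spent flushing 999 entries.
    flush_ms, entries = 12.303, 999
    print(f"{flush_ms / entries * 1000:.1f} us/entry")  # 12.3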
May 8 00:49:53.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.959441 systemd-networkd[1042]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 8 00:49:53.962237 systemd-networkd[1042]: eth0: Link UP May 8 00:49:53.962247 systemd-networkd[1042]: eth0: Gained carrier May 8 00:49:53.968786 systemd[1]: Finished systemd-udev-settle.service. May 8 00:49:53.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:53.970826 systemd[1]: Starting lvm2-activation-early.service... May 8 00:49:53.986115 systemd-networkd[1042]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 8 00:49:53.989541 lvm[1066]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:49:54.018982 systemd[1]: Finished lvm2-activation-early.service. May 8 00:49:54.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.020023 systemd[1]: Reached target cryptsetup.target. May 8 00:49:54.022024 systemd[1]: Starting lvm2-activation.service... May 8 00:49:54.025679 lvm[1067]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 8 00:49:54.052294 systemd[1]: Finished lvm2-activation.service. May 8 00:49:54.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.053255 systemd[1]: Reached target local-fs-pre.target. May 8 00:49:54.054112 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 8 00:49:54.054147 systemd[1]: Reached target local-fs.target. May 8 00:49:54.054915 systemd[1]: Reached target machines.target. May 8 00:49:54.056849 systemd[1]: Starting ldconfig.service... May 8 00:49:54.057921 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:49:54.057974 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:54.058980 systemd[1]: Starting systemd-boot-update.service... May 8 00:49:54.060874 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 8 00:49:54.062992 systemd[1]: Starting systemd-machine-id-commit.service... May 8 00:49:54.064869 systemd[1]: Starting systemd-sysext.service... May 8 00:49:54.065897 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1069 (bootctl) May 8 00:49:54.066857 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 8 00:49:54.080316 systemd[1]: Unmounting usr-share-oem.mount... May 8 00:49:54.082725 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. 
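The DHCPv4 lease logged above, 10.0.0.116/16 with gateway 10.0.0.1 handed out by 10.0.0.1 itself, is easy to sanity-check with the standard library:

    import ipaddress

    # The lease from the log above: address 10.0.0.116/16, gateway 10.0.0.1.
    iface = ipaddress.ip_interface("10.0.0.116/16")
    gateway = ipaddress.ip_address("10.0.0.1")
    assert iface.network == ipaddress.ip_network("10.0.0.0/16")
    assert gateway in iface.network  # gateway is on-link for this /16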
May 8 00:49:54.083000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.087729 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 8 00:49:54.087920 systemd[1]: Unmounted usr-share-oem.mount. May 8 00:49:54.109119 systemd-fsck[1077]: fsck.fat 4.2 (2021-01-31) May 8 00:49:54.109119 systemd-fsck[1077]: /dev/vda1: 236 files, 117182/258078 clusters May 8 00:49:54.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.110815 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 8 00:49:54.114311 systemd[1]: Mounting boot.mount... May 8 00:49:54.179520 kernel: loop0: detected capacity change from 0 to 189592 May 8 00:49:54.183295 systemd[1]: Mounted boot.mount. May 8 00:49:54.185783 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 8 00:49:54.186318 systemd[1]: Finished systemd-machine-id-commit.service. May 8 00:49:54.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.191343 systemd[1]: Finished systemd-boot-update.service. May 8 00:49:54.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.195457 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 8 00:49:54.222448 kernel: loop1: detected capacity change from 0 to 189592 May 8 00:49:54.226423 (sd-sysext)[1084]: Using extensions 'kubernetes'. May 8 00:49:54.226786 (sd-sysext)[1084]: Merged extensions into '/usr'. May 8 00:49:54.242477 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:49:54.243731 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:54.245690 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:54.247872 systemd[1]: Starting modprobe@loop.service... May 8 00:49:54.248791 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:49:54.248915 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:54.249716 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:54.249845 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:49:54.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.250000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.251230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
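(sd-sysext) merged the 'kubernetes' extension because of the /etc/extensions/kubernetes.raw link Ignition wrote earlier; the two loop-device capacity changes are that raw image being attached. A hypothetical sketch of the discovery step (extension_images is an invented name, and the directory list is an assumption about systemd-sysext's search path):

    from pathlib import Path

    # Hypothetical re-implementation of the sysext image scan.
    SEARCH_DIRS = ("/etc/extensions", "/run/extensions", "/var/lib/extensions")

    def extension_images():
        for d in SEARCH_DIRS:
            p = Path(d)
            if p.is_dir():
                yield from sorted(p.glob("*.raw"))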
May 8 00:49:54.251335 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:54.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.252848 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:49:54.252956 systemd[1]: Finished modprobe@loop.service. May 8 00:49:54.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.253000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.254395 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:49:54.254551 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:49:54.271161 ldconfig[1068]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 8 00:49:54.274813 systemd[1]: Finished ldconfig.service. May 8 00:49:54.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.424175 systemd[1]: Mounting usr-share-oem.mount... May 8 00:49:54.429001 systemd[1]: Mounted usr-share-oem.mount. May 8 00:49:54.430811 systemd[1]: Finished systemd-sysext.service. May 8 00:49:54.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.432803 systemd[1]: Starting ensure-sysext.service... May 8 00:49:54.434424 systemd[1]: Starting systemd-tmpfiles-setup.service... May 8 00:49:54.438719 systemd[1]: Reloading. May 8 00:49:54.445806 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 8 00:49:54.447758 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 8 00:49:54.450740 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 8 00:49:54.471903 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-05-08T00:49:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:49:54.472198 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-05-08T00:49:54Z" level=info msg="torcx already run" May 8 00:49:54.533942 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. 
Support for CPUShares= will be removed soon. May 8 00:49:54.533960 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:49:54.549008 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:49:54.590000 audit: BPF prog-id=24 op=LOAD May 8 00:49:54.590000 audit: BPF prog-id=20 op=UNLOAD May 8 00:49:54.592000 audit: BPF prog-id=25 op=LOAD May 8 00:49:54.592000 audit: BPF prog-id=21 op=UNLOAD May 8 00:49:54.592000 audit: BPF prog-id=26 op=LOAD May 8 00:49:54.592000 audit: BPF prog-id=27 op=LOAD May 8 00:49:54.592000 audit: BPF prog-id=22 op=UNLOAD May 8 00:49:54.592000 audit: BPF prog-id=23 op=UNLOAD May 8 00:49:54.592000 audit: BPF prog-id=28 op=LOAD May 8 00:49:54.592000 audit: BPF prog-id=29 op=LOAD May 8 00:49:54.592000 audit: BPF prog-id=18 op=UNLOAD May 8 00:49:54.592000 audit: BPF prog-id=19 op=UNLOAD May 8 00:49:54.593000 audit: BPF prog-id=30 op=LOAD May 8 00:49:54.593000 audit: BPF prog-id=15 op=UNLOAD May 8 00:49:54.593000 audit: BPF prog-id=31 op=LOAD May 8 00:49:54.593000 audit: BPF prog-id=32 op=LOAD May 8 00:49:54.593000 audit: BPF prog-id=16 op=UNLOAD May 8 00:49:54.593000 audit: BPF prog-id=17 op=UNLOAD May 8 00:49:54.596076 systemd[1]: Finished systemd-tmpfiles-setup.service. May 8 00:49:54.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.600541 systemd[1]: Starting audit-rules.service... May 8 00:49:54.602285 systemd[1]: Starting clean-ca-certificates.service... May 8 00:49:54.604391 systemd[1]: Starting systemd-journal-catalog-update.service... May 8 00:49:54.608000 audit: BPF prog-id=33 op=LOAD May 8 00:49:54.609478 systemd[1]: Starting systemd-resolved.service... May 8 00:49:54.610000 audit: BPF prog-id=34 op=LOAD May 8 00:49:54.611724 systemd[1]: Starting systemd-timesyncd.service... May 8 00:49:54.613488 systemd[1]: Starting systemd-update-utmp.service... May 8 00:49:54.614784 systemd[1]: Finished clean-ca-certificates.service. May 8 00:49:54.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.617661 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:49:54.618000 audit[1160]: SYSTEM_BOOT pid=1160 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 8 00:49:54.621777 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:49:54.623005 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:54.624861 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:54.626741 systemd[1]: Starting modprobe@loop.service... May 8 00:49:54.627510 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
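systemd flags two legacy resource directives in locksmithd.service (CPUShares=, MemoryLimit=) and a legacy /var/run path in docker.socket. Since the shipped units live on a read-only /usr, the usual fix is a drop-in; a sketch, with the caveat that the replacement values below are placeholders, not taken from the log:

    # /etc/systemd/system/locksmithd.service.d/10-cgroup-v2.conf
    [Service]
    # cgroup-v2 replacements for the deprecated directives.
    CPUWeight=100
    MemoryMax=512M

    # /etc/systemd/system/docker.socket.d/10-run-path.conf
    [Socket]
    # An empty assignment clears the inherited listener list,
    # then the modern /run path replaces the legacy /var/run one.
    ListenStream=
    ListenStream=/run/docker.sock

    # Pick up both drop-ins.
    systemctl daemon-reload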
May 8 00:49:54.627639 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:54.627731 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:49:54.628630 systemd[1]: Finished systemd-journal-catalog-update.service. May 8 00:49:54.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.630067 systemd[1]: Finished systemd-update-utmp.service. May 8 00:49:54.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.631389 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:54.632089 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:49:54.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.633383 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:49:54.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.634544 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:54.635783 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:49:54.635891 systemd[1]: Finished modprobe@loop.service. May 8 00:49:54.636000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.638969 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:49:54.640113 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:54.642080 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:54.643949 systemd[1]: Starting modprobe@loop.service... May 8 00:49:54.644753 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. 
May 8 00:49:54.644877 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:54.646083 systemd[1]: Starting systemd-update-done.service... May 8 00:49:54.646909 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:49:54.647819 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:54.647942 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:49:54.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.648000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.649137 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:49:54.649249 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:54.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.650526 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:49:54.650636 systemd[1]: Finished modprobe@loop.service. May 8 00:49:54.651000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.651000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.652061 systemd[1]: Finished systemd-update-done.service. May 8 00:49:54.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.655172 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 8 00:49:54.656486 systemd[1]: Starting modprobe@dm_mod.service... May 8 00:49:54.658548 systemd[1]: Starting modprobe@drm.service... May 8 00:49:54.660545 systemd[1]: Starting modprobe@efi_pstore.service... May 8 00:49:54.662457 systemd[1]: Starting modprobe@loop.service... May 8 00:49:54.663269 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 8 00:49:54.663393 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:54.664638 systemd[1]: Starting systemd-networkd-wait-online.service... 
May 8 00:49:54.665703 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 8 00:49:54.666717 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 8 00:49:54.666836 systemd[1]: Finished modprobe@dm_mod.service. May 8 00:49:54.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.668147 systemd[1]: modprobe@drm.service: Deactivated successfully. May 8 00:49:54.668260 systemd[1]: Finished modprobe@drm.service. May 8 00:49:54.669403 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 8 00:49:54.669551 systemd[1]: Finished modprobe@efi_pstore.service. May 8 00:49:54.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.670805 systemd[1]: modprobe@loop.service: Deactivated successfully. May 8 00:49:54.670914 systemd[1]: Finished modprobe@loop.service. May 8 00:49:54.671000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 8 00:49:54.672195 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 8 00:49:54.672281 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 8 00:49:54.673456 systemd[1]: Started systemd-timesyncd.service. May 8 00:49:54.674248 systemd-timesyncd[1157]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 8 00:49:54.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 8 00:49:54.674000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 8 00:49:54.674000 audit[1179]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdbb965a0 a2=420 a3=0 items=0 ppid=1149 pid=1179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 8 00:49:54.674000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 8 00:49:54.674292 systemd-timesyncd[1157]: Initial clock synchronization to Thu 2025-05-08 00:49:54.987303 UTC. May 8 00:49:54.674961 augenrules[1179]: No rules May 8 00:49:54.675049 systemd[1]: Finished ensure-sysext.service. May 8 00:49:54.676271 systemd-resolved[1153]: Positive Trust Anchors: May 8 00:49:54.676534 systemd-resolved[1153]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 8 00:49:54.676609 systemd-resolved[1153]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 8 00:49:54.676787 systemd[1]: Reached target time-set.target. May 8 00:49:54.677921 systemd[1]: Finished audit-rules.service. May 8 00:49:54.690704 systemd-resolved[1153]: Defaulting to hostname 'linux'. May 8 00:49:54.692099 systemd[1]: Started systemd-resolved.service. May 8 00:49:54.692953 systemd[1]: Reached target network.target. May 8 00:49:54.693716 systemd[1]: Reached target nss-lookup.target. May 8 00:49:54.694502 systemd[1]: Reached target sysinit.target. May 8 00:49:54.695295 systemd[1]: Started motdgen.path. May 8 00:49:54.696020 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 8 00:49:54.697262 systemd[1]: Started logrotate.timer. May 8 00:49:54.698089 systemd[1]: Started mdadm.timer. May 8 00:49:54.698806 systemd[1]: Started systemd-tmpfiles-clean.timer. May 8 00:49:54.699627 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 8 00:49:54.699660 systemd[1]: Reached target paths.target. May 8 00:49:54.700361 systemd[1]: Reached target timers.target. May 8 00:49:54.701404 systemd[1]: Listening on dbus.socket. May 8 00:49:54.703122 systemd[1]: Starting docker.socket... May 8 00:49:54.706160 systemd[1]: Listening on sshd.socket. May 8 00:49:54.707021 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:54.707512 systemd[1]: Listening on docker.socket. May 8 00:49:54.708304 systemd[1]: Reached target sockets.target. May 8 00:49:54.709070 systemd[1]: Reached target basic.target. May 8 00:49:54.709844 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:49:54.709877 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 8 00:49:54.710878 systemd[1]: Starting containerd.service... 
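The SYSCALL/PROCTITLE pair above is audit-rules loading rules through auditctl, after which augenrules reports "No rules". PROCTITLE is the process command line, hex-encoded with NUL-separated argv entries, so it decodes directly:

    # Decode the PROCTITLE hex; NUL bytes separate the argv entries.
    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\0' ' '; echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules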
May 8 00:49:54.712595 systemd[1]: Starting dbus.service... May 8 00:49:54.714229 systemd[1]: Starting enable-oem-cloudinit.service... May 8 00:49:54.716226 systemd[1]: Starting extend-filesystems.service... May 8 00:49:54.717152 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 8 00:49:54.718480 systemd[1]: Starting motdgen.service... May 8 00:49:54.722282 systemd[1]: Starting prepare-helm.service... May 8 00:49:54.726474 systemd[1]: Starting ssh-key-proc-cmdline.service... May 8 00:49:54.727575 jq[1192]: false May 8 00:49:54.728606 systemd[1]: Starting sshd-keygen.service... May 8 00:49:54.732227 systemd[1]: Starting systemd-logind.service... May 8 00:49:54.733315 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 8 00:49:54.733416 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 8 00:49:54.733821 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 8 00:49:54.734475 systemd[1]: Starting update-engine.service... May 8 00:49:54.736515 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 8 00:49:54.739044 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 8 00:49:54.741845 extend-filesystems[1193]: Found loop1 May 8 00:49:54.741845 extend-filesystems[1193]: Found vda May 8 00:49:54.741845 extend-filesystems[1193]: Found vda1 May 8 00:49:54.741845 extend-filesystems[1193]: Found vda2 May 8 00:49:54.741845 extend-filesystems[1193]: Found vda3 May 8 00:49:54.741845 extend-filesystems[1193]: Found usr May 8 00:49:54.741845 extend-filesystems[1193]: Found vda4 May 8 00:49:54.741845 extend-filesystems[1193]: Found vda6 May 8 00:49:54.741845 extend-filesystems[1193]: Found vda7 May 8 00:49:54.741845 extend-filesystems[1193]: Found vda9 May 8 00:49:54.741845 extend-filesystems[1193]: Checking size of /dev/vda9 May 8 00:49:54.739214 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 8 00:49:54.775751 tar[1209]: linux-arm64/helm May 8 00:49:54.776025 extend-filesystems[1193]: Resized partition /dev/vda9 May 8 00:49:54.777105 jq[1206]: true May 8 00:49:54.740176 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 8 00:49:54.740330 systemd[1]: Finished ssh-key-proc-cmdline.service. May 8 00:49:54.774083 systemd[1]: motdgen.service: Deactivated successfully. May 8 00:49:54.781844 jq[1215]: true May 8 00:49:54.774236 systemd[1]: Finished motdgen.service. May 8 00:49:54.787606 extend-filesystems[1222]: resize2fs 1.46.5 (30-Dec-2021) May 8 00:49:54.790024 dbus-daemon[1191]: [system] SELinux support is enabled May 8 00:49:54.790394 systemd[1]: Started dbus.service. May 8 00:49:54.792704 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 8 00:49:54.792736 systemd[1]: Reached target system-config.target. May 8 00:49:54.793662 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 8 00:49:54.793678 systemd[1]: Reached target user-config.target. 
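The extend-filesystems run above enumerates the vda partitions and checks the size of /dev/vda9; the kernel and resize2fs lines that follow show the root filesystem being grown on-line from 553472 to 1864699 4k blocks. The equivalent manual step on an already-enlarged partition is a single command, since resize2fs performs an on-line grow when the ext4 filesystem is mounted:

    # Grow the mounted ext4 filesystem on /dev/vda9 to fill its partition,
    # as extend-filesystems.service does automatically in the lines below.
    resize2fs /dev/vda9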
May 8 00:49:54.802452 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 8 00:49:54.827069 update_engine[1204]: I0508 00:49:54.826790 1204 main.cc:92] Flatcar Update Engine starting May 8 00:49:54.836820 update_engine[1204]: I0508 00:49:54.829455 1204 update_check_scheduler.cc:74] Next update check in 10m0s May 8 00:49:54.829393 systemd[1]: Started update-engine.service. May 8 00:49:54.831867 systemd[1]: Started locksmithd.service. May 8 00:49:54.837375 systemd-logind[1202]: Watching system buttons on /dev/input/event0 (Power Button) May 8 00:49:54.837875 systemd-logind[1202]: New seat seat0. May 8 00:49:54.842363 systemd[1]: Started systemd-logind.service. May 8 00:49:54.849189 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 8 00:49:54.864181 extend-filesystems[1222]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 8 00:49:54.864181 extend-filesystems[1222]: old_desc_blocks = 1, new_desc_blocks = 1 May 8 00:49:54.864181 extend-filesystems[1222]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 8 00:49:54.869099 extend-filesystems[1193]: Resized filesystem in /dev/vda9 May 8 00:49:54.870915 env[1212]: time="2025-05-08T00:49:54.867532080Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 8 00:49:54.871122 bash[1242]: Updated "/home/core/.ssh/authorized_keys" May 8 00:49:54.866915 systemd[1]: extend-filesystems.service: Deactivated successfully. May 8 00:49:54.867069 systemd[1]: Finished extend-filesystems.service. May 8 00:49:54.868351 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 8 00:49:54.896424 env[1212]: time="2025-05-08T00:49:54.896369080Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 8 00:49:54.896714 env[1212]: time="2025-05-08T00:49:54.896693000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 8 00:49:54.900471 env[1212]: time="2025-05-08T00:49:54.900126960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.180-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 8 00:49:54.900471 env[1212]: time="2025-05-08T00:49:54.900156480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 8 00:49:54.900471 env[1212]: time="2025-05-08T00:49:54.900382960Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:49:54.900471 env[1212]: time="2025-05-08T00:49:54.900400120Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 8 00:49:54.900471 env[1212]: time="2025-05-08T00:49:54.900424600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 8 00:49:54.900471 env[1212]: time="2025-05-08T00:49:54.900457400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." 
type=io.containerd.snapshotter.v1 May 8 00:49:54.900654 env[1212]: time="2025-05-08T00:49:54.900539200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 8 00:49:54.900921 env[1212]: time="2025-05-08T00:49:54.900808320Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 8 00:49:54.900974 env[1212]: time="2025-05-08T00:49:54.900949800Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 8 00:49:54.900974 env[1212]: time="2025-05-08T00:49:54.900965880Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 8 00:49:54.901068 env[1212]: time="2025-05-08T00:49:54.901019560Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 8 00:49:54.901068 env[1212]: time="2025-05-08T00:49:54.901031440Z" level=info msg="metadata content store policy set" policy=shared May 8 00:49:54.903550 locksmithd[1244]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 8 00:49:54.904404 env[1212]: time="2025-05-08T00:49:54.904360440Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 8 00:49:54.904404 env[1212]: time="2025-05-08T00:49:54.904393480Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 8 00:49:54.904505 env[1212]: time="2025-05-08T00:49:54.904407040Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 8 00:49:54.904505 env[1212]: time="2025-05-08T00:49:54.904456960Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 8 00:49:54.904505 env[1212]: time="2025-05-08T00:49:54.904472640Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 8 00:49:54.904505 env[1212]: time="2025-05-08T00:49:54.904487280Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 8 00:49:54.904505 env[1212]: time="2025-05-08T00:49:54.904500080Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 8 00:49:54.904873 env[1212]: time="2025-05-08T00:49:54.904845400Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 8 00:49:54.904873 env[1212]: time="2025-05-08T00:49:54.904871040Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 8 00:49:54.904930 env[1212]: time="2025-05-08T00:49:54.904885000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 8 00:49:54.904930 env[1212]: time="2025-05-08T00:49:54.904898440Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 8 00:49:54.904930 env[1212]: time="2025-05-08T00:49:54.904910120Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 May 8 00:49:54.905044 env[1212]: time="2025-05-08T00:49:54.905019160Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 8 00:49:54.905114 env[1212]: time="2025-05-08T00:49:54.905099760Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 8 00:49:54.905345 env[1212]: time="2025-05-08T00:49:54.905330080Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 8 00:49:54.905385 env[1212]: time="2025-05-08T00:49:54.905358360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905385 env[1212]: time="2025-05-08T00:49:54.905372640Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 8 00:49:54.905520 env[1212]: time="2025-05-08T00:49:54.905508080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905546 env[1212]: time="2025-05-08T00:49:54.905523520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905546 env[1212]: time="2025-05-08T00:49:54.905535680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905582 env[1212]: time="2025-05-08T00:49:54.905547000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905582 env[1212]: time="2025-05-08T00:49:54.905560440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905582 env[1212]: time="2025-05-08T00:49:54.905572720Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905640 env[1212]: time="2025-05-08T00:49:54.905584440Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905640 env[1212]: time="2025-05-08T00:49:54.905596600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905640 env[1212]: time="2025-05-08T00:49:54.905608400Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 8 00:49:54.905763 env[1212]: time="2025-05-08T00:49:54.905722520Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905763 env[1212]: time="2025-05-08T00:49:54.905744080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905763 env[1212]: time="2025-05-08T00:49:54.905756560Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 8 00:49:54.905839 env[1212]: time="2025-05-08T00:49:54.905767920Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 8 00:49:54.905839 env[1212]: time="2025-05-08T00:49:54.905781760Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 8 00:49:54.905839 env[1212]: time="2025-05-08T00:49:54.905791960Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 May 8 00:49:54.905839 env[1212]: time="2025-05-08T00:49:54.905807960Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 8 00:49:54.905913 env[1212]: time="2025-05-08T00:49:54.905840200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 8 00:49:54.906094 env[1212]: time="2025-05-08T00:49:54.906032920Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 8 00:49:54.906094 env[1212]: time="2025-05-08T00:49:54.906092680Z" level=info msg="Connect containerd service" May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.906122080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.906842040Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.907267360Z" level=info msg="Start subscribing containerd event" May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.907307200Z" level=info msg="Start recovering state" May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.907360240Z" level=info msg="Start event monitor" May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.907379560Z" level=info msg="Start snapshots syncer" May 8 
00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.907388640Z" level=info msg="Start cni network conf syncer for default" May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.907395680Z" level=info msg="Start streaming server" May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.907645680Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.907685680Z" level=info msg=serving... address=/run/containerd/containerd.sock May 8 00:49:54.908556 env[1212]: time="2025-05-08T00:49:54.907760520Z" level=info msg="containerd successfully booted in 0.047711s" May 8 00:49:54.907844 systemd[1]: Started containerd.service. May 8 00:49:55.179694 tar[1209]: linux-arm64/LICENSE May 8 00:49:55.179810 tar[1209]: linux-arm64/README.md May 8 00:49:55.184456 systemd[1]: Finished prepare-helm.service. May 8 00:49:55.546264 systemd-networkd[1042]: eth0: Gained IPv6LL May 8 00:49:55.548036 systemd[1]: Finished systemd-networkd-wait-online.service. May 8 00:49:55.549392 systemd[1]: Reached target network-online.target. May 8 00:49:55.551858 systemd[1]: Starting kubelet.service... May 8 00:49:55.638919 sshd_keygen[1217]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 8 00:49:55.657684 systemd[1]: Finished sshd-keygen.service. May 8 00:49:55.660069 systemd[1]: Starting issuegen.service... May 8 00:49:55.665085 systemd[1]: issuegen.service: Deactivated successfully. May 8 00:49:55.665249 systemd[1]: Finished issuegen.service. May 8 00:49:55.667637 systemd[1]: Starting systemd-user-sessions.service... May 8 00:49:55.673763 systemd[1]: Finished systemd-user-sessions.service. May 8 00:49:55.676076 systemd[1]: Started getty@tty1.service. May 8 00:49:55.678250 systemd[1]: Started serial-getty@ttyAMA0.service. May 8 00:49:55.679392 systemd[1]: Reached target getty.target. May 8 00:49:56.086331 systemd[1]: Started kubelet.service. May 8 00:49:56.087779 systemd[1]: Reached target multi-user.target. May 8 00:49:56.089993 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 8 00:49:56.096988 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 8 00:49:56.097147 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 8 00:49:56.098285 systemd[1]: Startup finished in 577ms (kernel) + 6.936s (initrd) + 4.590s (userspace) = 12.103s. May 8 00:49:56.543589 kubelet[1273]: E0508 00:49:56.543472 1273 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:49:56.545449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:49:56.545587 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:49:57.026621 systemd[1]: Created slice system-sshd.slice. May 8 00:49:57.027768 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:54196.service. May 8 00:49:57.074232 sshd[1282]: Accepted publickey for core from 10.0.0.1 port 54196 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:49:57.076203 sshd[1282]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:57.084696 systemd[1]: Created slice user-500.slice. May 8 00:49:57.085827 systemd[1]: Starting user-runtime-dir@500.service... 
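kubelet exits here because /var/lib/kubelet/config.yaml does not exist yet; on kubeadm-provisioned nodes that file is only written at init/join time, so this failure and the restarts further down are expected until then. Purely for illustration, a minimal hand-written KubeletConfiguration might look like this (every value is an assumption, none is taken from the log):

    # /var/lib/kubelet/config.yaml (illustrative sketch only)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # Client CA for authenticating requests to the kubelet (assumed path).
    authentication:
      x509:
        clientCAFile: /etc/kubernetes/pki/ca.crt
    # Matches the SystemdCgroup=true runc option in the containerd config above.
    cgroupDriver: systemd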
May 8 00:49:57.087302 systemd-logind[1202]: New session 1 of user core. May 8 00:49:57.093816 systemd[1]: Finished user-runtime-dir@500.service. May 8 00:49:57.095115 systemd[1]: Starting user@500.service... May 8 00:49:57.097725 (systemd)[1285]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:57.159449 systemd[1285]: Queued start job for default target default.target. May 8 00:49:57.159935 systemd[1285]: Reached target paths.target. May 8 00:49:57.159954 systemd[1285]: Reached target sockets.target. May 8 00:49:57.159965 systemd[1285]: Reached target timers.target. May 8 00:49:57.159975 systemd[1285]: Reached target basic.target. May 8 00:49:57.160027 systemd[1285]: Reached target default.target. May 8 00:49:57.160056 systemd[1285]: Startup finished in 56ms. May 8 00:49:57.160093 systemd[1]: Started user@500.service. May 8 00:49:57.161022 systemd[1]: Started session-1.scope. May 8 00:49:57.214159 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:54210.service. May 8 00:49:57.250084 sshd[1294]: Accepted publickey for core from 10.0.0.1 port 54210 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:49:57.251568 sshd[1294]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:57.257545 systemd[1]: Started session-2.scope. May 8 00:49:57.257811 systemd-logind[1202]: New session 2 of user core. May 8 00:49:57.312869 sshd[1294]: pam_unix(sshd:session): session closed for user core May 8 00:49:57.316837 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:54212.service. May 8 00:49:57.317324 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:54210.service: Deactivated successfully. May 8 00:49:57.317950 systemd[1]: session-2.scope: Deactivated successfully. May 8 00:49:57.318416 systemd-logind[1202]: Session 2 logged out. Waiting for processes to exit. May 8 00:49:57.319148 systemd-logind[1202]: Removed session 2. May 8 00:49:57.352539 sshd[1299]: Accepted publickey for core from 10.0.0.1 port 54212 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:49:57.353669 sshd[1299]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:57.356568 systemd-logind[1202]: New session 3 of user core. May 8 00:49:57.357346 systemd[1]: Started session-3.scope. May 8 00:49:57.406554 sshd[1299]: pam_unix(sshd:session): session closed for user core May 8 00:49:57.409131 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:54212.service: Deactivated successfully. May 8 00:49:57.409693 systemd[1]: session-3.scope: Deactivated successfully. May 8 00:49:57.410140 systemd-logind[1202]: Session 3 logged out. Waiting for processes to exit. May 8 00:49:57.411157 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:54220.service. May 8 00:49:57.411855 systemd-logind[1202]: Removed session 3. May 8 00:49:57.447213 sshd[1306]: Accepted publickey for core from 10.0.0.1 port 54220 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:49:57.448608 sshd[1306]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:57.451532 systemd-logind[1202]: New session 4 of user core. May 8 00:49:57.452283 systemd[1]: Started session-4.scope. May 8 00:49:57.506494 sshd[1306]: pam_unix(sshd:session): session closed for user core May 8 00:49:57.510011 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:54224.service. May 8 00:49:57.510564 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:54220.service: Deactivated successfully. 
May 8 00:49:57.511161 systemd[1]: session-4.scope: Deactivated successfully. May 8 00:49:57.511626 systemd-logind[1202]: Session 4 logged out. Waiting for processes to exit. May 8 00:49:57.512702 systemd-logind[1202]: Removed session 4. May 8 00:49:57.547784 sshd[1311]: Accepted publickey for core from 10.0.0.1 port 54224 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:49:57.548943 sshd[1311]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:49:57.552932 systemd[1]: Started session-5.scope. May 8 00:49:57.553531 systemd-logind[1202]: New session 5 of user core. May 8 00:49:57.621191 sudo[1315]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 8 00:49:57.621437 sudo[1315]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 8 00:49:57.675443 systemd[1]: Starting docker.service... May 8 00:49:57.759954 env[1326]: time="2025-05-08T00:49:57.759895932Z" level=info msg="Starting up" May 8 00:49:57.761409 env[1326]: time="2025-05-08T00:49:57.761378713Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 8 00:49:57.761522 env[1326]: time="2025-05-08T00:49:57.761505646Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 8 00:49:57.761611 env[1326]: time="2025-05-08T00:49:57.761593647Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 8 00:49:57.761667 env[1326]: time="2025-05-08T00:49:57.761654456Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 8 00:49:57.763774 env[1326]: time="2025-05-08T00:49:57.763746854Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 8 00:49:57.763926 env[1326]: time="2025-05-08T00:49:57.763900855Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 8 00:49:57.764006 env[1326]: time="2025-05-08T00:49:57.763990833Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 8 00:49:57.764058 env[1326]: time="2025-05-08T00:49:57.764045792Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 8 00:49:57.949019 env[1326]: time="2025-05-08T00:49:57.948750389Z" level=info msg="Loading containers: start." May 8 00:49:58.065500 kernel: Initializing XFRM netlink socket May 8 00:49:58.090719 env[1326]: time="2025-05-08T00:49:58.090677578Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 8 00:49:58.146153 systemd-networkd[1042]: docker0: Link UP May 8 00:49:58.165893 env[1326]: time="2025-05-08T00:49:58.165849368Z" level=info msg="Loading containers: done." May 8 00:49:58.186717 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck251963368-merged.mount: Deactivated successfully. 
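dockerd notes above that the default bridge took 172.17.0.0/16 and points at --bip for overriding it. The persistent form of that option is the daemon config file; a sketch with an assumed replacement range:

    # /etc/docker/daemon.json (172.18.0.1/16 is an assumed example range)
    {
        "bip": "172.18.0.1/16"
    }

    # Restart so docker0 is re-created with the new bridge address.
    systemctl restart docker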
May 8 00:49:58.192029 env[1326]: time="2025-05-08T00:49:58.191985532Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 8 00:49:58.192167 env[1326]: time="2025-05-08T00:49:58.192147141Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 8 00:49:58.192285 env[1326]: time="2025-05-08T00:49:58.192254074Z" level=info msg="Daemon has completed initialization" May 8 00:49:58.207359 systemd[1]: Started docker.service. May 8 00:49:58.211517 env[1326]: time="2025-05-08T00:49:58.211382503Z" level=info msg="API listen on /run/docker.sock" May 8 00:49:58.934549 env[1212]: time="2025-05-08T00:49:58.934501885Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 8 00:49:59.661331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount305922725.mount: Deactivated successfully. May 8 00:50:01.363380 env[1212]: time="2025-05-08T00:50:01.363330029Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:01.364949 env[1212]: time="2025-05-08T00:50:01.364909535Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:01.367365 env[1212]: time="2025-05-08T00:50:01.367319213Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:01.369044 env[1212]: time="2025-05-08T00:50:01.369009553Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:01.369925 env[1212]: time="2025-05-08T00:50:01.369891172Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 8 00:50:01.370595 env[1212]: time="2025-05-08T00:50:01.370570906Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 8 00:50:03.144877 env[1212]: time="2025-05-08T00:50:03.144830250Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:03.146516 env[1212]: time="2025-05-08T00:50:03.146482861Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:03.148698 env[1212]: time="2025-05-08T00:50:03.148667492Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:03.151910 env[1212]: time="2025-05-08T00:50:03.151867234Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 8 00:50:03.152580 env[1212]: time="2025-05-08T00:50:03.152551312Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 8 00:50:03.153137 env[1212]: time="2025-05-08T00:50:03.153092616Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 8 00:50:04.975674 env[1212]: time="2025-05-08T00:50:04.975630469Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:04.978180 env[1212]: time="2025-05-08T00:50:04.978134271Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:04.980286 env[1212]: time="2025-05-08T00:50:04.980238950Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:04.982513 env[1212]: time="2025-05-08T00:50:04.982487139Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:04.983375 env[1212]: time="2025-05-08T00:50:04.983335203Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 8 00:50:04.984904 env[1212]: time="2025-05-08T00:50:04.984870257Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 8 00:50:06.146737 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount747601534.mount: Deactivated successfully. May 8 00:50:06.598129 env[1212]: time="2025-05-08T00:50:06.597993257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:06.599425 env[1212]: time="2025-05-08T00:50:06.599373905Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:06.600582 env[1212]: time="2025-05-08T00:50:06.600548150Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:06.601733 env[1212]: time="2025-05-08T00:50:06.601694628Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:06.602117 env[1212]: time="2025-05-08T00:50:06.602079262Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 8 00:50:06.602919 env[1212]: time="2025-05-08T00:50:06.602859549Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 8 00:50:06.796568 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
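kubelet.service keeps failing on the missing config file, and systemd schedules restarts (counter 1 here, counter 2 further down, roughly ten seconds apart). The policy driving that loop can be read straight from the unit:

    # Show the restart policy, delay, and how many restarts have happened.
    systemctl show kubelet.service -p Restart -p RestartUSec -p NRestarts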
May 8 00:50:06.796749 systemd[1]: Stopped kubelet.service. May 8 00:50:06.798237 systemd[1]: Starting kubelet.service... May 8 00:50:06.879133 systemd[1]: Started kubelet.service. May 8 00:50:06.981654 kubelet[1462]: E0508 00:50:06.981615 1462 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 8 00:50:06.984929 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 8 00:50:06.985050 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 8 00:50:07.258895 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4102944309.mount: Deactivated successfully. May 8 00:50:08.271055 env[1212]: time="2025-05-08T00:50:08.271000531Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:08.272774 env[1212]: time="2025-05-08T00:50:08.272734409Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:08.279290 env[1212]: time="2025-05-08T00:50:08.279246744Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:08.281493 env[1212]: time="2025-05-08T00:50:08.281463289Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:08.282271 env[1212]: time="2025-05-08T00:50:08.282228210Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 8 00:50:08.282826 env[1212]: time="2025-05-08T00:50:08.282790341Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 8 00:50:08.757524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3665434347.mount: Deactivated successfully. 
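The PullImage entries above and below are CRI-level pulls through containerd of the images a v1.31.8 control plane needs (kube-apiserver/controller-manager/scheduler/proxy, coredns v1.11.1, pause 3.10, etcd 3.5.15-0). The same pulls can be reproduced or inspected by hand over the CRI socket from the containerd config earlier in the log, e.g.:

    # Pull one of the images seen in the log via the CRI API.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
        pull registry.k8s.io/kube-apiserver:v1.31.8

    # List the contents of the CRI image store.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images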
May 8 00:50:08.762914 env[1212]: time="2025-05-08T00:50:08.762863920Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:08.764361 env[1212]: time="2025-05-08T00:50:08.764324607Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:08.766041 env[1212]: time="2025-05-08T00:50:08.766013095Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:08.767676 env[1212]: time="2025-05-08T00:50:08.767650110Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:08.768154 env[1212]: time="2025-05-08T00:50:08.768124802Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 8 00:50:08.768769 env[1212]: time="2025-05-08T00:50:08.768734619Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 8 00:50:09.357724 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1905237990.mount: Deactivated successfully. May 8 00:50:12.551402 env[1212]: time="2025-05-08T00:50:12.551350596Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:12.553180 env[1212]: time="2025-05-08T00:50:12.553150204Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:12.555859 env[1212]: time="2025-05-08T00:50:12.555822807Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:12.557906 env[1212]: time="2025-05-08T00:50:12.557879048Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:12.558879 env[1212]: time="2025-05-08T00:50:12.558847589Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 8 00:50:16.999783 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 8 00:50:16.999988 systemd[1]: Stopped kubelet.service. May 8 00:50:17.001280 systemd[1]: Starting kubelet.service... May 8 00:50:17.011037 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 8 00:50:17.011104 systemd[1]: kubelet.service: Failed with result 'signal'. May 8 00:50:17.011292 systemd[1]: Stopped kubelet.service. May 8 00:50:17.013306 systemd[1]: Starting kubelet.service... May 8 00:50:17.033028 systemd[1]: Reloading. 
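Restart counter 2 is immediately followed by an explicit stop (the start job's control process is killed with SIGTERM) and a daemon reload, i.e. something other than the failure loop, plausibly the installer reconfiguring the service, took over at this point. The unit's history can be reconstructed from the journal:

  # Standard journalctl usage; filters for the record types seen in this log
  journalctl -u kubelet.service --no-pager | grep -E 'Scheduled restart|Stopped|Started'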
May 8 00:50:17.088975 /usr/lib/systemd/system-generators/torcx-generator[1519]: time="2025-05-08T00:50:17Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:50:17.089007 /usr/lib/systemd/system-generators/torcx-generator[1519]: time="2025-05-08T00:50:17Z" level=info msg="torcx already run" May 8 00:50:17.180889 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:50:17.181051 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:50:17.196187 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 8 00:50:17.266238 systemd[1]: Started kubelet.service. May 8 00:50:17.267978 systemd[1]: Stopping kubelet.service... May 8 00:50:17.268376 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:50:17.268694 systemd[1]: Stopped kubelet.service. May 8 00:50:17.270275 systemd[1]: Starting kubelet.service... May 8 00:50:17.378870 systemd[1]: Started kubelet.service. May 8 00:50:17.412352 kubelet[1563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:50:17.412659 kubelet[1563]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:50:17.412704 kubelet[1563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
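All three deprecation warnings point at KubeletConfiguration fields. Two of the flags have direct config-file equivalents; --pod-infra-container-image does not, because (per the server.go:206 line just below) the image garbage collector now learns the sandbox image from the CRI runtime instead. A hedged mapping:

  # KubeletConfiguration equivalents for the deprecated flags (v1beta1, valid for
  # kubelet v1.31); the socket path is an assumption, not taken from this log
  apiVersion: kubelet.config.k8s.io/v1beta1
  kind: KubeletConfiguration
  containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
  volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
  # the volumePluginDir value matches the Flexvolume path the kubelet recreates
  # further down in this log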
May 8 00:50:17.412981 kubelet[1563]: I0508 00:50:17.412949 1563 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:50:18.315709 kubelet[1563]: I0508 00:50:18.315666 1563 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:50:18.315709 kubelet[1563]: I0508 00:50:18.315699 1563 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:50:18.315970 kubelet[1563]: I0508 00:50:18.315946 1563 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:50:18.348833 kubelet[1563]: E0508 00:50:18.348800 1563 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.116:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 8 00:50:18.349749 kubelet[1563]: I0508 00:50:18.349633 1563 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:50:18.355497 kubelet[1563]: E0508 00:50:18.355474 1563 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:50:18.355605 kubelet[1563]: I0508 00:50:18.355590 1563 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:50:18.360710 kubelet[1563]: I0508 00:50:18.360691 1563 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:50:18.361677 kubelet[1563]: I0508 00:50:18.361659 1563 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:50:18.361904 kubelet[1563]: I0508 00:50:18.361880 1563 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:50:18.362122 kubelet[1563]: I0508 00:50:18.361964 1563 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:50:18.362377 kubelet[1563]: I0508 00:50:18.362365 1563 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:50:18.362429 kubelet[1563]: I0508 00:50:18.362421 1563 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:50:18.362667 kubelet[1563]: I0508 00:50:18.362655 1563 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:18.365319 kubelet[1563]: I0508 00:50:18.365302 1563 kubelet.go:408] "Attempting to sync node with API server" May 8 00:50:18.365419 kubelet[1563]: I0508 00:50:18.365408 1563 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:50:18.365573 kubelet[1563]: I0508 00:50:18.365564 1563 kubelet.go:314] "Adding apiserver pod source" May 8 00:50:18.365642 kubelet[1563]: I0508 00:50:18.365632 1563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:50:18.369269 kubelet[1563]: I0508 00:50:18.369254 1563 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:50:18.374419 kubelet[1563]: I0508 00:50:18.374402 1563 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:50:18.377659 kubelet[1563]: W0508 00:50:18.377643 1563 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 8 00:50:18.378122 kubelet[1563]: W0508 00:50:18.378073 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 8 00:50:18.378177 kubelet[1563]: E0508 00:50:18.378126 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 8 00:50:18.378424 kubelet[1563]: I0508 00:50:18.378411 1563 server.go:1269] "Started kubelet" May 8 00:50:18.379087 kubelet[1563]: W0508 00:50:18.379047 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 8 00:50:18.379145 kubelet[1563]: E0508 00:50:18.379096 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.116:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 8 00:50:18.379239 kubelet[1563]: I0508 00:50:18.379216 1563 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:50:18.380130 kubelet[1563]: I0508 00:50:18.380111 1563 server.go:460] "Adding debug handlers to kubelet server" May 8 00:50:18.382176 kubelet[1563]: I0508 00:50:18.382125 1563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:50:18.382387 kubelet[1563]: I0508 00:50:18.382364 1563 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:50:18.383156 kubelet[1563]: E0508 00:50:18.383109 1563 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:50:18.383618 kubelet[1563]: E0508 00:50:18.382507 1563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66f19be72ed3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:50:18.378391251 +0000 UTC m=+0.995287553,LastTimestamp:2025-05-08 00:50:18.378391251 +0000 UTC m=+0.995287553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:50:18.384822 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
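Every "dial tcp 10.0.0.116:6443 ... connection refused" in this stretch is the expected bootstrap chicken-and-egg: the kubelet is starting before the kube-apiserver static pod it is about to launch, so its informer watches, the node-lease controller, and event posting all fail until that pod is up. Once the apiserver container starts (below, around 00:50:19), a readiness probe against the same endpoint would begin answering:

  # Probes the endpoint seen in the log; -k skips TLS verification, acceptable
  # for a quick liveness check but not for anything security-sensitive
  curl -k https://10.0.0.116:6443/readyz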
May 8 00:50:18.384968 kubelet[1563]: I0508 00:50:18.384943 1563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:50:18.385099 kubelet[1563]: I0508 00:50:18.385086 1563 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:50:18.385683 kubelet[1563]: I0508 00:50:18.385664 1563 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:50:18.385902 kubelet[1563]: I0508 00:50:18.385882 1563 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:50:18.385957 kubelet[1563]: I0508 00:50:18.385945 1563 reconciler.go:26] "Reconciler: start to sync state" May 8 00:50:18.386108 kubelet[1563]: W0508 00:50:18.386070 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 8 00:50:18.386150 kubelet[1563]: E0508 00:50:18.386118 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 8 00:50:18.386150 kubelet[1563]: I0508 00:50:18.386097 1563 factory.go:221] Registration of the systemd container factory successfully May 8 00:50:18.386197 kubelet[1563]: I0508 00:50:18.386186 1563 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:50:18.386959 kubelet[1563]: E0508 00:50:18.386938 1563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:50:18.387258 kubelet[1563]: E0508 00:50:18.387223 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="200ms" May 8 00:50:18.387663 kubelet[1563]: I0508 00:50:18.387635 1563 factory.go:221] Registration of the containerd container factory successfully May 8 00:50:18.398672 kubelet[1563]: I0508 00:50:18.398644 1563 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:50:18.398672 kubelet[1563]: I0508 00:50:18.398658 1563 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:50:18.398764 kubelet[1563]: I0508 00:50:18.398680 1563 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:18.400036 kubelet[1563]: I0508 00:50:18.400020 1563 policy_none.go:49] "None policy: Start" May 8 00:50:18.400520 kubelet[1563]: I0508 00:50:18.400500 1563 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:50:18.400563 kubelet[1563]: I0508 00:50:18.400538 1563 state_mem.go:35] "Initializing new in-memory state store" May 8 00:50:18.406427 kubelet[1563]: I0508 00:50:18.406387 1563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:50:18.407510 systemd[1]: Created slice kubepods.slice. May 8 00:50:18.408020 kubelet[1563]: I0508 00:50:18.408000 1563 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:50:18.408020 kubelet[1563]: I0508 00:50:18.408018 1563 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:50:18.408098 kubelet[1563]: I0508 00:50:18.408035 1563 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:50:18.408098 kubelet[1563]: E0508 00:50:18.408068 1563 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:50:18.410444 kubelet[1563]: W0508 00:50:18.410396 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 8 00:50:18.410571 kubelet[1563]: E0508 00:50:18.410551 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 8 00:50:18.411789 systemd[1]: Created slice kubepods-burstable.slice. May 8 00:50:18.414387 systemd[1]: Created slice kubepods-besteffort.slice. May 8 00:50:18.432087 kubelet[1563]: I0508 00:50:18.432067 1563 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:50:18.432283 kubelet[1563]: I0508 00:50:18.432190 1563 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:50:18.432283 kubelet[1563]: I0508 00:50:18.432199 1563 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:50:18.432575 kubelet[1563]: I0508 00:50:18.432560 1563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:50:18.434215 kubelet[1563]: E0508 00:50:18.434198 1563 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 8 00:50:18.514883 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 8 00:50:18.528100 systemd[1]: Created slice kubepods-burstable-pod8609bc908c159bf81667179c60cc1d8c.slice. May 8 00:50:18.533635 kubelet[1563]: I0508 00:50:18.533615 1563 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:50:18.534081 kubelet[1563]: E0508 00:50:18.534053 1563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" May 8 00:50:18.540932 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. 
May 8 00:50:18.587979 kubelet[1563]: I0508 00:50:18.587357 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:50:18.587979 kubelet[1563]: I0508 00:50:18.587393 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8609bc908c159bf81667179c60cc1d8c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8609bc908c159bf81667179c60cc1d8c\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:18.587979 kubelet[1563]: I0508 00:50:18.587409 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:18.587979 kubelet[1563]: I0508 00:50:18.587425 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:18.587979 kubelet[1563]: I0508 00:50:18.587452 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:18.588149 kubelet[1563]: I0508 00:50:18.587469 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8609bc908c159bf81667179c60cc1d8c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8609bc908c159bf81667179c60cc1d8c\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:18.588149 kubelet[1563]: I0508 00:50:18.587490 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8609bc908c159bf81667179c60cc1d8c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8609bc908c159bf81667179c60cc1d8c\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:18.588149 kubelet[1563]: I0508 00:50:18.587504 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:18.588149 kubelet[1563]: I0508 00:50:18.587518 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " 
pod="kube-system/kube-controller-manager-localhost" May 8 00:50:18.588149 kubelet[1563]: E0508 00:50:18.587548 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="400ms" May 8 00:50:18.735571 kubelet[1563]: I0508 00:50:18.735539 1563 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:50:18.735926 kubelet[1563]: E0508 00:50:18.735898 1563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" May 8 00:50:18.826644 kubelet[1563]: E0508 00:50:18.826614 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:18.827310 env[1212]: time="2025-05-08T00:50:18.827247122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 8 00:50:18.840707 kubelet[1563]: E0508 00:50:18.840420 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:18.841179 env[1212]: time="2025-05-08T00:50:18.841139567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8609bc908c159bf81667179c60cc1d8c,Namespace:kube-system,Attempt:0,}" May 8 00:50:18.842588 kubelet[1563]: E0508 00:50:18.842559 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:18.843074 env[1212]: time="2025-05-08T00:50:18.842899515Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 8 00:50:18.988578 kubelet[1563]: E0508 00:50:18.988545 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="800ms" May 8 00:50:19.137492 kubelet[1563]: I0508 00:50:19.137188 1563 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:50:19.138031 kubelet[1563]: E0508 00:50:19.138003 1563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.116:6443/api/v1/nodes\": dial tcp 10.0.0.116:6443: connect: connection refused" node="localhost" May 8 00:50:19.379889 kubelet[1563]: W0508 00:50:19.379862 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 8 00:50:19.380011 kubelet[1563]: E0508 00:50:19.379901 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.116:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" 
logger="UnhandledError" May 8 00:50:19.470471 kubelet[1563]: W0508 00:50:19.470220 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 8 00:50:19.470762 kubelet[1563]: E0508 00:50:19.470743 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.116:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 8 00:50:19.496637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount938509118.mount: Deactivated successfully. May 8 00:50:19.500436 env[1212]: time="2025-05-08T00:50:19.500368453Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.502884 env[1212]: time="2025-05-08T00:50:19.502857159Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.503676 env[1212]: time="2025-05-08T00:50:19.503655064Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.504910 env[1212]: time="2025-05-08T00:50:19.504876560Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.506272 env[1212]: time="2025-05-08T00:50:19.506249056Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.507403 env[1212]: time="2025-05-08T00:50:19.507376964Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.508870 env[1212]: time="2025-05-08T00:50:19.508847215Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.509896 env[1212]: time="2025-05-08T00:50:19.509855814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.511977 env[1212]: time="2025-05-08T00:50:19.511952939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.514342 env[1212]: time="2025-05-08T00:50:19.514316886Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.515327 env[1212]: time="2025-05-08T00:50:19.515297040Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.516469 env[1212]: time="2025-05-08T00:50:19.516442977Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:19.551230 env[1212]: time="2025-05-08T00:50:19.550961541Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:19.551230 env[1212]: time="2025-05-08T00:50:19.550999682Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:19.551230 env[1212]: time="2025-05-08T00:50:19.551009577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:19.551562 env[1212]: time="2025-05-08T00:50:19.551524394Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a9427de8ab27cf219934b0f22d548663b020f6f76c5339d12d312cc25c9ddf6 pid=1616 runtime=io.containerd.runc.v2 May 8 00:50:19.552239 env[1212]: time="2025-05-08T00:50:19.552179392Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:19.552239 env[1212]: time="2025-05-08T00:50:19.552210922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:19.552367 env[1212]: time="2025-05-08T00:50:19.552229391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:19.552603 env[1212]: time="2025-05-08T00:50:19.552555669Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1bea576bd03eb0119fc7d880959090259bfb6af98815243b0860ae0e6f2ef5bb pid=1626 runtime=io.containerd.runc.v2 May 8 00:50:19.552927 env[1212]: time="2025-05-08T00:50:19.552874654Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:19.552927 env[1212]: time="2025-05-08T00:50:19.552907306Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:19.552927 env[1212]: time="2025-05-08T00:50:19.552917282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:19.553055 env[1212]: time="2025-05-08T00:50:19.553024852Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c40db537c127bb325301970f7eaad8c636a49a42986728490e0fae816bdb0e2e pid=1617 runtime=io.containerd.runc.v2 May 8 00:50:19.562118 systemd[1]: Started cri-containerd-0a9427de8ab27cf219934b0f22d548663b020f6f76c5339d12d312cc25c9ddf6.scope. May 8 00:50:19.568637 systemd[1]: Started cri-containerd-1bea576bd03eb0119fc7d880959090259bfb6af98815243b0860ae0e6f2ef5bb.scope. May 8 00:50:19.569574 systemd[1]: Started cri-containerd-c40db537c127bb325301970f7eaad8c636a49a42986728490e0fae816bdb0e2e.scope. 
May 8 00:50:19.624424 env[1212]: time="2025-05-08T00:50:19.624371042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"c40db537c127bb325301970f7eaad8c636a49a42986728490e0fae816bdb0e2e\"" May 8 00:50:19.633884 kubelet[1563]: E0508 00:50:19.633856 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:19.635603 env[1212]: time="2025-05-08T00:50:19.635573442Z" level=info msg="CreateContainer within sandbox \"c40db537c127bb325301970f7eaad8c636a49a42986728490e0fae816bdb0e2e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 8 00:50:19.643704 env[1212]: time="2025-05-08T00:50:19.643668836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:8609bc908c159bf81667179c60cc1d8c,Namespace:kube-system,Attempt:0,} returns sandbox id \"1bea576bd03eb0119fc7d880959090259bfb6af98815243b0860ae0e6f2ef5bb\"" May 8 00:50:19.644186 env[1212]: time="2025-05-08T00:50:19.644156809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a9427de8ab27cf219934b0f22d548663b020f6f76c5339d12d312cc25c9ddf6\"" May 8 00:50:19.644419 kubelet[1563]: E0508 00:50:19.644395 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:19.645138 kubelet[1563]: E0508 00:50:19.644944 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:19.646150 env[1212]: time="2025-05-08T00:50:19.646069361Z" level=info msg="CreateContainer within sandbox \"1bea576bd03eb0119fc7d880959090259bfb6af98815243b0860ae0e6f2ef5bb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 8 00:50:19.649004 env[1212]: time="2025-05-08T00:50:19.648973566Z" level=info msg="CreateContainer within sandbox \"0a9427de8ab27cf219934b0f22d548663b020f6f76c5339d12d312cc25c9ddf6\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 8 00:50:19.658560 env[1212]: time="2025-05-08T00:50:19.658519339Z" level=info msg="CreateContainer within sandbox \"1bea576bd03eb0119fc7d880959090259bfb6af98815243b0860ae0e6f2ef5bb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"41fdb1c41f2bb369322fc1f8c8416d0936a651b472c962df6023826f635f3019\"" May 8 00:50:19.659113 env[1212]: time="2025-05-08T00:50:19.659079908Z" level=info msg="StartContainer for \"41fdb1c41f2bb369322fc1f8c8416d0936a651b472c962df6023826f635f3019\"" May 8 00:50:19.664287 env[1212]: time="2025-05-08T00:50:19.664244536Z" level=info msg="CreateContainer within sandbox \"0a9427de8ab27cf219934b0f22d548663b020f6f76c5339d12d312cc25c9ddf6\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"899007aecf16c628d704206485972894e7b1768f4b37fe33a41f9fb82ef85241\"" May 8 00:50:19.664662 env[1212]: time="2025-05-08T00:50:19.664631269Z" level=info msg="StartContainer for \"899007aecf16c628d704206485972894e7b1768f4b37fe33a41f9fb82ef85241\"" May 8 00:50:19.665628 env[1212]: time="2025-05-08T00:50:19.665595798Z" level=info 
msg="CreateContainer within sandbox \"c40db537c127bb325301970f7eaad8c636a49a42986728490e0fae816bdb0e2e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"c3a86143adfeec45fa4d2e5e6f91864b898de45a3bc8d88bae5334a4bda8795d\"" May 8 00:50:19.666050 env[1212]: time="2025-05-08T00:50:19.666014742Z" level=info msg="StartContainer for \"c3a86143adfeec45fa4d2e5e6f91864b898de45a3bc8d88bae5334a4bda8795d\"" May 8 00:50:19.674811 systemd[1]: Started cri-containerd-41fdb1c41f2bb369322fc1f8c8416d0936a651b472c962df6023826f635f3019.scope. May 8 00:50:19.686195 systemd[1]: Started cri-containerd-c3a86143adfeec45fa4d2e5e6f91864b898de45a3bc8d88bae5334a4bda8795d.scope. May 8 00:50:19.695373 systemd[1]: Started cri-containerd-899007aecf16c628d704206485972894e7b1768f4b37fe33a41f9fb82ef85241.scope. May 8 00:50:19.718304 kubelet[1563]: E0508 00:50:19.718193 1563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.116:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.116:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183d66f19be72ed3 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-08 00:50:18.378391251 +0000 UTC m=+0.995287553,LastTimestamp:2025-05-08 00:50:18.378391251 +0000 UTC m=+0.995287553,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 8 00:50:19.743269 env[1212]: time="2025-05-08T00:50:19.741581182Z" level=info msg="StartContainer for \"41fdb1c41f2bb369322fc1f8c8416d0936a651b472c962df6023826f635f3019\" returns successfully" May 8 00:50:19.761921 env[1212]: time="2025-05-08T00:50:19.758320239Z" level=info msg="StartContainer for \"899007aecf16c628d704206485972894e7b1768f4b37fe33a41f9fb82ef85241\" returns successfully" May 8 00:50:19.778735 env[1212]: time="2025-05-08T00:50:19.776555549Z" level=info msg="StartContainer for \"c3a86143adfeec45fa4d2e5e6f91864b898de45a3bc8d88bae5334a4bda8795d\" returns successfully" May 8 00:50:19.793264 kubelet[1563]: E0508 00:50:19.789613 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.116:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.116:6443: connect: connection refused" interval="1.6s" May 8 00:50:19.888804 kubelet[1563]: W0508 00:50:19.888746 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.116:6443: connect: connection refused May 8 00:50:19.888931 kubelet[1563]: E0508 00:50:19.888818 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.116:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.116:6443: connect: connection refused" logger="UnhandledError" May 8 00:50:19.939100 kubelet[1563]: I0508 00:50:19.939070 1563 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:50:20.419128 kubelet[1563]: E0508 00:50:20.419095 1563 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:20.421044 kubelet[1563]: E0508 00:50:20.421021 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:20.422893 kubelet[1563]: E0508 00:50:20.422871 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:21.424280 kubelet[1563]: E0508 00:50:21.424238 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:21.619720 kubelet[1563]: E0508 00:50:21.619686 1563 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 8 00:50:21.724206 kubelet[1563]: I0508 00:50:21.723970 1563 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:50:21.724206 kubelet[1563]: E0508 00:50:21.724013 1563 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 8 00:50:22.381449 kubelet[1563]: I0508 00:50:22.381401 1563 apiserver.go:52] "Watching apiserver" May 8 00:50:22.386735 kubelet[1563]: I0508 00:50:22.386696 1563 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:50:23.346020 kubelet[1563]: E0508 00:50:23.345988 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:23.425825 kubelet[1563]: E0508 00:50:23.425795 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:23.845582 systemd[1]: Reloading. May 8 00:50:23.895073 /usr/lib/systemd/system-generators/torcx-generator[1858]: time="2025-05-08T00:50:23Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 8 00:50:23.895102 /usr/lib/systemd/system-generators/torcx-generator[1858]: time="2025-05-08T00:50:23Z" level=info msg="torcx already run" May 8 00:50:23.950805 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 8 00:50:23.950825 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 8 00:50:23.965949 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 8 00:50:23.971418 kubelet[1563]: E0508 00:50:23.971275 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:24.050556 systemd[1]: Stopping kubelet.service... May 8 00:50:24.075955 systemd[1]: kubelet.service: Deactivated successfully. May 8 00:50:24.076165 systemd[1]: Stopped kubelet.service. May 8 00:50:24.076217 systemd[1]: kubelet.service: Consumed 1.143s CPU time. May 8 00:50:24.077859 systemd[1]: Starting kubelet.service... May 8 00:50:24.162815 systemd[1]: Started kubelet.service. May 8 00:50:24.197700 kubelet[1899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:50:24.197700 kubelet[1899]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 8 00:50:24.197700 kubelet[1899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 8 00:50:24.198048 kubelet[1899]: I0508 00:50:24.197762 1899 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 8 00:50:24.205103 kubelet[1899]: I0508 00:50:24.205066 1899 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 8 00:50:24.205103 kubelet[1899]: I0508 00:50:24.205093 1899 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 8 00:50:24.205324 kubelet[1899]: I0508 00:50:24.205298 1899 server.go:929] "Client rotation is on, will bootstrap in background" May 8 00:50:24.206584 kubelet[1899]: I0508 00:50:24.206564 1899 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 8 00:50:24.208297 kubelet[1899]: I0508 00:50:24.208270 1899 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 8 00:50:24.212037 kubelet[1899]: E0508 00:50:24.211991 1899 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 8 00:50:24.212037 kubelet[1899]: I0508 00:50:24.212023 1899 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 8 00:50:24.214634 kubelet[1899]: I0508 00:50:24.214616 1899 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 8 00:50:24.214726 kubelet[1899]: I0508 00:50:24.214713 1899 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 8 00:50:24.214829 kubelet[1899]: I0508 00:50:24.214806 1899 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 8 00:50:24.214973 kubelet[1899]: I0508 00:50:24.214830 1899 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 8 00:50:24.215047 kubelet[1899]: I0508 00:50:24.214982 1899 topology_manager.go:138] "Creating topology manager with none policy" May 8 00:50:24.215047 kubelet[1899]: I0508 00:50:24.214991 1899 container_manager_linux.go:300] "Creating device plugin manager" May 8 00:50:24.215047 kubelet[1899]: I0508 00:50:24.215017 1899 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:24.215126 kubelet[1899]: I0508 00:50:24.215114 1899 kubelet.go:408] "Attempting to sync node with API server" May 8 00:50:24.215169 kubelet[1899]: I0508 00:50:24.215160 1899 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 8 00:50:24.215209 kubelet[1899]: I0508 00:50:24.215199 1899 kubelet.go:314] "Adding apiserver pod source" May 8 00:50:24.215233 kubelet[1899]: I0508 00:50:24.215213 1899 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 8 00:50:24.220449 kubelet[1899]: I0508 00:50:24.216902 1899 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 8 00:50:24.220449 kubelet[1899]: I0508 00:50:24.217352 1899 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 8 00:50:24.220449 kubelet[1899]: I0508 00:50:24.217716 1899 server.go:1269] "Started kubelet" May 8 00:50:24.220449 kubelet[1899]: I0508 00:50:24.219351 1899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 8 00:50:24.220449 
kubelet[1899]: I0508 00:50:24.219595 1899 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 8 00:50:24.220449 kubelet[1899]: I0508 00:50:24.219641 1899 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 8 00:50:24.220449 kubelet[1899]: I0508 00:50:24.220415 1899 server.go:460] "Adding debug handlers to kubelet server" May 8 00:50:24.230668 kubelet[1899]: E0508 00:50:24.223878 1899 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 8 00:50:24.230668 kubelet[1899]: I0508 00:50:24.224591 1899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 8 00:50:24.230668 kubelet[1899]: I0508 00:50:24.224726 1899 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 8 00:50:24.230668 kubelet[1899]: I0508 00:50:24.224863 1899 volume_manager.go:289] "Starting Kubelet Volume Manager" May 8 00:50:24.230668 kubelet[1899]: I0508 00:50:24.224944 1899 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 8 00:50:24.230668 kubelet[1899]: I0508 00:50:24.225104 1899 reconciler.go:26] "Reconciler: start to sync state" May 8 00:50:24.230668 kubelet[1899]: I0508 00:50:24.229578 1899 factory.go:221] Registration of the systemd container factory successfully May 8 00:50:24.230668 kubelet[1899]: I0508 00:50:24.229716 1899 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 8 00:50:24.235752 kubelet[1899]: I0508 00:50:24.235717 1899 factory.go:221] Registration of the containerd container factory successfully May 8 00:50:24.238747 kubelet[1899]: E0508 00:50:24.238707 1899 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 8 00:50:24.266499 kubelet[1899]: I0508 00:50:24.266420 1899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 8 00:50:24.268270 kubelet[1899]: I0508 00:50:24.268235 1899 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 8 00:50:24.268270 kubelet[1899]: I0508 00:50:24.268262 1899 status_manager.go:217] "Starting to sync pod status with apiserver" May 8 00:50:24.268369 kubelet[1899]: I0508 00:50:24.268279 1899 kubelet.go:2321] "Starting kubelet main sync loop" May 8 00:50:24.268369 kubelet[1899]: E0508 00:50:24.268324 1899 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 8 00:50:24.275167 kubelet[1899]: I0508 00:50:24.275148 1899 cpu_manager.go:214] "Starting CPU manager" policy="none" May 8 00:50:24.275289 kubelet[1899]: I0508 00:50:24.275273 1899 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 8 00:50:24.275361 kubelet[1899]: I0508 00:50:24.275352 1899 state_mem.go:36] "Initialized new in-memory state store" May 8 00:50:24.275772 kubelet[1899]: I0508 00:50:24.275750 1899 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 8 00:50:24.275874 kubelet[1899]: I0508 00:50:24.275847 1899 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 8 00:50:24.275931 kubelet[1899]: I0508 00:50:24.275921 1899 policy_none.go:49] "None policy: Start" May 8 00:50:24.276559 kubelet[1899]: I0508 00:50:24.276540 1899 memory_manager.go:170] "Starting memorymanager" policy="None" May 8 00:50:24.276616 kubelet[1899]: I0508 00:50:24.276569 1899 state_mem.go:35] "Initializing new in-memory state store" May 8 00:50:24.276717 kubelet[1899]: I0508 00:50:24.276701 1899 state_mem.go:75] "Updated machine memory state" May 8 00:50:24.281884 kubelet[1899]: I0508 00:50:24.281853 1899 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 8 00:50:24.282028 kubelet[1899]: I0508 00:50:24.282012 1899 eviction_manager.go:189] "Eviction manager: starting control loop" May 8 00:50:24.282072 kubelet[1899]: I0508 00:50:24.282030 1899 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 8 00:50:24.283813 kubelet[1899]: I0508 00:50:24.283785 1899 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 8 00:50:24.374815 kubelet[1899]: E0508 00:50:24.374764 1899 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:50:24.374815 kubelet[1899]: E0508 00:50:24.374816 1899 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 8 00:50:24.385943 kubelet[1899]: I0508 00:50:24.385924 1899 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 8 00:50:24.392367 kubelet[1899]: I0508 00:50:24.392340 1899 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 8 00:50:24.392545 kubelet[1899]: I0508 00:50:24.392531 1899 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 8 00:50:24.526270 kubelet[1899]: I0508 00:50:24.526157 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 8 00:50:24.526270 kubelet[1899]: I0508 00:50:24.526215 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8609bc908c159bf81667179c60cc1d8c-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"8609bc908c159bf81667179c60cc1d8c\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:24.526270 kubelet[1899]: I0508 00:50:24.526247 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8609bc908c159bf81667179c60cc1d8c-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"8609bc908c159bf81667179c60cc1d8c\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:24.527244 kubelet[1899]: I0508 00:50:24.527216 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:24.527427 kubelet[1899]: I0508 00:50:24.527398 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:24.527762 kubelet[1899]: I0508 00:50:24.527582 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8609bc908c159bf81667179c60cc1d8c-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"8609bc908c159bf81667179c60cc1d8c\") " pod="kube-system/kube-apiserver-localhost" May 8 00:50:24.527984 kubelet[1899]: I0508 00:50:24.527888 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:24.528204 kubelet[1899]: I0508 00:50:24.528151 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:24.528457 kubelet[1899]: I0508 00:50:24.528328 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 8 00:50:24.675241 kubelet[1899]: E0508 00:50:24.675216 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:24.675444 kubelet[1899]: E0508 00:50:24.675404 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:24.675522 
kubelet[1899]: E0508 00:50:24.675498 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:24.902481 sudo[1933]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 8 00:50:24.902705 sudo[1933]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 8 00:50:25.215533 kubelet[1899]: I0508 00:50:25.215427 1899 apiserver.go:52] "Watching apiserver" May 8 00:50:25.225282 kubelet[1899]: I0508 00:50:25.225253 1899 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 8 00:50:25.275089 kubelet[1899]: E0508 00:50:25.275063 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:25.275253 kubelet[1899]: E0508 00:50:25.275098 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:25.291590 kubelet[1899]: E0508 00:50:25.291546 1899 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 8 00:50:25.291733 kubelet[1899]: E0508 00:50:25.291711 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:25.318017 kubelet[1899]: I0508 00:50:25.317960 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.317944451 podStartE2EDuration="2.317944451s" podCreationTimestamp="2025-05-08 00:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:50:25.310552947 +0000 UTC m=+1.144602404" watchObservedRunningTime="2025-05-08 00:50:25.317944451 +0000 UTC m=+1.151993908" May 8 00:50:25.324620 kubelet[1899]: I0508 00:50:25.324570 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.324557641 podStartE2EDuration="2.324557641s" podCreationTimestamp="2025-05-08 00:50:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:50:25.318309431 +0000 UTC m=+1.152358888" watchObservedRunningTime="2025-05-08 00:50:25.324557641 +0000 UTC m=+1.158607098" May 8 00:50:25.332622 kubelet[1899]: I0508 00:50:25.332579 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.332569226 podStartE2EDuration="1.332569226s" podCreationTimestamp="2025-05-08 00:50:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:50:25.324762386 +0000 UTC m=+1.158811843" watchObservedRunningTime="2025-05-08 00:50:25.332569226 +0000 UTC m=+1.166618683" May 8 00:50:25.361693 sudo[1933]: pam_unix(sudo:session): session closed for user root May 8 00:50:26.276525 kubelet[1899]: E0508 00:50:26.276492 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:27.278341 kubelet[1899]: E0508 00:50:27.278311 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:27.393376 sudo[1315]: pam_unix(sudo:session): session closed for user root May 8 00:50:27.394769 sshd[1311]: pam_unix(sshd:session): session closed for user core May 8 00:50:27.397346 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:54224.service: Deactivated successfully. May 8 00:50:27.398075 systemd[1]: session-5.scope: Deactivated successfully. May 8 00:50:27.398245 systemd[1]: session-5.scope: Consumed 6.780s CPU time. May 8 00:50:27.398928 systemd-logind[1202]: Session 5 logged out. Waiting for processes to exit. May 8 00:50:27.399854 systemd-logind[1202]: Removed session 5. May 8 00:50:30.114207 kubelet[1899]: I0508 00:50:30.114168 1899 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 8 00:50:30.114575 env[1212]: time="2025-05-08T00:50:30.114505260Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 8 00:50:30.114938 kubelet[1899]: I0508 00:50:30.114907 1899 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 8 00:50:30.202813 kubelet[1899]: E0508 00:50:30.202786 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:30.282155 kubelet[1899]: E0508 00:50:30.282124 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:30.907836 systemd[1]: Created slice kubepods-besteffort-pod125a93c1_9b12_467f_a11d_369cd2c9fb46.slice. May 8 00:50:30.918907 systemd[1]: Created slice kubepods-burstable-pod9b2edf20_f78b_4920_b99a_8341ff411f0d.slice. 
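The recurring dns.go:153 "Nameserver limits exceeded" errors above come from the kubelet truncating the host's resolv.conf: the resolver honors at most three nameservers (a limit inherited from glibc's MAXNS), so when more are configured the kubelet keeps the first three and logs the applied line, here 1.1.1.1 1.0.0.1 8.8.8.8. A minimal sketch of that truncation in Python, assuming a standard resolv.conf layout; the constant name and message are illustrative, not kubelet's actual code:

# Sketch of the cap behind the "Nameserver limits exceeded" warning.
# MAX_NAMESERVERS mirrors glibc's MAXNS; everything past it is dropped.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf: str):
    """Return the nameservers that would be applied and whether any were dropped."""
    servers = []
    for line in resolv_conf.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:MAX_NAMESERVERS], len(servers) > MAX_NAMESERVERS

conf = "nameserver 1.1.1.1\nnameserver 1.0.0.1\nnameserver 8.8.8.8\nnameserver 9.9.9.9\n"
kept, dropped = applied_nameservers(conf)
if dropped:
    print("Nameserver limits exceeded, applied nameserver line:", " ".join(kept))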
May 8 00:50:30.973779 kubelet[1899]: I0508 00:50:30.973749 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-hostproc\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.973991 kubelet[1899]: I0508 00:50:30.973969 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-host-proc-sys-kernel\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974104 kubelet[1899]: I0508 00:50:30.974087 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rdsd4\" (UniqueName: \"kubernetes.io/projected/125a93c1-9b12-467f-a11d-369cd2c9fb46-kube-api-access-rdsd4\") pod \"kube-proxy-gl4cn\" (UID: \"125a93c1-9b12-467f-a11d-369cd2c9fb46\") " pod="kube-system/kube-proxy-gl4cn" May 8 00:50:30.974199 kubelet[1899]: I0508 00:50:30.974185 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-bpf-maps\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974326 kubelet[1899]: I0508 00:50:30.974286 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-cgroup\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974468 kubelet[1899]: I0508 00:50:30.974440 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-etc-cni-netd\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974526 kubelet[1899]: I0508 00:50:30.974479 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-lib-modules\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974526 kubelet[1899]: I0508 00:50:30.974496 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-host-proc-sys-net\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974526 kubelet[1899]: I0508 00:50:30.974511 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b2edf20-f78b-4920-b99a-8341ff411f0d-hubble-tls\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974608 kubelet[1899]: I0508 00:50:30.974540 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/125a93c1-9b12-467f-a11d-369cd2c9fb46-lib-modules\") pod \"kube-proxy-gl4cn\" (UID: \"125a93c1-9b12-467f-a11d-369cd2c9fb46\") " pod="kube-system/kube-proxy-gl4cn" May 8 00:50:30.974608 kubelet[1899]: I0508 00:50:30.974555 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-run\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974608 kubelet[1899]: I0508 00:50:30.974570 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cni-path\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974608 kubelet[1899]: I0508 00:50:30.974585 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b2edf20-f78b-4920-b99a-8341ff411f0d-clustermesh-secrets\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974709 kubelet[1899]: I0508 00:50:30.974601 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/125a93c1-9b12-467f-a11d-369cd2c9fb46-xtables-lock\") pod \"kube-proxy-gl4cn\" (UID: \"125a93c1-9b12-467f-a11d-369cd2c9fb46\") " pod="kube-system/kube-proxy-gl4cn" May 8 00:50:30.974709 kubelet[1899]: I0508 00:50:30.974624 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-xtables-lock\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974709 kubelet[1899]: I0508 00:50:30.974655 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-config-path\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:30.974709 kubelet[1899]: I0508 00:50:30.974672 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/125a93c1-9b12-467f-a11d-369cd2c9fb46-kube-proxy\") pod \"kube-proxy-gl4cn\" (UID: \"125a93c1-9b12-467f-a11d-369cd2c9fb46\") " pod="kube-system/kube-proxy-gl4cn" May 8 00:50:30.974709 kubelet[1899]: I0508 00:50:30.974689 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nh7jz\" (UniqueName: \"kubernetes.io/projected/9b2edf20-f78b-4920-b99a-8341ff411f0d-kube-api-access-nh7jz\") pod \"cilium-j68hn\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " pod="kube-system/cilium-j68hn" May 8 00:50:31.053087 systemd[1]: Created slice kubepods-besteffort-pod6fb0cb8c_8a27_4c31_a8d7_513b80d42d93.slice. 
May 8 00:50:31.075717 kubelet[1899]: I0508 00:50:31.075671 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fb0cb8c-8a27-4c31-a8d7-513b80d42d93-cilium-config-path\") pod \"cilium-operator-5d85765b45-n8759\" (UID: \"6fb0cb8c-8a27-4c31-a8d7-513b80d42d93\") " pod="kube-system/cilium-operator-5d85765b45-n8759" May 8 00:50:31.076157 kubelet[1899]: I0508 00:50:31.076138 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghj4z\" (UniqueName: \"kubernetes.io/projected/6fb0cb8c-8a27-4c31-a8d7-513b80d42d93-kube-api-access-ghj4z\") pod \"cilium-operator-5d85765b45-n8759\" (UID: \"6fb0cb8c-8a27-4c31-a8d7-513b80d42d93\") " pod="kube-system/cilium-operator-5d85765b45-n8759" May 8 00:50:31.076451 kubelet[1899]: I0508 00:50:31.076409 1899 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 8 00:50:31.216264 kubelet[1899]: E0508 00:50:31.216165 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:31.217359 env[1212]: time="2025-05-08T00:50:31.216905062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gl4cn,Uid:125a93c1-9b12-467f-a11d-369cd2c9fb46,Namespace:kube-system,Attempt:0,}" May 8 00:50:31.221524 kubelet[1899]: E0508 00:50:31.221492 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:31.222000 env[1212]: time="2025-05-08T00:50:31.221966578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j68hn,Uid:9b2edf20-f78b-4920-b99a-8341ff411f0d,Namespace:kube-system,Attempt:0,}" May 8 00:50:31.233788 env[1212]: time="2025-05-08T00:50:31.233717243Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:31.233788 env[1212]: time="2025-05-08T00:50:31.233755225Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:31.233955 env[1212]: time="2025-05-08T00:50:31.233771715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:31.234152 env[1212]: time="2025-05-08T00:50:31.234109512Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/62107a15ee87d81fd17d81527cf6711984362dffddc2a7981a74cffa3ebee33b pid=1992 runtime=io.containerd.runc.v2 May 8 00:50:31.235679 env[1212]: time="2025-05-08T00:50:31.235607907Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:31.235679 env[1212]: time="2025-05-08T00:50:31.235639286Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:31.235679 env[1212]: time="2025-05-08T00:50:31.235649091Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:31.235795 env[1212]: time="2025-05-08T00:50:31.235753752Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4 pid=2005 runtime=io.containerd.runc.v2 May 8 00:50:31.246040 systemd[1]: Started cri-containerd-1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4.scope. May 8 00:50:31.251588 systemd[1]: Started cri-containerd-62107a15ee87d81fd17d81527cf6711984362dffddc2a7981a74cffa3ebee33b.scope. May 8 00:50:31.286236 env[1212]: time="2025-05-08T00:50:31.286197420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j68hn,Uid:9b2edf20-f78b-4920-b99a-8341ff411f0d,Namespace:kube-system,Attempt:0,} returns sandbox id \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\"" May 8 00:50:31.288132 kubelet[1899]: E0508 00:50:31.288105 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:31.288406 env[1212]: time="2025-05-08T00:50:31.288347116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gl4cn,Uid:125a93c1-9b12-467f-a11d-369cd2c9fb46,Namespace:kube-system,Attempt:0,} returns sandbox id \"62107a15ee87d81fd17d81527cf6711984362dffddc2a7981a74cffa3ebee33b\"" May 8 00:50:31.289636 kubelet[1899]: E0508 00:50:31.289615 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:31.290262 env[1212]: time="2025-05-08T00:50:31.290237620Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 8 00:50:31.293188 env[1212]: time="2025-05-08T00:50:31.293153324Z" level=info msg="CreateContainer within sandbox \"62107a15ee87d81fd17d81527cf6711984362dffddc2a7981a74cffa3ebee33b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 8 00:50:31.308061 env[1212]: time="2025-05-08T00:50:31.308013244Z" level=info msg="CreateContainer within sandbox \"62107a15ee87d81fd17d81527cf6711984362dffddc2a7981a74cffa3ebee33b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ab494ee1ac5ddfdf02e72008009b77716e80c2952311a3afb983766adf0a296a\"" May 8 00:50:31.308724 env[1212]: time="2025-05-08T00:50:31.308694282Z" level=info msg="StartContainer for \"ab494ee1ac5ddfdf02e72008009b77716e80c2952311a3afb983766adf0a296a\"" May 8 00:50:31.324351 systemd[1]: Started cri-containerd-ab494ee1ac5ddfdf02e72008009b77716e80c2952311a3afb983766adf0a296a.scope. May 8 00:50:31.356088 kubelet[1899]: E0508 00:50:31.356044 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:31.356910 env[1212]: time="2025-05-08T00:50:31.356870025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n8759,Uid:6fb0cb8c-8a27-4c31-a8d7-513b80d42d93,Namespace:kube-system,Attempt:0,}" May 8 00:50:31.372370 env[1212]: time="2025-05-08T00:50:31.372267860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:31.372370 env[1212]: time="2025-05-08T00:50:31.372318089Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:31.372370 env[1212]: time="2025-05-08T00:50:31.372357833Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:31.372620 env[1212]: time="2025-05-08T00:50:31.372583645Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542 pid=2106 runtime=io.containerd.runc.v2 May 8 00:50:31.376629 env[1212]: time="2025-05-08T00:50:31.376580099Z" level=info msg="StartContainer for \"ab494ee1ac5ddfdf02e72008009b77716e80c2952311a3afb983766adf0a296a\" returns successfully" May 8 00:50:31.390710 systemd[1]: Started cri-containerd-831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542.scope. May 8 00:50:31.432801 env[1212]: time="2025-05-08T00:50:31.432750072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n8759,Uid:6fb0cb8c-8a27-4c31-a8d7-513b80d42d93,Namespace:kube-system,Attempt:0,} returns sandbox id \"831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542\"" May 8 00:50:31.433670 kubelet[1899]: E0508 00:50:31.433614 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:31.886531 kubelet[1899]: E0508 00:50:31.886495 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:32.292757 kubelet[1899]: E0508 00:50:32.292467 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:32.293056 kubelet[1899]: E0508 00:50:32.292774 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:32.488410 kubelet[1899]: E0508 00:50:32.488034 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:32.504665 kubelet[1899]: I0508 00:50:32.504604 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gl4cn" podStartSLOduration=2.504586174 podStartE2EDuration="2.504586174s" podCreationTimestamp="2025-05-08 00:50:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:50:32.317871204 +0000 UTC m=+8.151920661" watchObservedRunningTime="2025-05-08 00:50:32.504586174 +0000 UTC m=+8.338635631" May 8 00:50:33.292526 kubelet[1899]: E0508 00:50:33.292372 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:34.293080 kubelet[1899]: E0508 00:50:34.293039 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:35.097391 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount219816133.mount: Deactivated successfully. May 8 00:50:37.416499 env[1212]: time="2025-05-08T00:50:37.416458134Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:37.418130 env[1212]: time="2025-05-08T00:50:37.418103430Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:37.420193 env[1212]: time="2025-05-08T00:50:37.420160860Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:37.420760 env[1212]: time="2025-05-08T00:50:37.420734422Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 8 00:50:37.423156 env[1212]: time="2025-05-08T00:50:37.423129155Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 8 00:50:37.426113 env[1212]: time="2025-05-08T00:50:37.426082684Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 8 00:50:37.441995 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount480606581.mount: Deactivated successfully. May 8 00:50:37.454340 env[1212]: time="2025-05-08T00:50:37.454299454Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\"" May 8 00:50:37.455983 env[1212]: time="2025-05-08T00:50:37.455958075Z" level=info msg="StartContainer for \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\"" May 8 00:50:37.480719 systemd[1]: Started cri-containerd-61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba.scope. May 8 00:50:37.563097 systemd[1]: cri-containerd-61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba.scope: Deactivated successfully. 
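Both PullImage requests above use the tag-plus-digest reference form repo:tag@sha256:…, where the digest, not the tag, pins exactly what is pulled; the runtime then reports the resolved local image ID (sha256:b69cb5eb… for the cilium image). A naive splitter for that reference format, which deliberately ignores the registry-port corner case:

def split_image_ref(ref: str) -> dict:
    """Split 'repo:tag@sha256:...' into parts (naive: no registry-port handling)."""
    name, _, digest = ref.partition("@")
    repo, _, tag = name.partition(":")
    return {"repo": repo, "tag": tag or None, "digest": digest or None}

print(split_image_ref(
    "quay.io/cilium/cilium:v1.12.5"
    "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"))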
May 8 00:50:37.592024 env[1212]: time="2025-05-08T00:50:37.591969341Z" level=info msg="StartContainer for \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\" returns successfully" May 8 00:50:37.634524 env[1212]: time="2025-05-08T00:50:37.634471552Z" level=info msg="shim disconnected" id=61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba May 8 00:50:37.634524 env[1212]: time="2025-05-08T00:50:37.634523894Z" level=warning msg="cleaning up after shim disconnected" id=61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba namespace=k8s.io May 8 00:50:37.634784 env[1212]: time="2025-05-08T00:50:37.634587641Z" level=info msg="cleaning up dead shim" May 8 00:50:37.642588 env[1212]: time="2025-05-08T00:50:37.642540883Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:50:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2326 runtime=io.containerd.runc.v2\n" May 8 00:50:38.304288 kubelet[1899]: E0508 00:50:38.304263 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:38.308319 env[1212]: time="2025-05-08T00:50:38.308271410Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 8 00:50:38.336644 env[1212]: time="2025-05-08T00:50:38.336599743Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\"" May 8 00:50:38.337155 env[1212]: time="2025-05-08T00:50:38.337115711Z" level=info msg="StartContainer for \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\"" May 8 00:50:38.350584 systemd[1]: Started cri-containerd-d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948.scope. May 8 00:50:38.389764 env[1212]: time="2025-05-08T00:50:38.389558245Z" level=info msg="StartContainer for \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\" returns successfully" May 8 00:50:38.398627 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 8 00:50:38.398875 systemd[1]: Stopped systemd-sysctl.service. May 8 00:50:38.399354 systemd[1]: Stopping systemd-sysctl.service... May 8 00:50:38.401069 systemd[1]: Starting systemd-sysctl.service... May 8 00:50:38.402300 systemd[1]: cri-containerd-d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948.scope: Deactivated successfully. May 8 00:50:38.409242 systemd[1]: Finished systemd-sysctl.service. 
May 8 00:50:38.421598 env[1212]: time="2025-05-08T00:50:38.421550209Z" level=info msg="shim disconnected" id=d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948 May 8 00:50:38.421598 env[1212]: time="2025-05-08T00:50:38.421598348Z" level=warning msg="cleaning up after shim disconnected" id=d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948 namespace=k8s.io May 8 00:50:38.421934 env[1212]: time="2025-05-08T00:50:38.421616916Z" level=info msg="cleaning up dead shim" May 8 00:50:38.428079 env[1212]: time="2025-05-08T00:50:38.428045657Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:50:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2392 runtime=io.containerd.runc.v2\n" May 8 00:50:38.440091 systemd[1]: run-containerd-runc-k8s.io-61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba-runc.BWsROz.mount: Deactivated successfully. May 8 00:50:38.440177 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba-rootfs.mount: Deactivated successfully. May 8 00:50:38.688116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3480503518.mount: Deactivated successfully. May 8 00:50:39.273294 env[1212]: time="2025-05-08T00:50:39.273247528Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.274510 env[1212]: time="2025-05-08T00:50:39.274474636Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.275935 env[1212]: time="2025-05-08T00:50:39.275902380Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 8 00:50:39.276654 env[1212]: time="2025-05-08T00:50:39.276615732Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 8 00:50:39.279027 env[1212]: time="2025-05-08T00:50:39.278984476Z" level=info msg="CreateContainer within sandbox \"831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 8 00:50:39.289708 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1403284919.mount: Deactivated successfully. May 8 00:50:39.293772 env[1212]: time="2025-05-08T00:50:39.293735463Z" level=info msg="CreateContainer within sandbox \"831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\"" May 8 00:50:39.295158 env[1212]: time="2025-05-08T00:50:39.294310523Z" level=info msg="StartContainer for \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\"" May 8 00:50:39.308521 systemd[1]: Started cri-containerd-8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f.scope. 
May 8 00:50:39.310052 kubelet[1899]: E0508 00:50:39.309801 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:39.314075 env[1212]: time="2025-05-08T00:50:39.313948614Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 8 00:50:39.330750 env[1212]: time="2025-05-08T00:50:39.330713129Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\"" May 8 00:50:39.331426 env[1212]: time="2025-05-08T00:50:39.331401952Z" level=info msg="StartContainer for \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\"" May 8 00:50:39.355240 systemd[1]: Started cri-containerd-30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8.scope. May 8 00:50:39.388264 env[1212]: time="2025-05-08T00:50:39.388198659Z" level=info msg="StartContainer for \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\" returns successfully" May 8 00:50:39.427289 env[1212]: time="2025-05-08T00:50:39.426559733Z" level=info msg="StartContainer for \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\" returns successfully" May 8 00:50:39.448816 systemd[1]: cri-containerd-30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8.scope: Deactivated successfully. May 8 00:50:39.468535 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8-rootfs.mount: Deactivated successfully. May 8 00:50:39.498950 env[1212]: time="2025-05-08T00:50:39.498905251Z" level=info msg="shim disconnected" id=30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8 May 8 00:50:39.499261 env[1212]: time="2025-05-08T00:50:39.499237417Z" level=warning msg="cleaning up after shim disconnected" id=30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8 namespace=k8s.io May 8 00:50:39.499334 env[1212]: time="2025-05-08T00:50:39.499320609Z" level=info msg="cleaning up dead shim" May 8 00:50:39.514758 env[1212]: time="2025-05-08T00:50:39.514720724Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:50:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2487 runtime=io.containerd.runc.v2\n" May 8 00:50:39.995543 update_engine[1204]: I0508 00:50:39.995489 1204 update_attempter.cc:509] Updating boot flags... 
May 8 00:50:40.312618 kubelet[1899]: E0508 00:50:40.312528 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:40.314698 env[1212]: time="2025-05-08T00:50:40.314657088Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 8 00:50:40.315659 kubelet[1899]: E0508 00:50:40.315595 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:40.326114 env[1212]: time="2025-05-08T00:50:40.326073909Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\"" May 8 00:50:40.328044 env[1212]: time="2025-05-08T00:50:40.328006610Z" level=info msg="StartContainer for \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\"" May 8 00:50:40.342619 kubelet[1899]: I0508 00:50:40.342572 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-n8759" podStartSLOduration=1.500237509 podStartE2EDuration="9.342547404s" podCreationTimestamp="2025-05-08 00:50:31 +0000 UTC" firstStartedPulling="2025-05-08 00:50:31.43517773 +0000 UTC m=+7.269227147" lastFinishedPulling="2025-05-08 00:50:39.277487585 +0000 UTC m=+15.111537042" observedRunningTime="2025-05-08 00:50:40.342298274 +0000 UTC m=+16.176347731" watchObservedRunningTime="2025-05-08 00:50:40.342547404 +0000 UTC m=+16.176596861" May 8 00:50:40.347041 systemd[1]: Started cri-containerd-8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109.scope. May 8 00:50:40.384092 env[1212]: time="2025-05-08T00:50:40.384033132Z" level=info msg="StartContainer for \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\" returns successfully" May 8 00:50:40.385596 systemd[1]: cri-containerd-8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109.scope: Deactivated successfully. May 8 00:50:40.404146 env[1212]: time="2025-05-08T00:50:40.404086606Z" level=info msg="shim disconnected" id=8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109 May 8 00:50:40.404146 env[1212]: time="2025-05-08T00:50:40.404133623Z" level=warning msg="cleaning up after shim disconnected" id=8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109 namespace=k8s.io May 8 00:50:40.404146 env[1212]: time="2025-05-08T00:50:40.404144067Z" level=info msg="cleaning up dead shim" May 8 00:50:40.410532 env[1212]: time="2025-05-08T00:50:40.410487408Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:50:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2555 runtime=io.containerd.runc.v2\n" May 8 00:50:40.444310 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109-rootfs.mount: Deactivated successfully. 
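The pod_startup_latency_tracker entry above for cilium-operator exposes the bookkeeping: end-to-end duration is observedRunningTime minus podCreationTimestamp, and the SLO figure appears to subtract the image-pull window (firstStartedPulling to lastFinishedPulling). A worked check against the logged values under exactly that assumption; it reproduces both printed durations to microsecond rounding:

from datetime import datetime

FMT = "%Y-%m-%d %H:%M:%S.%f"
created = datetime.strptime("2025-05-08 00:50:31.000000", FMT)
pull_a  = datetime.strptime("2025-05-08 00:50:31.435177", FMT)  # firstStartedPulling
pull_b  = datetime.strptime("2025-05-08 00:50:39.277487", FMT)  # lastFinishedPulling
running = datetime.strptime("2025-05-08 00:50:40.342547", FMT)  # observedRunningTime

e2e = (running - created).total_seconds()
slo = e2e - (pull_b - pull_a).total_seconds()
print(f"podStartE2EDuration ~ {e2e:.6f}s, podStartSLOduration ~ {slo:.6f}s")
# ~9.342547s and ~1.500237s, matching the logged 9.342547404s / 1.500237509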
May 8 00:50:41.324055 kubelet[1899]: E0508 00:50:41.324022 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:41.324389 kubelet[1899]: E0508 00:50:41.324331 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:41.325780 env[1212]: time="2025-05-08T00:50:41.325739413Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 8 00:50:41.345635 env[1212]: time="2025-05-08T00:50:41.345591265Z" level=info msg="CreateContainer within sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\"" May 8 00:50:41.346617 env[1212]: time="2025-05-08T00:50:41.346371975Z" level=info msg="StartContainer for \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\"" May 8 00:50:41.364484 systemd[1]: Started cri-containerd-96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986.scope. May 8 00:50:41.396624 env[1212]: time="2025-05-08T00:50:41.396572461Z" level=info msg="StartContainer for \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\" returns successfully" May 8 00:50:41.521683 kubelet[1899]: I0508 00:50:41.521650 1899 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 8 00:50:41.554845 systemd[1]: Created slice kubepods-burstable-pod959ccf4e_2bd6_41d2_9342_fb303fd9f74d.slice. May 8 00:50:41.560189 systemd[1]: Created slice kubepods-burstable-podc6ce9db9_d16c_4a7b_80db_56abf7291ec7.slice. 
May 8 00:50:41.656394 kubelet[1899]: I0508 00:50:41.656335 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/959ccf4e-2bd6-41d2-9342-fb303fd9f74d-config-volume\") pod \"coredns-6f6b679f8f-sd98x\" (UID: \"959ccf4e-2bd6-41d2-9342-fb303fd9f74d\") " pod="kube-system/coredns-6f6b679f8f-sd98x" May 8 00:50:41.656394 kubelet[1899]: I0508 00:50:41.656388 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c6ce9db9-d16c-4a7b-80db-56abf7291ec7-config-volume\") pod \"coredns-6f6b679f8f-v77sp\" (UID: \"c6ce9db9-d16c-4a7b-80db-56abf7291ec7\") " pod="kube-system/coredns-6f6b679f8f-v77sp" May 8 00:50:41.656587 kubelet[1899]: I0508 00:50:41.656407 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdz8w\" (UniqueName: \"kubernetes.io/projected/959ccf4e-2bd6-41d2-9342-fb303fd9f74d-kube-api-access-sdz8w\") pod \"coredns-6f6b679f8f-sd98x\" (UID: \"959ccf4e-2bd6-41d2-9342-fb303fd9f74d\") " pod="kube-system/coredns-6f6b679f8f-sd98x" May 8 00:50:41.656587 kubelet[1899]: I0508 00:50:41.656450 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bfzt\" (UniqueName: \"kubernetes.io/projected/c6ce9db9-d16c-4a7b-80db-56abf7291ec7-kube-api-access-6bfzt\") pod \"coredns-6f6b679f8f-v77sp\" (UID: \"c6ce9db9-d16c-4a7b-80db-56abf7291ec7\") " pod="kube-system/coredns-6f6b679f8f-v77sp" May 8 00:50:41.759454 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 8 00:50:41.858233 kubelet[1899]: E0508 00:50:41.858199 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:41.858974 env[1212]: time="2025-05-08T00:50:41.858936566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sd98x,Uid:959ccf4e-2bd6-41d2-9342-fb303fd9f74d,Namespace:kube-system,Attempt:0,}" May 8 00:50:41.862780 kubelet[1899]: E0508 00:50:41.862739 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:41.863257 env[1212]: time="2025-05-08T00:50:41.863208040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v77sp,Uid:c6ce9db9-d16c-4a7b-80db-56abf7291ec7,Namespace:kube-system,Attempt:0,}" May 8 00:50:41.996456 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
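The two kernel warnings above about unprivileged eBPF refer to a real sysctl, kernel.unprivileged_bpf_disabled, and are logged here while Cilium is loading its BPF datapath: with the Spectre v2 BHB mitigation in effect, the kernel flags any configuration that still allows unprivileged bpf(). A quick check of the knob (0 means enabled, 1 disabled and locked until reboot, 2 disabled but changeable by an admin):

from pathlib import Path

knob = Path("/proc/sys/kernel/unprivileged_bpf_disabled")
value = knob.read_text().strip() if knob.exists() else None
meaning = {"0": "unprivileged eBPF enabled (the warning above fires)",
           "1": "disabled, locked until reboot",
           "2": "disabled, but an admin may re-enable it"}
print(meaning.get(value, f"state unknown ({value!r})"))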
May 8 00:50:42.328962 kubelet[1899]: E0508 00:50:42.328527 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:42.343619 kubelet[1899]: I0508 00:50:42.343563 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j68hn" podStartSLOduration=6.21079149 podStartE2EDuration="12.343539811s" podCreationTimestamp="2025-05-08 00:50:30 +0000 UTC" firstStartedPulling="2025-05-08 00:50:31.289693422 +0000 UTC m=+7.123742920" lastFinishedPulling="2025-05-08 00:50:37.422441784 +0000 UTC m=+13.256491241" observedRunningTime="2025-05-08 00:50:42.343134638 +0000 UTC m=+18.177184055" watchObservedRunningTime="2025-05-08 00:50:42.343539811 +0000 UTC m=+18.177589268" May 8 00:50:43.330860 kubelet[1899]: E0508 00:50:43.330828 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:43.608909 systemd-networkd[1042]: cilium_host: Link UP May 8 00:50:43.610472 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready May 8 00:50:43.610521 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 8 00:50:43.609007 systemd-networkd[1042]: cilium_net: Link UP May 8 00:50:43.609503 systemd-networkd[1042]: cilium_net: Gained carrier May 8 00:50:43.610708 systemd-networkd[1042]: cilium_host: Gained carrier May 8 00:50:43.687859 systemd-networkd[1042]: cilium_vxlan: Link UP May 8 00:50:43.687864 systemd-networkd[1042]: cilium_vxlan: Gained carrier May 8 00:50:43.968485 kernel: NET: Registered PF_ALG protocol family May 8 00:50:44.331886 kubelet[1899]: E0508 00:50:44.331857 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:44.377560 systemd-networkd[1042]: cilium_net: Gained IPv6LL May 8 00:50:44.441526 systemd-networkd[1042]: cilium_host: Gained IPv6LL May 8 00:50:44.545996 systemd-networkd[1042]: lxc_health: Link UP May 8 00:50:44.560079 systemd-networkd[1042]: lxc_health: Gained carrier May 8 00:50:44.560528 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 8 00:50:44.925072 systemd-networkd[1042]: lxc13f143be8d99: Link UP May 8 00:50:44.934458 kernel: eth0: renamed from tmp81ec0 May 8 00:50:44.941299 systemd-networkd[1042]: lxc308e6f002aee: Link UP May 8 00:50:44.947714 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 8 00:50:44.947784 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc13f143be8d99: link becomes ready May 8 00:50:44.947810 kernel: eth0: renamed from tmpda9c4 May 8 00:50:44.947758 systemd-networkd[1042]: lxc13f143be8d99: Gained carrier May 8 00:50:44.953015 systemd-networkd[1042]: lxc308e6f002aee: Gained carrier May 8 00:50:44.953495 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc308e6f002aee: link becomes ready May 8 00:50:45.333399 kubelet[1899]: E0508 00:50:45.333292 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:45.529836 systemd-networkd[1042]: cilium_vxlan: Gained IPv6LL May 8 00:50:46.105576 systemd-networkd[1042]: lxc13f143be8d99: Gained IPv6LL May 8 00:50:46.297625 systemd-networkd[1042]: lxc308e6f002aee: Gained IPv6LL May 8 00:50:46.335017 kubelet[1899]: 
E0508 00:50:46.334985 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:46.490560 systemd-networkd[1042]: lxc_health: Gained IPv6LL May 8 00:50:47.336720 kubelet[1899]: E0508 00:50:47.336683 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:48.503687 env[1212]: time="2025-05-08T00:50:48.503623430Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:48.504160 env[1212]: time="2025-05-08T00:50:48.503664360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:48.504160 env[1212]: time="2025-05-08T00:50:48.503674723Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:48.504160 env[1212]: time="2025-05-08T00:50:48.504121074Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/da9c466d6ca36fb207e653274663f909539c6145e027d86f90a9fcfbcbb57c35 pid=3114 runtime=io.containerd.runc.v2 May 8 00:50:48.507624 env[1212]: time="2025-05-08T00:50:48.505032301Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 8 00:50:48.507624 env[1212]: time="2025-05-08T00:50:48.505073232Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 8 00:50:48.507624 env[1212]: time="2025-05-08T00:50:48.505083114Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 8 00:50:48.507624 env[1212]: time="2025-05-08T00:50:48.505258558Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81ec0dc43baf00c3094d6e14b9adfea003abdc363eddef6aff1a500ee132614c pid=3122 runtime=io.containerd.runc.v2 May 8 00:50:48.518573 systemd[1]: Started cri-containerd-da9c466d6ca36fb207e653274663f909539c6145e027d86f90a9fcfbcbb57c35.scope. May 8 00:50:48.540407 systemd[1]: Started cri-containerd-81ec0dc43baf00c3094d6e14b9adfea003abdc363eddef6aff1a500ee132614c.scope. 
May 8 00:50:48.575665 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:50:48.585868 systemd-resolved[1153]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 8 00:50:48.593683 env[1212]: time="2025-05-08T00:50:48.593646072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-sd98x,Uid:959ccf4e-2bd6-41d2-9342-fb303fd9f74d,Namespace:kube-system,Attempt:0,} returns sandbox id \"da9c466d6ca36fb207e653274663f909539c6145e027d86f90a9fcfbcbb57c35\"" May 8 00:50:48.594391 kubelet[1899]: E0508 00:50:48.594365 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:48.597777 env[1212]: time="2025-05-08T00:50:48.597740933Z" level=info msg="CreateContainer within sandbox \"da9c466d6ca36fb207e653274663f909539c6145e027d86f90a9fcfbcbb57c35\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:50:48.608082 env[1212]: time="2025-05-08T00:50:48.608047702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-v77sp,Uid:c6ce9db9-d16c-4a7b-80db-56abf7291ec7,Namespace:kube-system,Attempt:0,} returns sandbox id \"81ec0dc43baf00c3094d6e14b9adfea003abdc363eddef6aff1a500ee132614c\"" May 8 00:50:48.608840 kubelet[1899]: E0508 00:50:48.608814 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:48.610508 env[1212]: time="2025-05-08T00:50:48.610478388Z" level=info msg="CreateContainer within sandbox \"81ec0dc43baf00c3094d6e14b9adfea003abdc363eddef6aff1a500ee132614c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 8 00:50:48.613694 env[1212]: time="2025-05-08T00:50:48.613659781Z" level=info msg="CreateContainer within sandbox \"da9c466d6ca36fb207e653274663f909539c6145e027d86f90a9fcfbcbb57c35\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae6f69dd226233591f8c59607dd5066324aaf1401fed9f9d34a2ecd40ecd0cb9\"" May 8 00:50:48.614908 env[1212]: time="2025-05-08T00:50:48.614882806Z" level=info msg="StartContainer for \"ae6f69dd226233591f8c59607dd5066324aaf1401fed9f9d34a2ecd40ecd0cb9\"" May 8 00:50:48.627802 env[1212]: time="2025-05-08T00:50:48.627753575Z" level=info msg="CreateContainer within sandbox \"81ec0dc43baf00c3094d6e14b9adfea003abdc363eddef6aff1a500ee132614c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"54ec94540645fa66ebdd5636c0c41694924117c3875aea98491f36512e1e2d77\"" May 8 00:50:48.628505 env[1212]: time="2025-05-08T00:50:48.628331679Z" level=info msg="StartContainer for \"54ec94540645fa66ebdd5636c0c41694924117c3875aea98491f36512e1e2d77\"" May 8 00:50:48.631645 systemd[1]: Started cri-containerd-ae6f69dd226233591f8c59607dd5066324aaf1401fed9f9d34a2ecd40ecd0cb9.scope. May 8 00:50:48.646084 systemd[1]: Started cri-containerd-54ec94540645fa66ebdd5636c0c41694924117c3875aea98491f36512e1e2d77.scope. 
May 8 00:50:48.673314 env[1212]: time="2025-05-08T00:50:48.672369137Z" level=info msg="StartContainer for \"ae6f69dd226233591f8c59607dd5066324aaf1401fed9f9d34a2ecd40ecd0cb9\" returns successfully" May 8 00:50:48.684664 env[1212]: time="2025-05-08T00:50:48.684597226Z" level=info msg="StartContainer for \"54ec94540645fa66ebdd5636c0c41694924117c3875aea98491f36512e1e2d77\" returns successfully" May 8 00:50:49.345857 kubelet[1899]: E0508 00:50:49.345791 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:49.352947 kubelet[1899]: E0508 00:50:49.352905 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:49.371552 kubelet[1899]: I0508 00:50:49.371482 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-v77sp" podStartSLOduration=18.37146516 podStartE2EDuration="18.37146516s" podCreationTimestamp="2025-05-08 00:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:50:49.369456721 +0000 UTC m=+25.203506178" watchObservedRunningTime="2025-05-08 00:50:49.37146516 +0000 UTC m=+25.205514617" May 8 00:50:49.381504 kubelet[1899]: I0508 00:50:49.381424 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-sd98x" podStartSLOduration=18.381407575 podStartE2EDuration="18.381407575s" podCreationTimestamp="2025-05-08 00:50:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:50:49.381198205 +0000 UTC m=+25.215247662" watchObservedRunningTime="2025-05-08 00:50:49.381407575 +0000 UTC m=+25.215457032" May 8 00:50:49.740893 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:42246.service. May 8 00:50:49.780983 sshd[3271]: Accepted publickey for core from 10.0.0.1 port 42246 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:50:49.782182 sshd[3271]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:50:49.785214 systemd-logind[1202]: New session 6 of user core. May 8 00:50:49.786019 systemd[1]: Started session-6.scope. May 8 00:50:49.903061 sshd[3271]: pam_unix(sshd:session): session closed for user core May 8 00:50:49.905294 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:42246.service: Deactivated successfully. May 8 00:50:49.906078 systemd[1]: session-6.scope: Deactivated successfully. May 8 00:50:49.906560 systemd-logind[1202]: Session 6 logged out. Waiting for processes to exit. May 8 00:50:49.907153 systemd-logind[1202]: Removed session 6. 
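The sshd and systemd-logind lines above trace one complete SSH session lifecycle: publickey accept, pam_unix session open, session-6.scope start, and teardown about 0.12 s later once the remote command finishes. A tiny duration helper with the timestamps copied from session 6; the pairing itself is illustrative tooling, not part of any logged component:

from datetime import datetime

FMT = "%b %d %H:%M:%S.%f"

def session_seconds(opened: str, closed: str) -> float:
    return (datetime.strptime(closed, FMT) - datetime.strptime(opened, FMT)).total_seconds()

print(f"session 6 lasted {session_seconds('May 8 00:50:49.786019', 'May 8 00:50:49.903061'):.3f}s")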
May 8 00:50:50.354189 kubelet[1899]: E0508 00:50:50.354155 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:50.356699 kubelet[1899]: E0508 00:50:50.356599 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:51.356300 kubelet[1899]: E0508 00:50:51.356260 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:51.356768 kubelet[1899]: E0508 00:50:51.356748 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 8 00:50:54.909135 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:36874.service. May 8 00:50:54.945589 sshd[3286]: Accepted publickey for core from 10.0.0.1 port 36874 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:50:54.946783 sshd[3286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:50:54.950070 systemd-logind[1202]: New session 7 of user core. May 8 00:50:54.950965 systemd[1]: Started session-7.scope. May 8 00:50:55.060652 sshd[3286]: pam_unix(sshd:session): session closed for user core May 8 00:50:55.063458 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:36874.service: Deactivated successfully. May 8 00:50:55.064275 systemd[1]: session-7.scope: Deactivated successfully. May 8 00:50:55.064809 systemd-logind[1202]: Session 7 logged out. Waiting for processes to exit. May 8 00:50:55.065588 systemd-logind[1202]: Removed session 7. May 8 00:51:00.066631 systemd[1]: Started sshd@7-10.0.0.116:22-10.0.0.1:36878.service. May 8 00:51:00.102863 sshd[3300]: Accepted publickey for core from 10.0.0.1 port 36878 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:00.103890 sshd[3300]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:00.106941 systemd-logind[1202]: New session 8 of user core. May 8 00:51:00.107741 systemd[1]: Started session-8.scope. May 8 00:51:00.221661 sshd[3300]: pam_unix(sshd:session): session closed for user core May 8 00:51:00.224205 systemd[1]: sshd@7-10.0.0.116:22-10.0.0.1:36878.service: Deactivated successfully. May 8 00:51:00.224977 systemd[1]: session-8.scope: Deactivated successfully. May 8 00:51:00.225457 systemd-logind[1202]: Session 8 logged out. Waiting for processes to exit. May 8 00:51:00.226203 systemd-logind[1202]: Removed session 8. May 8 00:51:05.227119 systemd[1]: Started sshd@8-10.0.0.116:22-10.0.0.1:59716.service. May 8 00:51:05.269084 sshd[3317]: Accepted publickey for core from 10.0.0.1 port 59716 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:05.270602 sshd[3317]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:05.274013 systemd-logind[1202]: New session 9 of user core. May 8 00:51:05.274859 systemd[1]: Started session-9.scope. May 8 00:51:05.383642 sshd[3317]: pam_unix(sshd:session): session closed for user core May 8 00:51:05.387378 systemd[1]: Started sshd@9-10.0.0.116:22-10.0.0.1:59718.service. May 8 00:51:05.388989 systemd[1]: session-9.scope: Deactivated successfully. 
May 8 00:51:05.389947 systemd[1]: sshd@8-10.0.0.116:22-10.0.0.1:59716.service: Deactivated successfully. May 8 00:51:05.390711 systemd-logind[1202]: Session 9 logged out. Waiting for processes to exit. May 8 00:51:05.391313 systemd-logind[1202]: Removed session 9. May 8 00:51:05.423514 sshd[3330]: Accepted publickey for core from 10.0.0.1 port 59718 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:05.424640 sshd[3330]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:05.428471 systemd-logind[1202]: New session 10 of user core. May 8 00:51:05.428807 systemd[1]: Started session-10.scope. May 8 00:51:05.591213 sshd[3330]: pam_unix(sshd:session): session closed for user core May 8 00:51:05.595698 systemd[1]: Started sshd@10-10.0.0.116:22-10.0.0.1:59734.service. May 8 00:51:05.607572 systemd-logind[1202]: Session 10 logged out. Waiting for processes to exit. May 8 00:51:05.607891 systemd[1]: sshd@9-10.0.0.116:22-10.0.0.1:59718.service: Deactivated successfully. May 8 00:51:05.608904 systemd[1]: session-10.scope: Deactivated successfully. May 8 00:51:05.613588 systemd-logind[1202]: Removed session 10. May 8 00:51:05.642738 sshd[3342]: Accepted publickey for core from 10.0.0.1 port 59734 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:05.644270 sshd[3342]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:05.648067 systemd-logind[1202]: New session 11 of user core. May 8 00:51:05.648989 systemd[1]: Started session-11.scope. May 8 00:51:05.763782 sshd[3342]: pam_unix(sshd:session): session closed for user core May 8 00:51:05.766384 systemd[1]: sshd@10-10.0.0.116:22-10.0.0.1:59734.service: Deactivated successfully. May 8 00:51:05.767196 systemd[1]: session-11.scope: Deactivated successfully. May 8 00:51:05.768073 systemd-logind[1202]: Session 11 logged out. Waiting for processes to exit. May 8 00:51:05.768899 systemd-logind[1202]: Removed session 11. May 8 00:51:10.768745 systemd[1]: Started sshd@11-10.0.0.116:22-10.0.0.1:59740.service. May 8 00:51:10.804851 sshd[3358]: Accepted publickey for core from 10.0.0.1 port 59740 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:10.806211 sshd[3358]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:10.809850 systemd-logind[1202]: New session 12 of user core. May 8 00:51:10.810765 systemd[1]: Started session-12.scope. May 8 00:51:10.919987 sshd[3358]: pam_unix(sshd:session): session closed for user core May 8 00:51:10.922383 systemd[1]: sshd@11-10.0.0.116:22-10.0.0.1:59740.service: Deactivated successfully. May 8 00:51:10.923210 systemd[1]: session-12.scope: Deactivated successfully. May 8 00:51:10.923754 systemd-logind[1202]: Session 12 logged out. Waiting for processes to exit. May 8 00:51:10.924577 systemd-logind[1202]: Removed session 12. May 8 00:51:15.925804 systemd[1]: Started sshd@12-10.0.0.116:22-10.0.0.1:39292.service. May 8 00:51:15.961583 sshd[3372]: Accepted publickey for core from 10.0.0.1 port 39292 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:15.963019 sshd[3372]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:15.966111 systemd-logind[1202]: New session 13 of user core. May 8 00:51:15.966945 systemd[1]: Started session-13.scope. 
May 8 00:51:16.072166 sshd[3372]: pam_unix(sshd:session): session closed for user core May 8 00:51:16.075042 systemd[1]: sshd@12-10.0.0.116:22-10.0.0.1:39292.service: Deactivated successfully. May 8 00:51:16.075737 systemd[1]: session-13.scope: Deactivated successfully. May 8 00:51:16.076263 systemd-logind[1202]: Session 13 logged out. Waiting for processes to exit. May 8 00:51:16.077326 systemd[1]: Started sshd@13-10.0.0.116:22-10.0.0.1:39308.service. May 8 00:51:16.077994 systemd-logind[1202]: Removed session 13. May 8 00:51:16.114227 sshd[3385]: Accepted publickey for core from 10.0.0.1 port 39308 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:16.115383 sshd[3385]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:16.118728 systemd-logind[1202]: New session 14 of user core. May 8 00:51:16.119500 systemd[1]: Started session-14.scope. May 8 00:51:16.338287 sshd[3385]: pam_unix(sshd:session): session closed for user core May 8 00:51:16.341226 systemd[1]: sshd@13-10.0.0.116:22-10.0.0.1:39308.service: Deactivated successfully. May 8 00:51:16.341835 systemd[1]: session-14.scope: Deactivated successfully. May 8 00:51:16.342378 systemd-logind[1202]: Session 14 logged out. Waiting for processes to exit. May 8 00:51:16.343534 systemd[1]: Started sshd@14-10.0.0.116:22-10.0.0.1:39312.service. May 8 00:51:16.344218 systemd-logind[1202]: Removed session 14. May 8 00:51:16.383517 sshd[3396]: Accepted publickey for core from 10.0.0.1 port 39312 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:16.384822 sshd[3396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:16.388384 systemd-logind[1202]: New session 15 of user core. May 8 00:51:16.389250 systemd[1]: Started session-15.scope. May 8 00:51:17.689296 sshd[3396]: pam_unix(sshd:session): session closed for user core May 8 00:51:17.696414 systemd[1]: Started sshd@15-10.0.0.116:22-10.0.0.1:39320.service. May 8 00:51:17.696940 systemd[1]: sshd@14-10.0.0.116:22-10.0.0.1:39312.service: Deactivated successfully. May 8 00:51:17.697734 systemd[1]: session-15.scope: Deactivated successfully. May 8 00:51:17.699314 systemd-logind[1202]: Session 15 logged out. Waiting for processes to exit. May 8 00:51:17.702052 systemd-logind[1202]: Removed session 15. May 8 00:51:17.733033 sshd[3416]: Accepted publickey for core from 10.0.0.1 port 39320 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:17.734395 sshd[3416]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:17.738508 systemd[1]: Started session-16.scope. May 8 00:51:17.738610 systemd-logind[1202]: New session 16 of user core. May 8 00:51:17.963190 sshd[3416]: pam_unix(sshd:session): session closed for user core May 8 00:51:17.966564 systemd[1]: Started sshd@16-10.0.0.116:22-10.0.0.1:39332.service. May 8 00:51:17.969021 systemd-logind[1202]: Session 16 logged out. Waiting for processes to exit. May 8 00:51:17.969653 systemd[1]: sshd@15-10.0.0.116:22-10.0.0.1:39320.service: Deactivated successfully. May 8 00:51:17.970273 systemd[1]: session-16.scope: Deactivated successfully. May 8 00:51:17.970995 systemd-logind[1202]: Removed session 16. 
May 8 00:51:18.007072 sshd[3429]: Accepted publickey for core from 10.0.0.1 port 39332 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:18.008801 sshd[3429]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:18.012413 systemd-logind[1202]: New session 17 of user core. May 8 00:51:18.012878 systemd[1]: Started session-17.scope. May 8 00:51:18.133821 sshd[3429]: pam_unix(sshd:session): session closed for user core May 8 00:51:18.136801 systemd[1]: sshd@16-10.0.0.116:22-10.0.0.1:39332.service: Deactivated successfully. May 8 00:51:18.137527 systemd[1]: session-17.scope: Deactivated successfully. May 8 00:51:18.138024 systemd-logind[1202]: Session 17 logged out. Waiting for processes to exit. May 8 00:51:18.139455 systemd-logind[1202]: Removed session 17. May 8 00:51:23.138785 systemd[1]: Started sshd@17-10.0.0.116:22-10.0.0.1:36214.service. May 8 00:51:23.181715 sshd[3443]: Accepted publickey for core from 10.0.0.1 port 36214 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:23.183598 sshd[3443]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:23.187138 systemd-logind[1202]: New session 18 of user core. May 8 00:51:23.188070 systemd[1]: Started session-18.scope. May 8 00:51:23.297544 sshd[3443]: pam_unix(sshd:session): session closed for user core May 8 00:51:23.300383 systemd[1]: sshd@17-10.0.0.116:22-10.0.0.1:36214.service: Deactivated successfully. May 8 00:51:23.301147 systemd[1]: session-18.scope: Deactivated successfully. May 8 00:51:23.301765 systemd-logind[1202]: Session 18 logged out. Waiting for processes to exit. May 8 00:51:23.302562 systemd-logind[1202]: Removed session 18. May 8 00:51:28.302888 systemd[1]: Started sshd@18-10.0.0.116:22-10.0.0.1:36224.service. May 8 00:51:28.339128 sshd[3461]: Accepted publickey for core from 10.0.0.1 port 36224 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:28.340461 sshd[3461]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:28.343909 systemd-logind[1202]: New session 19 of user core. May 8 00:51:28.344862 systemd[1]: Started session-19.scope. May 8 00:51:28.454080 sshd[3461]: pam_unix(sshd:session): session closed for user core May 8 00:51:28.457174 systemd[1]: sshd@18-10.0.0.116:22-10.0.0.1:36224.service: Deactivated successfully. May 8 00:51:28.457931 systemd[1]: session-19.scope: Deactivated successfully. May 8 00:51:28.458636 systemd-logind[1202]: Session 19 logged out. Waiting for processes to exit. May 8 00:51:28.459481 systemd-logind[1202]: Removed session 19. May 8 00:51:33.457565 systemd[1]: Started sshd@19-10.0.0.116:22-10.0.0.1:59150.service. May 8 00:51:33.493951 sshd[3476]: Accepted publickey for core from 10.0.0.1 port 59150 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:33.495145 sshd[3476]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:33.498143 systemd-logind[1202]: New session 20 of user core. May 8 00:51:33.498954 systemd[1]: Started session-20.scope. May 8 00:51:33.634075 sshd[3476]: pam_unix(sshd:session): session closed for user core May 8 00:51:33.636667 systemd[1]: sshd@19-10.0.0.116:22-10.0.0.1:59150.service: Deactivated successfully. May 8 00:51:33.637369 systemd[1]: session-20.scope: Deactivated successfully. May 8 00:51:33.637891 systemd-logind[1202]: Session 20 logged out. Waiting for processes to exit. May 8 00:51:33.638639 systemd-logind[1202]: Removed session 20. 
May 8 00:51:38.638191 systemd[1]: Started sshd@20-10.0.0.116:22-10.0.0.1:59156.service. May 8 00:51:38.674338 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 59156 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:38.675843 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:38.679748 systemd-logind[1202]: New session 21 of user core. May 8 00:51:38.680541 systemd[1]: Started session-21.scope. May 8 00:51:38.787609 sshd[3490]: pam_unix(sshd:session): session closed for user core May 8 00:51:38.792242 systemd[1]: sshd@20-10.0.0.116:22-10.0.0.1:59156.service: Deactivated successfully. May 8 00:51:38.792880 systemd[1]: session-21.scope: Deactivated successfully. May 8 00:51:38.793368 systemd-logind[1202]: Session 21 logged out. Waiting for processes to exit. May 8 00:51:38.794508 systemd[1]: Started sshd@21-10.0.0.116:22-10.0.0.1:59162.service. May 8 00:51:38.795243 systemd-logind[1202]: Removed session 21. May 8 00:51:38.832770 sshd[3503]: Accepted publickey for core from 10.0.0.1 port 59162 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:38.833984 sshd[3503]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:38.837223 systemd-logind[1202]: New session 22 of user core. May 8 00:51:38.838072 systemd[1]: Started session-22.scope. May 8 00:51:40.967269 env[1212]: time="2025-05-08T00:51:40.967213034Z" level=info msg="StopContainer for \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\" with timeout 30 (s)" May 8 00:51:40.967668 env[1212]: time="2025-05-08T00:51:40.967633842Z" level=info msg="Stop container \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\" with signal terminated" May 8 00:51:40.978835 systemd[1]: cri-containerd-8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f.scope: Deactivated successfully. May 8 00:51:40.980928 systemd[1]: run-containerd-runc-k8s.io-96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986-runc.3Fu05A.mount: Deactivated successfully. May 8 00:51:41.001584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f-rootfs.mount: Deactivated successfully. 
May 8 00:51:41.006893 env[1212]: time="2025-05-08T00:51:41.006839116Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 8 00:51:41.011030 env[1212]: time="2025-05-08T00:51:41.010987336Z" level=info msg="shim disconnected" id=8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f May 8 00:51:41.011119 env[1212]: time="2025-05-08T00:51:41.011032613Z" level=warning msg="cleaning up after shim disconnected" id=8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f namespace=k8s.io May 8 00:51:41.011119 env[1212]: time="2025-05-08T00:51:41.011046092Z" level=info msg="cleaning up dead shim" May 8 00:51:41.012898 env[1212]: time="2025-05-08T00:51:41.012859441Z" level=info msg="StopContainer for \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\" with timeout 2 (s)" May 8 00:51:41.013307 env[1212]: time="2025-05-08T00:51:41.013276811Z" level=info msg="Stop container \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\" with signal terminated" May 8 00:51:41.019777 systemd-networkd[1042]: lxc_health: Link DOWN May 8 00:51:41.019783 systemd-networkd[1042]: lxc_health: Lost carrier May 8 00:51:41.020237 env[1212]: time="2025-05-08T00:51:41.020209591Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3551 runtime=io.containerd.runc.v2\n" May 8 00:51:41.022272 env[1212]: time="2025-05-08T00:51:41.022232446Z" level=info msg="StopContainer for \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\" returns successfully" May 8 00:51:41.022923 env[1212]: time="2025-05-08T00:51:41.022894438Z" level=info msg="StopPodSandbox for \"831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542\"" May 8 00:51:41.023074 env[1212]: time="2025-05-08T00:51:41.023051826Z" level=info msg="Container to stop \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.026120 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542-shm.mount: Deactivated successfully. May 8 00:51:41.028559 systemd[1]: cri-containerd-831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542.scope: Deactivated successfully. May 8 00:51:41.049819 systemd[1]: cri-containerd-96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986.scope: Deactivated successfully. May 8 00:51:41.050133 systemd[1]: cri-containerd-96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986.scope: Consumed 6.452s CPU time. 
May 8 00:51:41.067077 env[1212]: time="2025-05-08T00:51:41.067022175Z" level=info msg="shim disconnected" id=831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542 May 8 00:51:41.067077 env[1212]: time="2025-05-08T00:51:41.067075172Z" level=warning msg="cleaning up after shim disconnected" id=831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542 namespace=k8s.io May 8 00:51:41.067077 env[1212]: time="2025-05-08T00:51:41.067084971Z" level=info msg="cleaning up dead shim" May 8 00:51:41.067677 env[1212]: time="2025-05-08T00:51:41.067640851Z" level=info msg="shim disconnected" id=96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986 May 8 00:51:41.067749 env[1212]: time="2025-05-08T00:51:41.067679088Z" level=warning msg="cleaning up after shim disconnected" id=96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986 namespace=k8s.io May 8 00:51:41.067749 env[1212]: time="2025-05-08T00:51:41.067688087Z" level=info msg="cleaning up dead shim" May 8 00:51:41.074820 env[1212]: time="2025-05-08T00:51:41.074773776Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3604 runtime=io.containerd.runc.v2\n" May 8 00:51:41.075760 env[1212]: time="2025-05-08T00:51:41.075725068Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3603 runtime=io.containerd.runc.v2\n" May 8 00:51:41.076076 env[1212]: time="2025-05-08T00:51:41.076052084Z" level=info msg="TearDown network for sandbox \"831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542\" successfully" May 8 00:51:41.076116 env[1212]: time="2025-05-08T00:51:41.076078402Z" level=info msg="StopPodSandbox for \"831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542\" returns successfully" May 8 00:51:41.080852 env[1212]: time="2025-05-08T00:51:41.080808261Z" level=info msg="StopContainer for \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\" returns successfully" May 8 00:51:41.081216 env[1212]: time="2025-05-08T00:51:41.081181154Z" level=info msg="StopPodSandbox for \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\"" May 8 00:51:41.081282 env[1212]: time="2025-05-08T00:51:41.081248349Z" level=info msg="Container to stop \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.081282 env[1212]: time="2025-05-08T00:51:41.081264988Z" level=info msg="Container to stop \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.081282 env[1212]: time="2025-05-08T00:51:41.081276147Z" level=info msg="Container to stop \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.081365 env[1212]: time="2025-05-08T00:51:41.081287107Z" level=info msg="Container to stop \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.081365 env[1212]: time="2025-05-08T00:51:41.081297986Z" level=info msg="Container to stop \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 8 00:51:41.086474 systemd[1]: 
cri-containerd-1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4.scope: Deactivated successfully. May 8 00:51:41.111880 env[1212]: time="2025-05-08T00:51:41.111827464Z" level=info msg="shim disconnected" id=1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4 May 8 00:51:41.112587 env[1212]: time="2025-05-08T00:51:41.112560811Z" level=warning msg="cleaning up after shim disconnected" id=1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4 namespace=k8s.io May 8 00:51:41.112687 env[1212]: time="2025-05-08T00:51:41.112672123Z" level=info msg="cleaning up dead shim" May 8 00:51:41.119971 env[1212]: time="2025-05-08T00:51:41.119936039Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3646 runtime=io.containerd.runc.v2\n" May 8 00:51:41.120419 env[1212]: time="2025-05-08T00:51:41.120388487Z" level=info msg="TearDown network for sandbox \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" successfully" May 8 00:51:41.120702 env[1212]: time="2025-05-08T00:51:41.120675826Z" level=info msg="StopPodSandbox for \"1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4\" returns successfully" May 8 00:51:41.122197 kubelet[1899]: I0508 00:51:41.122059 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ghj4z\" (UniqueName: \"kubernetes.io/projected/6fb0cb8c-8a27-4c31-a8d7-513b80d42d93-kube-api-access-ghj4z\") pod \"6fb0cb8c-8a27-4c31-a8d7-513b80d42d93\" (UID: \"6fb0cb8c-8a27-4c31-a8d7-513b80d42d93\") " May 8 00:51:41.122197 kubelet[1899]: I0508 00:51:41.122107 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fb0cb8c-8a27-4c31-a8d7-513b80d42d93-cilium-config-path\") pod \"6fb0cb8c-8a27-4c31-a8d7-513b80d42d93\" (UID: \"6fb0cb8c-8a27-4c31-a8d7-513b80d42d93\") " May 8 00:51:41.126237 kubelet[1899]: I0508 00:51:41.126201 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fb0cb8c-8a27-4c31-a8d7-513b80d42d93-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "6fb0cb8c-8a27-4c31-a8d7-513b80d42d93" (UID: "6fb0cb8c-8a27-4c31-a8d7-513b80d42d93"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:51:41.130641 kubelet[1899]: I0508 00:51:41.130601 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fb0cb8c-8a27-4c31-a8d7-513b80d42d93-kube-api-access-ghj4z" (OuterVolumeSpecName: "kube-api-access-ghj4z") pod "6fb0cb8c-8a27-4c31-a8d7-513b80d42d93" (UID: "6fb0cb8c-8a27-4c31-a8d7-513b80d42d93"). InnerVolumeSpecName "kube-api-access-ghj4z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:51:41.224304 kubelet[1899]: I0508 00:51:41.222741 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-xtables-lock\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224304 kubelet[1899]: I0508 00:51:41.223422 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-hostproc\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224304 kubelet[1899]: I0508 00:51:41.222865 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.224304 kubelet[1899]: I0508 00:51:41.223466 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-host-proc-sys-kernel\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224304 kubelet[1899]: I0508 00:51:41.223488 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-lib-modules\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224304 kubelet[1899]: I0508 00:51:41.223507 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-bpf-maps\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224584 kubelet[1899]: I0508 00:51:41.223507 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-hostproc" (OuterVolumeSpecName: "hostproc") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.224584 kubelet[1899]: I0508 00:51:41.223524 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-etc-cni-netd\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224584 kubelet[1899]: I0508 00:51:41.223540 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-host-proc-sys-net\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224584 kubelet[1899]: I0508 00:51:41.223559 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-config-path\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224584 kubelet[1899]: I0508 00:51:41.223565 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.224704 kubelet[1899]: I0508 00:51:41.223579 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nh7jz\" (UniqueName: \"kubernetes.io/projected/9b2edf20-f78b-4920-b99a-8341ff411f0d-kube-api-access-nh7jz\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224704 kubelet[1899]: I0508 00:51:41.223584 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.224704 kubelet[1899]: I0508 00:51:41.223594 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-cgroup\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224704 kubelet[1899]: I0508 00:51:41.223600 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.224704 kubelet[1899]: I0508 00:51:41.223612 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b2edf20-f78b-4920-b99a-8341ff411f0d-hubble-tls\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224823 kubelet[1899]: I0508 00:51:41.223615 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.224823 kubelet[1899]: I0508 00:51:41.223628 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-run\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224823 kubelet[1899]: I0508 00:51:41.223635 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.224823 kubelet[1899]: I0508 00:51:41.223646 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b2edf20-f78b-4920-b99a-8341ff411f0d-clustermesh-secrets\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224823 kubelet[1899]: I0508 00:51:41.223650 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.224935 kubelet[1899]: I0508 00:51:41.223660 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cni-path\") pod \"9b2edf20-f78b-4920-b99a-8341ff411f0d\" (UID: \"9b2edf20-f78b-4920-b99a-8341ff411f0d\") " May 8 00:51:41.224935 kubelet[1899]: I0508 00:51:41.223698 1899 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-ghj4z\" (UniqueName: \"kubernetes.io/projected/6fb0cb8c-8a27-4c31-a8d7-513b80d42d93-kube-api-access-ghj4z\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.224935 kubelet[1899]: I0508 00:51:41.223707 1899 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.224935 kubelet[1899]: I0508 00:51:41.223715 1899 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-hostproc\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.224935 kubelet[1899]: I0508 00:51:41.223723 1899 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.224935 kubelet[1899]: I0508 00:51:41.223732 1899 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-lib-modules\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.224935 kubelet[1899]: I0508 00:51:41.223741 1899 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/6fb0cb8c-8a27-4c31-a8d7-513b80d42d93-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.224935 kubelet[1899]: I0508 00:51:41.223748 1899 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.225101 kubelet[1899]: I0508 00:51:41.223754 1899 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.225101 kubelet[1899]: I0508 00:51:41.223762 1899 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.225101 kubelet[1899]: I0508 00:51:41.223769 1899 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.225101 kubelet[1899]: I0508 00:51:41.223799 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cni-path" (OuterVolumeSpecName: "cni-path") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.225101 kubelet[1899]: I0508 00:51:41.224398 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 8 00:51:41.226126 kubelet[1899]: I0508 00:51:41.226076 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 8 00:51:41.226910 kubelet[1899]: I0508 00:51:41.226870 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2edf20-f78b-4920-b99a-8341ff411f0d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:51:41.227179 kubelet[1899]: I0508 00:51:41.227145 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9b2edf20-f78b-4920-b99a-8341ff411f0d-kube-api-access-nh7jz" (OuterVolumeSpecName: "kube-api-access-nh7jz") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "kube-api-access-nh7jz". PluginName "kubernetes.io/projected", VolumeGidValue "" May 8 00:51:41.227833 kubelet[1899]: I0508 00:51:41.227802 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9b2edf20-f78b-4920-b99a-8341ff411f0d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "9b2edf20-f78b-4920-b99a-8341ff411f0d" (UID: "9b2edf20-f78b-4920-b99a-8341ff411f0d"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 8 00:51:41.324937 kubelet[1899]: I0508 00:51:41.324887 1899 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.324937 kubelet[1899]: I0508 00:51:41.324920 1899 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-nh7jz\" (UniqueName: \"kubernetes.io/projected/9b2edf20-f78b-4920-b99a-8341ff411f0d-kube-api-access-nh7jz\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.324937 kubelet[1899]: I0508 00:51:41.324929 1899 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9b2edf20-f78b-4920-b99a-8341ff411f0d-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.324937 kubelet[1899]: I0508 00:51:41.324937 1899 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cilium-run\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.324937 kubelet[1899]: I0508 00:51:41.324946 1899 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9b2edf20-f78b-4920-b99a-8341ff411f0d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.324937 kubelet[1899]: I0508 00:51:41.324954 1899 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9b2edf20-f78b-4920-b99a-8341ff411f0d-cni-path\") on node \"localhost\" DevicePath \"\"" May 8 00:51:41.449959 kubelet[1899]: I0508 00:51:41.449906 1899 scope.go:117] "RemoveContainer" containerID="8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f" May 8 00:51:41.451729 env[1212]: time="2025-05-08T00:51:41.451684634Z" level=info msg="RemoveContainer for \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\"" May 8 00:51:41.456037 env[1212]: time="2025-05-08T00:51:41.455987084Z" level=info msg="RemoveContainer for \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\" returns successfully" May 8 00:51:41.456492 kubelet[1899]: I0508 00:51:41.456460 1899 scope.go:117] "RemoveContainer" containerID="8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f" May 8 00:51:41.456779 env[1212]: time="2025-05-08T00:51:41.456651196Z" level=error msg="ContainerStatus for \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\": not found" May 8 00:51:41.457605 kubelet[1899]: E0508 00:51:41.456856 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\": not found" containerID="8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f" May 8 00:51:41.457605 kubelet[1899]: I0508 00:51:41.456898 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f"} err="failed to get container status \"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"8d310238813dbe12698c27c5003acbf0e5e7474159e8839edeeef92681ad230f\": not found" May 8 00:51:41.457605 kubelet[1899]: I0508 00:51:41.456988 1899 scope.go:117] "RemoveContainer" containerID="96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986" May 8 00:51:41.458258 env[1212]: time="2025-05-08T00:51:41.458161967Z" level=info msg="RemoveContainer for \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\"" May 8 00:51:41.460605 env[1212]: time="2025-05-08T00:51:41.460560314Z" level=info msg="RemoveContainer for \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\" returns successfully" May 8 00:51:41.461336 systemd[1]: Removed slice kubepods-besteffort-pod6fb0cb8c_8a27_4c31_a8d7_513b80d42d93.slice. May 8 00:51:41.461824 kubelet[1899]: I0508 00:51:41.461735 1899 scope.go:117] "RemoveContainer" containerID="8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109" May 8 00:51:41.465709 env[1212]: time="2025-05-08T00:51:41.465666386Z" level=info msg="RemoveContainer for \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\"" May 8 00:51:41.467473 systemd[1]: Removed slice kubepods-burstable-pod9b2edf20_f78b_4920_b99a_8341ff411f0d.slice. May 8 00:51:41.467555 systemd[1]: kubepods-burstable-pod9b2edf20_f78b_4920_b99a_8341ff411f0d.slice: Consumed 6.684s CPU time. May 8 00:51:41.469488 env[1212]: time="2025-05-08T00:51:41.469454513Z" level=info msg="RemoveContainer for \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\" returns successfully" May 8 00:51:41.469684 kubelet[1899]: I0508 00:51:41.469664 1899 scope.go:117] "RemoveContainer" containerID="30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8" May 8 00:51:41.470790 env[1212]: time="2025-05-08T00:51:41.470761379Z" level=info msg="RemoveContainer for \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\"" May 8 00:51:41.473496 env[1212]: time="2025-05-08T00:51:41.473443945Z" level=info msg="RemoveContainer for \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\" returns successfully" May 8 00:51:41.473713 kubelet[1899]: I0508 00:51:41.473692 1899 scope.go:117] "RemoveContainer" containerID="d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948" May 8 00:51:41.477936 env[1212]: time="2025-05-08T00:51:41.475830453Z" level=info msg="RemoveContainer for \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\"" May 8 00:51:41.482922 env[1212]: time="2025-05-08T00:51:41.482825628Z" level=info msg="RemoveContainer for \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\" returns successfully" May 8 00:51:41.483239 kubelet[1899]: I0508 00:51:41.483140 1899 scope.go:117] "RemoveContainer" containerID="61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba" May 8 00:51:41.484679 env[1212]: time="2025-05-08T00:51:41.484649697Z" level=info msg="RemoveContainer for \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\"" May 8 00:51:41.488792 env[1212]: time="2025-05-08T00:51:41.488749881Z" level=info msg="RemoveContainer for \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\" returns successfully" May 8 00:51:41.489022 kubelet[1899]: I0508 00:51:41.488998 1899 scope.go:117] "RemoveContainer" containerID="96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986" May 8 00:51:41.489388 env[1212]: time="2025-05-08T00:51:41.489304481Z" level=error msg="ContainerStatus for \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\": not found" May 8 00:51:41.489571 kubelet[1899]: E0508 00:51:41.489549 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\": not found" containerID="96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986" May 8 00:51:41.489666 kubelet[1899]: I0508 00:51:41.489643 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986"} err="failed to get container status \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\": rpc error: code = NotFound desc = an error occurred when try to find container \"96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986\": not found" May 8 00:51:41.489729 kubelet[1899]: I0508 00:51:41.489718 1899 scope.go:117] "RemoveContainer" containerID="8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109" May 8 00:51:41.490023 env[1212]: time="2025-05-08T00:51:41.489937636Z" level=error msg="ContainerStatus for \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\": not found" May 8 00:51:41.490771 kubelet[1899]: E0508 00:51:41.490749 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\": not found" containerID="8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109" May 8 00:51:41.490865 kubelet[1899]: I0508 00:51:41.490845 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109"} err="failed to get container status \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\": rpc error: code = NotFound desc = an error occurred when try to find container \"8b781dd886ae61f32bac3a79c1ce15508935daa8d9eaba11f5799a17b8862109\": not found" May 8 00:51:41.490927 kubelet[1899]: I0508 00:51:41.490916 1899 scope.go:117] "RemoveContainer" containerID="30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8" May 8 00:51:41.491259 env[1212]: time="2025-05-08T00:51:41.491168067Z" level=error msg="ContainerStatus for \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\": not found" May 8 00:51:41.491455 kubelet[1899]: E0508 00:51:41.491412 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\": not found" containerID="30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8" May 8 00:51:41.491562 kubelet[1899]: I0508 00:51:41.491543 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8"} 
err="failed to get container status \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\": rpc error: code = NotFound desc = an error occurred when try to find container \"30eec334eb71341df8bcb7a418df5b5edede800b7aaa8231abfa4459337c66f8\": not found" May 8 00:51:41.491627 kubelet[1899]: I0508 00:51:41.491612 1899 scope.go:117] "RemoveContainer" containerID="d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948" May 8 00:51:41.491892 env[1212]: time="2025-05-08T00:51:41.491838379Z" level=error msg="ContainerStatus for \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\": not found" May 8 00:51:41.491992 kubelet[1899]: E0508 00:51:41.491966 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\": not found" containerID="d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948" May 8 00:51:41.492030 kubelet[1899]: I0508 00:51:41.491995 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948"} err="failed to get container status \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\": rpc error: code = NotFound desc = an error occurred when try to find container \"d03cd1a7169fc26fe2565734fc43f21d3bf253f636ee447c2745e8d4be876948\": not found" May 8 00:51:41.492030 kubelet[1899]: I0508 00:51:41.492015 1899 scope.go:117] "RemoveContainer" containerID="61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba" May 8 00:51:41.493478 env[1212]: time="2025-05-08T00:51:41.492776591Z" level=error msg="ContainerStatus for \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\": not found" May 8 00:51:41.493831 kubelet[1899]: E0508 00:51:41.493808 1899 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\": not found" containerID="61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba" May 8 00:51:41.493971 kubelet[1899]: I0508 00:51:41.493933 1899 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba"} err="failed to get container status \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\": rpc error: code = NotFound desc = an error occurred when try to find container \"61e75b81b69e4701a5401cfd71a9ac99bdeeb6ad2c10f59cd6a1678456f884ba\": not found" May 8 00:51:41.975941 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96ccf6ebe55d73282acaeb465d32ad2b1b66323718cfd9e8cbe03d9117c39986-rootfs.mount: Deactivated successfully. May 8 00:51:41.976039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-831f15519ff3c03d7ad80d24012e4cffa1f4355280d22f4a8d65bbdbc13f9542-rootfs.mount: Deactivated successfully. 
May 8 00:51:41.976088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4-rootfs.mount: Deactivated successfully. May 8 00:51:41.976147 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1d6b645a3580a1626d37f79ccc466ec95848676726a2fa2513d4eeb0ac9e88a4-shm.mount: Deactivated successfully. May 8 00:51:41.976196 systemd[1]: var-lib-kubelet-pods-6fb0cb8c\x2d8a27\x2d4c31\x2da8d7\x2d513b80d42d93-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dghj4z.mount: Deactivated successfully. May 8 00:51:41.976244 systemd[1]: var-lib-kubelet-pods-9b2edf20\x2df78b\x2d4920\x2db99a\x2d8341ff411f0d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnh7jz.mount: Deactivated successfully. May 8 00:51:41.976299 systemd[1]: var-lib-kubelet-pods-9b2edf20\x2df78b\x2d4920\x2db99a\x2d8341ff411f0d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 8 00:51:41.976354 systemd[1]: var-lib-kubelet-pods-9b2edf20\x2df78b\x2d4920\x2db99a\x2d8341ff411f0d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 8 00:51:42.271119 kubelet[1899]: I0508 00:51:42.271024 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fb0cb8c-8a27-4c31-a8d7-513b80d42d93" path="/var/lib/kubelet/pods/6fb0cb8c-8a27-4c31-a8d7-513b80d42d93/volumes" May 8 00:51:42.271876 kubelet[1899]: I0508 00:51:42.271851 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9b2edf20-f78b-4920-b99a-8341ff411f0d" path="/var/lib/kubelet/pods/9b2edf20-f78b-4920-b99a-8341ff411f0d/volumes" May 8 00:51:42.930567 sshd[3503]: pam_unix(sshd:session): session closed for user core May 8 00:51:42.933942 systemd[1]: Started sshd@22-10.0.0.116:22-10.0.0.1:59472.service. May 8 00:51:42.934877 systemd[1]: sshd@21-10.0.0.116:22-10.0.0.1:59162.service: Deactivated successfully. May 8 00:51:42.935642 systemd[1]: session-22.scope: Deactivated successfully. May 8 00:51:42.935797 systemd[1]: session-22.scope: Consumed 1.456s CPU time. May 8 00:51:42.936786 systemd-logind[1202]: Session 22 logged out. Waiting for processes to exit. May 8 00:51:42.937738 systemd-logind[1202]: Removed session 22. May 8 00:51:42.969991 sshd[3665]: Accepted publickey for core from 10.0.0.1 port 59472 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:42.971311 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:42.975427 systemd[1]: Started session-23.scope. May 8 00:51:42.975761 systemd-logind[1202]: New session 23 of user core. May 8 00:51:43.737780 sshd[3665]: pam_unix(sshd:session): session closed for user core May 8 00:51:43.741348 systemd[1]: Started sshd@23-10.0.0.116:22-10.0.0.1:59476.service. May 8 00:51:43.758634 systemd-logind[1202]: Session 23 logged out. Waiting for processes to exit. May 8 00:51:43.759608 systemd[1]: sshd@22-10.0.0.116:22-10.0.0.1:59472.service: Deactivated successfully. 
May 8 00:51:43.761401 kubelet[1899]: E0508 00:51:43.760454 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="6fb0cb8c-8a27-4c31-a8d7-513b80d42d93" containerName="cilium-operator" May 8 00:51:43.761401 kubelet[1899]: E0508 00:51:43.760485 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b2edf20-f78b-4920-b99a-8341ff411f0d" containerName="mount-cgroup" May 8 00:51:43.761401 kubelet[1899]: E0508 00:51:43.760492 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b2edf20-f78b-4920-b99a-8341ff411f0d" containerName="apply-sysctl-overwrites" May 8 00:51:43.761401 kubelet[1899]: E0508 00:51:43.760499 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b2edf20-f78b-4920-b99a-8341ff411f0d" containerName="mount-bpf-fs" May 8 00:51:43.761401 kubelet[1899]: E0508 00:51:43.760505 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b2edf20-f78b-4920-b99a-8341ff411f0d" containerName="clean-cilium-state" May 8 00:51:43.761401 kubelet[1899]: E0508 00:51:43.760512 1899 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="9b2edf20-f78b-4920-b99a-8341ff411f0d" containerName="cilium-agent" May 8 00:51:43.761401 kubelet[1899]: I0508 00:51:43.760559 1899 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fb0cb8c-8a27-4c31-a8d7-513b80d42d93" containerName="cilium-operator" May 8 00:51:43.761401 kubelet[1899]: I0508 00:51:43.760566 1899 memory_manager.go:354] "RemoveStaleState removing state" podUID="9b2edf20-f78b-4920-b99a-8341ff411f0d" containerName="cilium-agent" May 8 00:51:43.760676 systemd[1]: session-23.scope: Deactivated successfully. May 8 00:51:43.762175 systemd-logind[1202]: Removed session 23. May 8 00:51:43.766979 systemd[1]: Created slice kubepods-burstable-pod5fd93e8c_a497_4022_b5ef_007850a1cd79.slice. May 8 00:51:43.785288 sshd[3677]: Accepted publickey for core from 10.0.0.1 port 59476 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU May 8 00:51:43.787009 sshd[3677]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 8 00:51:43.791457 systemd[1]: Started session-24.scope. May 8 00:51:43.791501 systemd-logind[1202]: New session 24 of user core. 
May 8 00:51:43.840778 kubelet[1899]: I0508 00:51:43.840741 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-hostproc\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.840991 kubelet[1899]: I0508 00:51:43.840940 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-cgroup\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841092 kubelet[1899]: I0508 00:51:43.841074 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frq8j\" (UniqueName: \"kubernetes.io/projected/5fd93e8c-a497-4022-b5ef-007850a1cd79-kube-api-access-frq8j\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841170 kubelet[1899]: I0508 00:51:43.841157 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cni-path\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841262 kubelet[1899]: I0508 00:51:43.841249 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-host-proc-sys-net\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841343 kubelet[1899]: I0508 00:51:43.841329 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-bpf-maps\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841416 kubelet[1899]: I0508 00:51:43.841404 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-lib-modules\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841515 kubelet[1899]: I0508 00:51:43.841502 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5fd93e8c-a497-4022-b5ef-007850a1cd79-clustermesh-secrets\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841590 kubelet[1899]: I0508 00:51:43.841576 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-ipsec-secrets\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841667 kubelet[1899]: I0508 00:51:43.841653 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-run\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841745 kubelet[1899]: I0508 00:51:43.841730 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-xtables-lock\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841822 kubelet[1899]: I0508 00:51:43.841807 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-etc-cni-netd\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.841889 kubelet[1899]: I0508 00:51:43.841877 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5fd93e8c-a497-4022-b5ef-007850a1cd79-hubble-tls\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.842022 kubelet[1899]: I0508 00:51:43.842003 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-config-path\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.842120 kubelet[1899]: I0508 00:51:43.842105 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-host-proc-sys-kernel\") pod \"cilium-w6n57\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") " pod="kube-system/cilium-w6n57"
May 8 00:51:43.914252 sshd[3677]: pam_unix(sshd:session): session closed for user core
May 8 00:51:43.918767 systemd[1]: Started sshd@24-10.0.0.116:22-10.0.0.1:59492.service.
May 8 00:51:43.919275 systemd[1]: sshd@23-10.0.0.116:22-10.0.0.1:59476.service: Deactivated successfully.
May 8 00:51:43.920489 systemd[1]: session-24.scope: Deactivated successfully.
May 8 00:51:43.921288 systemd-logind[1202]: Session 24 logged out. Waiting for processes to exit.
May 8 00:51:43.923887 systemd-logind[1202]: Removed session 24.
May 8 00:51:43.935204 kubelet[1899]: E0508 00:51:43.935139 1899 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-frq8j lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-w6n57" podUID="5fd93e8c-a497-4022-b5ef-007850a1cd79"
May 8 00:51:43.970059 sshd[3690]: Accepted publickey for core from 10.0.0.1 port 59492 ssh2: RSA SHA256:bNzqUoNIi+loVoVjyqrqS2pcdituzSfXJGlDy1FbsUU
May 8 00:51:43.971490 sshd[3690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
May 8 00:51:43.974722 systemd-logind[1202]: New session 25 of user core.
May 8 00:51:43.975846 systemd[1]: Started session-25.scope.
May 8 00:51:44.295017 kubelet[1899]: E0508 00:51:44.294979 1899 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:51:44.547906 kubelet[1899]: I0508 00:51:44.547785 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-cgroup\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.547906 kubelet[1899]: I0508 00:51:44.547825 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-xtables-lock\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.547906 kubelet[1899]: I0508 00:51:44.547847 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-hostproc\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.547906 kubelet[1899]: I0508 00:51:44.547869 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5fd93e8c-a497-4022-b5ef-007850a1cd79-clustermesh-secrets\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.547906 kubelet[1899]: I0508 00:51:44.547886 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-run\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.547906 kubelet[1899]: I0508 00:51:44.547904 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-etc-cni-netd\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548150 kubelet[1899]: I0508 00:51:44.547922 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-config-path\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548150 kubelet[1899]: I0508 00:51:44.547944 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-host-proc-sys-net\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548150 kubelet[1899]: I0508 00:51:44.547962 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frq8j\" (UniqueName: \"kubernetes.io/projected/5fd93e8c-a497-4022-b5ef-007850a1cd79-kube-api-access-frq8j\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548150 kubelet[1899]: I0508 00:51:44.547978 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-ipsec-secrets\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548150 kubelet[1899]: I0508 00:51:44.547995 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-bpf-maps\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548150 kubelet[1899]: I0508 00:51:44.548008 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cni-path\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548290 kubelet[1899]: I0508 00:51:44.548022 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-lib-modules\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548290 kubelet[1899]: I0508 00:51:44.548039 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5fd93e8c-a497-4022-b5ef-007850a1cd79-hubble-tls\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548290 kubelet[1899]: I0508 00:51:44.548055 1899 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-host-proc-sys-kernel\") pod \"5fd93e8c-a497-4022-b5ef-007850a1cd79\" (UID: \"5fd93e8c-a497-4022-b5ef-007850a1cd79\") "
May 8 00:51:44.548290 kubelet[1899]: I0508 00:51:44.547913 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-hostproc" (OuterVolumeSpecName: "hostproc") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.548290 kubelet[1899]: I0508 00:51:44.547920 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.548395 kubelet[1899]: I0508 00:51:44.547938 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.548395 kubelet[1899]: I0508 00:51:44.547940 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.548395 kubelet[1899]: I0508 00:51:44.548098 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.548395 kubelet[1899]: I0508 00:51:44.548152 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.549347 kubelet[1899]: I0508 00:51:44.549316 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.549501 kubelet[1899]: I0508 00:51:44.549484 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cni-path" (OuterVolumeSpecName: "cni-path") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.549663 kubelet[1899]: I0508 00:51:44.549625 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
May 8 00:51:44.549711 kubelet[1899]: I0508 00:51:44.549673 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.549711 kubelet[1899]: I0508 00:51:44.549692 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
May 8 00:51:44.551709 systemd[1]: var-lib-kubelet-pods-5fd93e8c\x2da497\x2d4022\x2db5ef\x2d007850a1cd79-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 8 00:51:44.552875 kubelet[1899]: I0508 00:51:44.552839 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd93e8c-a497-4022-b5ef-007850a1cd79-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 8 00:51:44.553044 kubelet[1899]: I0508 00:51:44.553009 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd93e8c-a497-4022-b5ef-007850a1cd79-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 8 00:51:44.553311 kubelet[1899]: I0508 00:51:44.553275 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fd93e8c-a497-4022-b5ef-007850a1cd79-kube-api-access-frq8j" (OuterVolumeSpecName: "kube-api-access-frq8j") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "kube-api-access-frq8j". PluginName "kubernetes.io/projected", VolumeGidValue ""
May 8 00:51:44.554482 kubelet[1899]: I0508 00:51:44.554446 1899 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "5fd93e8c-a497-4022-b5ef-007850a1cd79" (UID: "5fd93e8c-a497-4022-b5ef-007850a1cd79"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
May 8 00:51:44.648405 kubelet[1899]: I0508 00:51:44.648353 1899 reconciler_common.go:288] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648405 kubelet[1899]: I0508 00:51:44.648390 1899 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648405 kubelet[1899]: I0508 00:51:44.648399 1899 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cni-path\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648405 kubelet[1899]: I0508 00:51:44.648408 1899 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-lib-modules\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648405 kubelet[1899]: I0508 00:51:44.648416 1899 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/5fd93e8c-a497-4022-b5ef-007850a1cd79-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648642 kubelet[1899]: I0508 00:51:44.648423 1899 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648642 kubelet[1899]: I0508 00:51:44.648450 1899 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648642 kubelet[1899]: I0508 00:51:44.648459 1899 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648642 kubelet[1899]: I0508 00:51:44.648466 1899 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-hostproc\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648642 kubelet[1899]: I0508 00:51:44.648474 1899 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/5fd93e8c-a497-4022-b5ef-007850a1cd79-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648642 kubelet[1899]: I0508 00:51:44.648481 1899 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-run\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648642 kubelet[1899]: I0508 00:51:44.648488 1899 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648642 kubelet[1899]: I0508 00:51:44.648496 1899 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5fd93e8c-a497-4022-b5ef-007850a1cd79-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648806 kubelet[1899]: I0508 00:51:44.648505 1899 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/5fd93e8c-a497-4022-b5ef-007850a1cd79-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.648806 kubelet[1899]: I0508 00:51:44.648514 1899 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-frq8j\" (UniqueName: \"kubernetes.io/projected/5fd93e8c-a497-4022-b5ef-007850a1cd79-kube-api-access-frq8j\") on node \"localhost\" DevicePath \"\""
May 8 00:51:44.947153 systemd[1]: var-lib-kubelet-pods-5fd93e8c\x2da497\x2d4022\x2db5ef\x2d007850a1cd79-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfrq8j.mount: Deactivated successfully.
May 8 00:51:44.947245 systemd[1]: var-lib-kubelet-pods-5fd93e8c\x2da497\x2d4022\x2db5ef\x2d007850a1cd79-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
May 8 00:51:44.947300 systemd[1]: var-lib-kubelet-pods-5fd93e8c\x2da497\x2d4022\x2db5ef\x2d007850a1cd79-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 8 00:51:45.467567 systemd[1]: Removed slice kubepods-burstable-pod5fd93e8c_a497_4022_b5ef_007850a1cd79.slice.
May 8 00:51:45.510509 systemd[1]: Created slice kubepods-burstable-pod320327b9_913d_4292_942f_eea6fbec6e19.slice.
May 8 00:51:45.553486 kubelet[1899]: I0508 00:51:45.553447 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/320327b9-913d-4292-942f-eea6fbec6e19-hubble-tls\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553816 kubelet[1899]: I0508 00:51:45.553491 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/320327b9-913d-4292-942f-eea6fbec6e19-cilium-ipsec-secrets\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553816 kubelet[1899]: I0508 00:51:45.553514 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-host-proc-sys-net\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553816 kubelet[1899]: I0508 00:51:45.553529 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-hostproc\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553816 kubelet[1899]: I0508 00:51:45.553544 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-xtables-lock\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553816 kubelet[1899]: I0508 00:51:45.553561 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-host-proc-sys-kernel\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553816 kubelet[1899]: I0508 00:51:45.553577 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-cilium-cgroup\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553953 kubelet[1899]: I0508 00:51:45.553592 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-bpf-maps\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553953 kubelet[1899]: I0508 00:51:45.553608 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-cni-path\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553953 kubelet[1899]: I0508 00:51:45.553622 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-etc-cni-netd\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553953 kubelet[1899]: I0508 00:51:45.553637 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/320327b9-913d-4292-942f-eea6fbec6e19-clustermesh-secrets\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553953 kubelet[1899]: I0508 00:51:45.553653 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/320327b9-913d-4292-942f-eea6fbec6e19-cilium-config-path\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.553953 kubelet[1899]: I0508 00:51:45.553682 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-lib-modules\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.554080 kubelet[1899]: I0508 00:51:45.553698 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/320327b9-913d-4292-942f-eea6fbec6e19-cilium-run\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.554080 kubelet[1899]: I0508 00:51:45.553712 1899 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9c7p\" (UniqueName: \"kubernetes.io/projected/320327b9-913d-4292-942f-eea6fbec6e19-kube-api-access-c9c7p\") pod \"cilium-pdmfd\" (UID: \"320327b9-913d-4292-942f-eea6fbec6e19\") " pod="kube-system/cilium-pdmfd"
May 8 00:51:45.814532 kubelet[1899]: E0508 00:51:45.814394 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:45.815409 env[1212]: time="2025-05-08T00:51:45.815337924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pdmfd,Uid:320327b9-913d-4292-942f-eea6fbec6e19,Namespace:kube-system,Attempt:0,}"
May 8 00:51:45.827336 env[1212]: time="2025-05-08T00:51:45.827272050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 8 00:51:45.827336 env[1212]: time="2025-05-08T00:51:45.827313008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 8 00:51:45.827336 env[1212]: time="2025-05-08T00:51:45.827323807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 8 00:51:45.827515 env[1212]: time="2025-05-08T00:51:45.827456240Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0 pid=3721 runtime=io.containerd.runc.v2
May 8 00:51:45.837673 systemd[1]: Started cri-containerd-5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0.scope.
May 8 00:51:45.863501 env[1212]: time="2025-05-08T00:51:45.863458209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pdmfd,Uid:320327b9-913d-4292-942f-eea6fbec6e19,Namespace:kube-system,Attempt:0,} returns sandbox id \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\""
May 8 00:51:45.864455 kubelet[1899]: E0508 00:51:45.864172 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:45.867219 env[1212]: time="2025-05-08T00:51:45.866708065Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 8 00:51:45.877232 env[1212]: time="2025-05-08T00:51:45.877165356Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a034678bd8048dfe08da5f991a10fd962c781e5198d2e87b08d76ff22f82429c\""
May 8 00:51:45.877591 env[1212]: time="2025-05-08T00:51:45.877569333Z" level=info msg="StartContainer for \"a034678bd8048dfe08da5f991a10fd962c781e5198d2e87b08d76ff22f82429c\""
May 8 00:51:45.891820 systemd[1]: Started cri-containerd-a034678bd8048dfe08da5f991a10fd962c781e5198d2e87b08d76ff22f82429c.scope.
May 8 00:51:45.922108 env[1212]: time="2025-05-08T00:51:45.922052543Z" level=info msg="StartContainer for \"a034678bd8048dfe08da5f991a10fd962c781e5198d2e87b08d76ff22f82429c\" returns successfully"
May 8 00:51:45.948940 systemd[1]: cri-containerd-a034678bd8048dfe08da5f991a10fd962c781e5198d2e87b08d76ff22f82429c.scope: Deactivated successfully.
May 8 00:51:45.971614 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a034678bd8048dfe08da5f991a10fd962c781e5198d2e87b08d76ff22f82429c-rootfs.mount: Deactivated successfully.
May 8 00:51:45.983028 env[1212]: time="2025-05-08T00:51:45.982973786Z" level=info msg="shim disconnected" id=a034678bd8048dfe08da5f991a10fd962c781e5198d2e87b08d76ff22f82429c
May 8 00:51:45.983028 env[1212]: time="2025-05-08T00:51:45.983027223Z" level=warning msg="cleaning up after shim disconnected" id=a034678bd8048dfe08da5f991a10fd962c781e5198d2e87b08d76ff22f82429c namespace=k8s.io
May 8 00:51:45.983028 env[1212]: time="2025-05-08T00:51:45.983036702Z" level=info msg="cleaning up dead shim"
May 8 00:51:45.991336 env[1212]: time="2025-05-08T00:51:45.991293637Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3803 runtime=io.containerd.runc.v2\n"
May 8 00:51:46.000398 kubelet[1899]: I0508 00:51:46.000028 1899 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-08T00:51:45Z","lastTransitionTime":"2025-05-08T00:51:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 8 00:51:46.270989 kubelet[1899]: I0508 00:51:46.270932 1899 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fd93e8c-a497-4022-b5ef-007850a1cd79" path="/var/lib/kubelet/pods/5fd93e8c-a497-4022-b5ef-007850a1cd79/volumes"
May 8 00:51:46.466920 kubelet[1899]: E0508 00:51:46.466885 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:46.468917 env[1212]: time="2025-05-08T00:51:46.468879986Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 8 00:51:46.478948 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2859165936.mount: Deactivated successfully.
May 8 00:51:46.479552 env[1212]: time="2025-05-08T00:51:46.479514665Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"57cd2c5c30613c2af1d481a81da72c9b29072607db3e30be44435df3fb01f7bb\""
May 8 00:51:46.480652 env[1212]: time="2025-05-08T00:51:46.480619806Z" level=info msg="StartContainer for \"57cd2c5c30613c2af1d481a81da72c9b29072607db3e30be44435df3fb01f7bb\""
May 8 00:51:46.501722 systemd[1]: Started cri-containerd-57cd2c5c30613c2af1d481a81da72c9b29072607db3e30be44435df3fb01f7bb.scope.
May 8 00:51:46.536144 env[1212]: time="2025-05-08T00:51:46.536017281Z" level=info msg="StartContainer for \"57cd2c5c30613c2af1d481a81da72c9b29072607db3e30be44435df3fb01f7bb\" returns successfully"
May 8 00:51:46.547413 systemd[1]: cri-containerd-57cd2c5c30613c2af1d481a81da72c9b29072607db3e30be44435df3fb01f7bb.scope: Deactivated successfully.
May 8 00:51:46.576963 env[1212]: time="2025-05-08T00:51:46.576914082Z" level=info msg="shim disconnected" id=57cd2c5c30613c2af1d481a81da72c9b29072607db3e30be44435df3fb01f7bb
May 8 00:51:46.576963 env[1212]: time="2025-05-08T00:51:46.576962360Z" level=warning msg="cleaning up after shim disconnected" id=57cd2c5c30613c2af1d481a81da72c9b29072607db3e30be44435df3fb01f7bb namespace=k8s.io
May 8 00:51:46.576963 env[1212]: time="2025-05-08T00:51:46.576970799Z" level=info msg="cleaning up dead shim"
May 8 00:51:46.585883 env[1212]: time="2025-05-08T00:51:46.585788414Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3866 runtime=io.containerd.runc.v2\n"
May 8 00:51:46.947311 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-57cd2c5c30613c2af1d481a81da72c9b29072607db3e30be44435df3fb01f7bb-rootfs.mount: Deactivated successfully.
May 8 00:51:47.470481 kubelet[1899]: E0508 00:51:47.469570 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:47.478209 env[1212]: time="2025-05-08T00:51:47.478141284Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 8 00:51:47.493059 env[1212]: time="2025-05-08T00:51:47.493011231Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"17a3398b33d08536889453e04e32f1fde1fb7df8efc9ef7056f8124a7ae89a6d\""
May 8 00:51:47.493793 env[1212]: time="2025-05-08T00:51:47.493769113Z" level=info msg="StartContainer for \"17a3398b33d08536889453e04e32f1fde1fb7df8efc9ef7056f8124a7ae89a6d\""
May 8 00:51:47.520018 systemd[1]: Started cri-containerd-17a3398b33d08536889453e04e32f1fde1fb7df8efc9ef7056f8124a7ae89a6d.scope.
May 8 00:51:47.555359 env[1212]: time="2025-05-08T00:51:47.555312200Z" level=info msg="StartContainer for \"17a3398b33d08536889453e04e32f1fde1fb7df8efc9ef7056f8124a7ae89a6d\" returns successfully"
May 8 00:51:47.557735 systemd[1]: cri-containerd-17a3398b33d08536889453e04e32f1fde1fb7df8efc9ef7056f8124a7ae89a6d.scope: Deactivated successfully.
May 8 00:51:47.582515 env[1212]: time="2025-05-08T00:51:47.582338468Z" level=info msg="shim disconnected" id=17a3398b33d08536889453e04e32f1fde1fb7df8efc9ef7056f8124a7ae89a6d
May 8 00:51:47.582707 env[1212]: time="2025-05-08T00:51:47.582518459Z" level=warning msg="cleaning up after shim disconnected" id=17a3398b33d08536889453e04e32f1fde1fb7df8efc9ef7056f8124a7ae89a6d namespace=k8s.io
May 8 00:51:47.582707 env[1212]: time="2025-05-08T00:51:47.582530459Z" level=info msg="cleaning up dead shim"
May 8 00:51:47.589684 env[1212]: time="2025-05-08T00:51:47.589634989Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3924 runtime=io.containerd.runc.v2\n"
May 8 00:51:47.947404 systemd[1]: run-containerd-runc-k8s.io-17a3398b33d08536889453e04e32f1fde1fb7df8efc9ef7056f8124a7ae89a6d-runc.GQpkyo.mount: Deactivated successfully.
May 8 00:51:47.947518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-17a3398b33d08536889453e04e32f1fde1fb7df8efc9ef7056f8124a7ae89a6d-rootfs.mount: Deactivated successfully.
May 8 00:51:48.473426 kubelet[1899]: E0508 00:51:48.473340 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:48.476521 env[1212]: time="2025-05-08T00:51:48.476472897Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 8 00:51:48.494060 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2850790839.mount: Deactivated successfully.
May 8 00:51:48.496665 env[1212]: time="2025-05-08T00:51:48.496613612Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c27c914ff2b4b1effc0437ee3c4fa0f8ca97eb1ce1277a0c05e5bfd8f4bad363\""
May 8 00:51:48.497651 env[1212]: time="2025-05-08T00:51:48.497624366Z" level=info msg="StartContainer for \"c27c914ff2b4b1effc0437ee3c4fa0f8ca97eb1ce1277a0c05e5bfd8f4bad363\""
May 8 00:51:48.533609 systemd[1]: Started cri-containerd-c27c914ff2b4b1effc0437ee3c4fa0f8ca97eb1ce1277a0c05e5bfd8f4bad363.scope.
May 8 00:51:48.564681 systemd[1]: cri-containerd-c27c914ff2b4b1effc0437ee3c4fa0f8ca97eb1ce1277a0c05e5bfd8f4bad363.scope: Deactivated successfully.
May 8 00:51:48.565720 env[1212]: time="2025-05-08T00:51:48.565661524Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod320327b9_913d_4292_942f_eea6fbec6e19.slice/cri-containerd-c27c914ff2b4b1effc0437ee3c4fa0f8ca97eb1ce1277a0c05e5bfd8f4bad363.scope/memory.events\": no such file or directory"
May 8 00:51:48.567404 env[1212]: time="2025-05-08T00:51:48.567359286Z" level=info msg="StartContainer for \"c27c914ff2b4b1effc0437ee3c4fa0f8ca97eb1ce1277a0c05e5bfd8f4bad363\" returns successfully"
May 8 00:51:48.586989 env[1212]: time="2025-05-08T00:51:48.586934908Z" level=info msg="shim disconnected" id=c27c914ff2b4b1effc0437ee3c4fa0f8ca97eb1ce1277a0c05e5bfd8f4bad363
May 8 00:51:48.586989 env[1212]: time="2025-05-08T00:51:48.586985786Z" level=warning msg="cleaning up after shim disconnected" id=c27c914ff2b4b1effc0437ee3c4fa0f8ca97eb1ce1277a0c05e5bfd8f4bad363 namespace=k8s.io
May 8 00:51:48.586989 env[1212]: time="2025-05-08T00:51:48.586997185Z" level=info msg="cleaning up dead shim"
May 8 00:51:48.593261 env[1212]: time="2025-05-08T00:51:48.593220620Z" level=warning msg="cleanup warnings time=\"2025-05-08T00:51:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3980 runtime=io.containerd.runc.v2\n"
May 8 00:51:48.947516 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c27c914ff2b4b1effc0437ee3c4fa0f8ca97eb1ce1277a0c05e5bfd8f4bad363-rootfs.mount: Deactivated successfully.
May 8 00:51:49.270673 kubelet[1899]: E0508 00:51:49.270105 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:49.270816 kubelet[1899]: E0508 00:51:49.270762 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:49.295735 kubelet[1899]: E0508 00:51:49.295690 1899 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 8 00:51:49.476645 kubelet[1899]: E0508 00:51:49.476616 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:49.480762 env[1212]: time="2025-05-08T00:51:49.480718871Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 8 00:51:49.493010 env[1212]: time="2025-05-08T00:51:49.492961870Z" level=info msg="CreateContainer within sandbox \"5c685bc1aa50f3c2331c3d1dbd2bbf22f3eeac74162b7c9651667fde8a755ef0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"1943cf5d8d336b10ae3b1ce2c55fc58498949d21b471f76099d43109d885fec4\""
May 8 00:51:49.493582 env[1212]: time="2025-05-08T00:51:49.493550685Z" level=info msg="StartContainer for \"1943cf5d8d336b10ae3b1ce2c55fc58498949d21b471f76099d43109d885fec4\""
May 8 00:51:49.516058 systemd[1]: Started cri-containerd-1943cf5d8d336b10ae3b1ce2c55fc58498949d21b471f76099d43109d885fec4.scope.
May 8 00:51:49.556134 env[1212]: time="2025-05-08T00:51:49.555585123Z" level=info msg="StartContainer for \"1943cf5d8d336b10ae3b1ce2c55fc58498949d21b471f76099d43109d885fec4\" returns successfully"
May 8 00:51:49.795458 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce)))
May 8 00:51:49.948077 systemd[1]: run-containerd-runc-k8s.io-1943cf5d8d336b10ae3b1ce2c55fc58498949d21b471f76099d43109d885fec4-runc.MKrwdM.mount: Deactivated successfully.
May 8 00:51:50.269322 kubelet[1899]: E0508 00:51:50.269220 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:50.481368 kubelet[1899]: E0508 00:51:50.480525 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:50.497233 kubelet[1899]: I0508 00:51:50.496044 1899 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pdmfd" podStartSLOduration=5.496028436 podStartE2EDuration="5.496028436s" podCreationTimestamp="2025-05-08 00:51:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-08 00:51:50.495598093 +0000 UTC m=+86.329647550" watchObservedRunningTime="2025-05-08 00:51:50.496028436 +0000 UTC m=+86.330077893"
May 8 00:51:51.816074 kubelet[1899]: E0508 00:51:51.815997 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:52.234598 systemd[1]: run-containerd-runc-k8s.io-1943cf5d8d336b10ae3b1ce2c55fc58498949d21b471f76099d43109d885fec4-runc.ZOE3Pf.mount: Deactivated successfully.
May 8 00:51:52.575531 systemd-networkd[1042]: lxc_health: Link UP
May 8 00:51:52.591470 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
May 8 00:51:52.592329 systemd-networkd[1042]: lxc_health: Gained carrier
May 8 00:51:53.816027 kubelet[1899]: E0508 00:51:53.815989 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:54.381321 systemd[1]: run-containerd-runc-k8s.io-1943cf5d8d336b10ae3b1ce2c55fc58498949d21b471f76099d43109d885fec4-runc.nQuTme.mount: Deactivated successfully.
May 8 00:51:54.487383 kubelet[1899]: E0508 00:51:54.487348 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:54.649586 systemd-networkd[1042]: lxc_health: Gained IPv6LL
May 8 00:51:55.488521 kubelet[1899]: E0508 00:51:55.488492 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:56.269806 kubelet[1899]: E0508 00:51:56.269773 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:56.269955 kubelet[1899]: E0508 00:51:56.269810 1899 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 8 00:51:56.486038 systemd[1]: run-containerd-runc-k8s.io-1943cf5d8d336b10ae3b1ce2c55fc58498949d21b471f76099d43109d885fec4-runc.pzG8HB.mount: Deactivated successfully.
May 8 00:51:58.705339 sshd[3690]: pam_unix(sshd:session): session closed for user core
May 8 00:51:58.708013 systemd[1]: sshd@24-10.0.0.116:22-10.0.0.1:59492.service: Deactivated successfully.
May 8 00:51:58.708719 systemd[1]: session-25.scope: Deactivated successfully.
May 8 00:51:58.709264 systemd-logind[1202]: Session 25 logged out. Waiting for processes to exit.
May 8 00:51:58.710221 systemd-logind[1202]: Removed session 25.