May 16 00:33:47.724757 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] May 16 00:33:47.724777 kernel: Linux version 5.15.181-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Thu May 15 23:21:39 -00 2025 May 16 00:33:47.724785 kernel: efi: EFI v2.70 by EDK II May 16 00:33:47.724791 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 May 16 00:33:47.724796 kernel: random: crng init done May 16 00:33:47.724801 kernel: ACPI: Early table checksum verification disabled May 16 00:33:47.724807 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) May 16 00:33:47.724814 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) May 16 00:33:47.724819 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:33:47.724825 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:33:47.724830 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:33:47.724836 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:33:47.724841 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:33:47.724846 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:33:47.724854 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:33:47.724860 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:33:47.724866 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) May 16 00:33:47.724872 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 May 16 00:33:47.724878 kernel: NUMA: Failed to initialise from firmware May 16 00:33:47.724884 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] May 16 00:33:47.724889 kernel: NUMA: NODE_DATA [mem 0xdcb0b900-0xdcb10fff] May 16 00:33:47.724895 kernel: Zone ranges: May 16 00:33:47.724900 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] May 16 00:33:47.724907 kernel: DMA32 empty May 16 00:33:47.724913 kernel: Normal empty May 16 00:33:47.724918 kernel: Movable zone start for each node May 16 00:33:47.724924 kernel: Early memory node ranges May 16 00:33:47.724929 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] May 16 00:33:47.724935 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] May 16 00:33:47.724940 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] May 16 00:33:47.724946 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] May 16 00:33:47.724952 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] May 16 00:33:47.724957 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] May 16 00:33:47.724963 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] May 16 00:33:47.724968 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] May 16 00:33:47.724975 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges May 16 00:33:47.724981 kernel: psci: probing for conduit method from ACPI. May 16 00:33:47.724986 kernel: psci: PSCIv1.1 detected in firmware. 
May 16 00:33:47.724992 kernel: psci: Using standard PSCI v0.2 function IDs May 16 00:33:47.724997 kernel: psci: Trusted OS migration not required May 16 00:33:47.725006 kernel: psci: SMC Calling Convention v1.1 May 16 00:33:47.725012 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) May 16 00:33:47.725019 kernel: ACPI: SRAT not present May 16 00:33:47.725026 kernel: percpu: Embedded 30 pages/cpu s83032 r8192 d31656 u122880 May 16 00:33:47.725031 kernel: pcpu-alloc: s83032 r8192 d31656 u122880 alloc=30*4096 May 16 00:33:47.725038 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 May 16 00:33:47.725044 kernel: Detected PIPT I-cache on CPU0 May 16 00:33:47.725050 kernel: CPU features: detected: GIC system register CPU interface May 16 00:33:47.725055 kernel: CPU features: detected: Hardware dirty bit management May 16 00:33:47.725061 kernel: CPU features: detected: Spectre-v4 May 16 00:33:47.725067 kernel: CPU features: detected: Spectre-BHB May 16 00:33:47.725074 kernel: CPU features: kernel page table isolation forced ON by KASLR May 16 00:33:47.725081 kernel: CPU features: detected: Kernel page table isolation (KPTI) May 16 00:33:47.725086 kernel: CPU features: detected: ARM erratum 1418040 May 16 00:33:47.725092 kernel: CPU features: detected: SSBS not fully self-synchronizing May 16 00:33:47.725098 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 May 16 00:33:47.725104 kernel: Policy zone: DMA May 16 00:33:47.725111 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8 May 16 00:33:47.725118 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 16 00:33:47.725124 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 16 00:33:47.725130 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) May 16 00:33:47.725136 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 16 00:33:47.725143 kernel: Memory: 2457340K/2572288K available (9792K kernel code, 2094K rwdata, 7584K rodata, 36480K init, 777K bss, 114948K reserved, 0K cma-reserved) May 16 00:33:47.725149 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 May 16 00:33:47.725155 kernel: trace event string verifier disabled May 16 00:33:47.725161 kernel: rcu: Preemptible hierarchical RCU implementation. May 16 00:33:47.725168 kernel: rcu: RCU event tracing is enabled. May 16 00:33:47.725174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. May 16 00:33:47.725194 kernel: Trampoline variant of Tasks RCU enabled. May 16 00:33:47.725203 kernel: Tracing variant of Tasks RCU enabled. May 16 00:33:47.725243 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 16 00:33:47.725249 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 May 16 00:33:47.725256 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 May 16 00:33:47.725264 kernel: GICv3: 256 SPIs implemented May 16 00:33:47.725271 kernel: GICv3: 0 Extended SPIs implemented May 16 00:33:47.725277 kernel: GICv3: Distributor has no Range Selector support May 16 00:33:47.725283 kernel: Root IRQ handler: gic_handle_irq May 16 00:33:47.725289 kernel: GICv3: 16 PPIs implemented May 16 00:33:47.725295 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 May 16 00:33:47.725301 kernel: ACPI: SRAT not present May 16 00:33:47.725307 kernel: ITS [mem 0x08080000-0x0809ffff] May 16 00:33:47.725313 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) May 16 00:33:47.725320 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) May 16 00:33:47.725326 kernel: GICv3: using LPI property table @0x00000000400d0000 May 16 00:33:47.725332 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 May 16 00:33:47.725339 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:33:47.725345 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). May 16 00:33:47.725351 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns May 16 00:33:47.725357 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns May 16 00:33:47.725363 kernel: arm-pv: using stolen time PV May 16 00:33:47.725370 kernel: Console: colour dummy device 80x25 May 16 00:33:47.725376 kernel: ACPI: Core revision 20210730 May 16 00:33:47.725382 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) May 16 00:33:47.725389 kernel: pid_max: default: 32768 minimum: 301 May 16 00:33:47.725395 kernel: LSM: Security Framework initializing May 16 00:33:47.725402 kernel: SELinux: Initializing. May 16 00:33:47.725408 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 00:33:47.725415 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) May 16 00:33:47.725421 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3) May 16 00:33:47.725427 kernel: rcu: Hierarchical SRCU implementation. May 16 00:33:47.725433 kernel: Platform MSI: ITS@0x8080000 domain created May 16 00:33:47.725440 kernel: PCI/MSI: ITS@0x8080000 domain created May 16 00:33:47.725446 kernel: Remapping and enabling EFI services. May 16 00:33:47.725452 kernel: smp: Bringing up secondary CPUs ... 
May 16 00:33:47.725459 kernel: Detected PIPT I-cache on CPU1 May 16 00:33:47.725466 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 May 16 00:33:47.725472 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 May 16 00:33:47.725478 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:33:47.725484 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] May 16 00:33:47.725491 kernel: Detected PIPT I-cache on CPU2 May 16 00:33:47.725497 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 May 16 00:33:47.725504 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 May 16 00:33:47.725510 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:33:47.725516 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] May 16 00:33:47.725523 kernel: Detected PIPT I-cache on CPU3 May 16 00:33:47.725529 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 May 16 00:33:47.725536 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 May 16 00:33:47.725542 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 May 16 00:33:47.725552 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] May 16 00:33:47.725560 kernel: smp: Brought up 1 node, 4 CPUs May 16 00:33:47.725566 kernel: SMP: Total of 4 processors activated. May 16 00:33:47.725573 kernel: CPU features: detected: 32-bit EL0 Support May 16 00:33:47.725580 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence May 16 00:33:47.725586 kernel: CPU features: detected: Common not Private translations May 16 00:33:47.725593 kernel: CPU features: detected: CRC32 instructions May 16 00:33:47.725599 kernel: CPU features: detected: RCpc load-acquire (LDAPR) May 16 00:33:47.725607 kernel: CPU features: detected: LSE atomic instructions May 16 00:33:47.725614 kernel: CPU features: detected: Privileged Access Never May 16 00:33:47.725620 kernel: CPU features: detected: RAS Extension Support May 16 00:33:47.725627 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) May 16 00:33:47.725633 kernel: CPU: All CPU(s) started at EL1 May 16 00:33:47.725648 kernel: alternatives: patching kernel code May 16 00:33:47.725655 kernel: devtmpfs: initialized May 16 00:33:47.725662 kernel: KASLR enabled May 16 00:33:47.725668 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 16 00:33:47.725675 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) May 16 00:33:47.725682 kernel: pinctrl core: initialized pinctrl subsystem May 16 00:33:47.725688 kernel: SMBIOS 3.0.0 present. 
May 16 00:33:47.725695 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 May 16 00:33:47.725702 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 16 00:33:47.725711 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations May 16 00:33:47.725718 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations May 16 00:33:47.725724 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations May 16 00:33:47.725731 kernel: audit: initializing netlink subsys (disabled) May 16 00:33:47.725738 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1 May 16 00:33:47.725744 kernel: thermal_sys: Registered thermal governor 'step_wise' May 16 00:33:47.725751 kernel: cpuidle: using governor menu May 16 00:33:47.725757 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. May 16 00:33:47.725764 kernel: ASID allocator initialised with 32768 entries May 16 00:33:47.725771 kernel: ACPI: bus type PCI registered May 16 00:33:47.725778 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 16 00:33:47.725784 kernel: Serial: AMBA PL011 UART driver May 16 00:33:47.725791 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages May 16 00:33:47.725797 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages May 16 00:33:47.725804 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages May 16 00:33:47.725810 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages May 16 00:33:47.725817 kernel: cryptd: max_cpu_qlen set to 1000 May 16 00:33:47.725824 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) May 16 00:33:47.725831 kernel: ACPI: Added _OSI(Module Device) May 16 00:33:47.725839 kernel: ACPI: Added _OSI(Processor Device) May 16 00:33:47.725845 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 16 00:33:47.725851 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 16 00:33:47.725858 kernel: ACPI: Added _OSI(Linux-Dell-Video) May 16 00:33:47.725865 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) May 16 00:33:47.725871 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) May 16 00:33:47.725878 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 16 00:33:47.725884 kernel: ACPI: Interpreter enabled May 16 00:33:47.725892 kernel: ACPI: Using GIC for interrupt routing May 16 00:33:47.725899 kernel: ACPI: MCFG table detected, 1 entries May 16 00:33:47.725905 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA May 16 00:33:47.725912 kernel: printk: console [ttyAMA0] enabled May 16 00:33:47.725918 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) May 16 00:33:47.726049 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] May 16 00:33:47.726113 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] May 16 00:33:47.726173 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] May 16 00:33:47.726260 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 May 16 00:33:47.726318 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] May 16 00:33:47.726327 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] May 16 00:33:47.726334 kernel: PCI host bridge to bus 0000:00 May 16 00:33:47.726398 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] May 16 00:33:47.726450 kernel: pci_bus 
0000:00: root bus resource [io 0x0000-0xffff window] May 16 00:33:47.726503 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] May 16 00:33:47.726556 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] May 16 00:33:47.726626 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 May 16 00:33:47.726702 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 May 16 00:33:47.726766 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] May 16 00:33:47.726841 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] May 16 00:33:47.726899 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] May 16 00:33:47.726959 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] May 16 00:33:47.727017 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] May 16 00:33:47.727076 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] May 16 00:33:47.727127 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] May 16 00:33:47.727211 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] May 16 00:33:47.727268 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] May 16 00:33:47.727277 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 May 16 00:33:47.727284 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 May 16 00:33:47.727292 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 May 16 00:33:47.727299 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 May 16 00:33:47.727306 kernel: iommu: Default domain type: Translated May 16 00:33:47.727312 kernel: iommu: DMA domain TLB invalidation policy: strict mode May 16 00:33:47.727319 kernel: vgaarb: loaded May 16 00:33:47.727325 kernel: pps_core: LinuxPPS API ver. 1 registered May 16 00:33:47.727332 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 16 00:33:47.727339 kernel: PTP clock support registered May 16 00:33:47.727345 kernel: Registered efivars operations May 16 00:33:47.727353 kernel: clocksource: Switched to clocksource arch_sys_counter May 16 00:33:47.727360 kernel: VFS: Disk quotas dquot_6.6.0 May 16 00:33:47.727367 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 16 00:33:47.727373 kernel: pnp: PnP ACPI init May 16 00:33:47.727436 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved May 16 00:33:47.727445 kernel: pnp: PnP ACPI: found 1 devices May 16 00:33:47.727452 kernel: NET: Registered PF_INET protocol family May 16 00:33:47.727458 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) May 16 00:33:47.727467 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) May 16 00:33:47.727473 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 16 00:33:47.727480 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) May 16 00:33:47.727486 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) May 16 00:33:47.727493 kernel: TCP: Hash tables configured (established 32768 bind 32768) May 16 00:33:47.727500 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 00:33:47.727507 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) May 16 00:33:47.727513 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 16 00:33:47.727520 kernel: PCI: CLS 0 bytes, default 64 May 16 00:33:47.727528 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available May 16 00:33:47.727535 kernel: kvm [1]: HYP mode not available May 16 00:33:47.727541 kernel: Initialise system trusted keyrings May 16 00:33:47.727547 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 May 16 00:33:47.727554 kernel: Key type asymmetric registered May 16 00:33:47.727560 kernel: Asymmetric key parser 'x509' registered May 16 00:33:47.727567 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) May 16 00:33:47.727573 kernel: io scheduler mq-deadline registered May 16 00:33:47.727580 kernel: io scheduler kyber registered May 16 00:33:47.727587 kernel: io scheduler bfq registered May 16 00:33:47.727594 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 May 16 00:33:47.727601 kernel: ACPI: button: Power Button [PWRB] May 16 00:33:47.727608 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 May 16 00:33:47.727677 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) May 16 00:33:47.727687 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 16 00:33:47.727693 kernel: thunder_xcv, ver 1.0 May 16 00:33:47.727700 kernel: thunder_bgx, ver 1.0 May 16 00:33:47.727706 kernel: nicpf, ver 1.0 May 16 00:33:47.727715 kernel: nicvf, ver 1.0 May 16 00:33:47.727787 kernel: rtc-efi rtc-efi.0: registered as rtc0 May 16 00:33:47.727840 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-16T00:33:47 UTC (1747355627) May 16 00:33:47.727849 kernel: hid: raw HID events driver (C) Jiri Kosina May 16 00:33:47.727856 kernel: NET: Registered PF_INET6 protocol family May 16 00:33:47.727863 kernel: Segment Routing with IPv6 May 16 00:33:47.727869 kernel: In-situ OAM (IOAM) with IPv6 May 16 00:33:47.727876 kernel: NET: Registered PF_PACKET protocol family May 16 00:33:47.727884 kernel: Key type 
dns_resolver registered May 16 00:33:47.727890 kernel: registered taskstats version 1 May 16 00:33:47.727897 kernel: Loading compiled-in X.509 certificates May 16 00:33:47.727903 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.181-flatcar: 2793d535c1de6f1789b22ef06bd5666144f4eeb2' May 16 00:33:47.727910 kernel: Key type .fscrypt registered May 16 00:33:47.727916 kernel: Key type fscrypt-provisioning registered May 16 00:33:47.727922 kernel: ima: No TPM chip found, activating TPM-bypass! May 16 00:33:47.727929 kernel: ima: Allocated hash algorithm: sha1 May 16 00:33:47.727935 kernel: ima: No architecture policies found May 16 00:33:47.727943 kernel: clk: Disabling unused clocks May 16 00:33:47.727949 kernel: Freeing unused kernel memory: 36480K May 16 00:33:47.727956 kernel: Run /init as init process May 16 00:33:47.727962 kernel: with arguments: May 16 00:33:47.727969 kernel: /init May 16 00:33:47.727975 kernel: with environment: May 16 00:33:47.727981 kernel: HOME=/ May 16 00:33:47.727988 kernel: TERM=linux May 16 00:33:47.727994 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 16 00:33:47.728004 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 16 00:33:47.728012 systemd[1]: Detected virtualization kvm. May 16 00:33:47.728019 systemd[1]: Detected architecture arm64. May 16 00:33:47.728026 systemd[1]: Running in initrd. May 16 00:33:47.728033 systemd[1]: No hostname configured, using default hostname. May 16 00:33:47.728040 systemd[1]: Hostname set to . May 16 00:33:47.728047 systemd[1]: Initializing machine ID from VM UUID. May 16 00:33:47.728055 systemd[1]: Queued start job for default target initrd.target. May 16 00:33:47.728062 systemd[1]: Started systemd-ask-password-console.path. May 16 00:33:47.728069 systemd[1]: Reached target cryptsetup.target. May 16 00:33:47.728076 systemd[1]: Reached target paths.target. May 16 00:33:47.728082 systemd[1]: Reached target slices.target. May 16 00:33:47.728089 systemd[1]: Reached target swap.target. May 16 00:33:47.728096 systemd[1]: Reached target timers.target. May 16 00:33:47.728103 systemd[1]: Listening on iscsid.socket. May 16 00:33:47.728111 systemd[1]: Listening on iscsiuio.socket. May 16 00:33:47.728118 systemd[1]: Listening on systemd-journald-audit.socket. May 16 00:33:47.728126 systemd[1]: Listening on systemd-journald-dev-log.socket. May 16 00:33:47.728132 systemd[1]: Listening on systemd-journald.socket. May 16 00:33:47.728139 systemd[1]: Listening on systemd-networkd.socket. May 16 00:33:47.728146 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:33:47.728154 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:33:47.728160 systemd[1]: Reached target sockets.target. May 16 00:33:47.728169 systemd[1]: Starting kmod-static-nodes.service... May 16 00:33:47.728176 systemd[1]: Finished network-cleanup.service. May 16 00:33:47.728193 systemd[1]: Starting systemd-fsck-usr.service... May 16 00:33:47.728201 systemd[1]: Starting systemd-journald.service... May 16 00:33:47.728208 systemd[1]: Starting systemd-modules-load.service... May 16 00:33:47.728214 systemd[1]: Starting systemd-resolved.service... May 16 00:33:47.728221 systemd[1]: Starting systemd-vconsole-setup.service... 
May 16 00:33:47.728228 systemd[1]: Finished kmod-static-nodes.service. May 16 00:33:47.728239 systemd[1]: Finished systemd-fsck-usr.service. May 16 00:33:47.728248 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... May 16 00:33:47.728258 systemd-journald[290]: Journal started May 16 00:33:47.728297 systemd-journald[290]: Runtime Journal (/run/log/journal/1143c62f0a4b4cd6989c15047d9a5799) is 6.0M, max 48.7M, 42.6M free. May 16 00:33:47.719394 systemd-modules-load[291]: Inserted module 'overlay' May 16 00:33:47.730762 systemd[1]: Started systemd-journald.service. May 16 00:33:47.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.731234 systemd[1]: Finished systemd-vconsole-setup.service. May 16 00:33:47.738130 kernel: audit: type=1130 audit(1747355627.730:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.738151 kernel: audit: type=1130 audit(1747355627.733:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.733000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.734312 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 16 00:33:47.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.738087 systemd[1]: Starting dracut-cmdline-ask.service... May 16 00:33:47.742839 kernel: audit: type=1130 audit(1747355627.737:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.745205 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 16 00:33:47.751152 systemd-modules-load[291]: Inserted module 'br_netfilter' May 16 00:33:47.752063 kernel: Bridge firewalling registered May 16 00:33:47.758747 systemd[1]: Finished dracut-cmdline-ask.service. May 16 00:33:47.759403 systemd-resolved[292]: Positive Trust Anchors: May 16 00:33:47.759409 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:33:47.759437 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 16 00:33:47.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:47.760362 systemd[1]: Starting dracut-cmdline.service... May 16 00:33:47.776955 kernel: audit: type=1130 audit(1747355627.759:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.776978 kernel: SCSI subsystem initialized May 16 00:33:47.776987 kernel: audit: type=1130 audit(1747355627.773:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.763925 systemd-resolved[292]: Defaulting to hostname 'linux'. May 16 00:33:47.764662 systemd[1]: Started systemd-resolved.service. May 16 00:33:47.773518 systemd[1]: Reached target nss-lookup.target. May 16 00:33:47.782699 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 16 00:33:47.782719 kernel: device-mapper: uevent: version 1.0.3 May 16 00:33:47.782728 dracut-cmdline[309]: dracut-dracut-053 May 16 00:33:47.782728 dracut-cmdline[309]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2d88e96fdc9dc9b028836e57c250f3fd2abd3e6490e27ecbf72d8b216e3efce8 May 16 00:33:47.789687 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 16 00:33:47.789382 systemd-modules-load[291]: Inserted module 'dm_multipath' May 16 00:33:47.790180 systemd[1]: Finished systemd-modules-load.service. May 16 00:33:47.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.792112 systemd[1]: Starting systemd-sysctl.service... May 16 00:33:47.795718 kernel: audit: type=1130 audit(1747355627.791:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.801325 systemd[1]: Finished systemd-sysctl.service. May 16 00:33:47.805204 kernel: audit: type=1130 audit(1747355627.801:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.801000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.850211 kernel: Loading iSCSI transport class v2.0-870. May 16 00:33:47.864213 kernel: iscsi: registered transport (tcp) May 16 00:33:47.878595 kernel: iscsi: registered transport (qla4xxx) May 16 00:33:47.878617 kernel: QLogic iSCSI HBA Driver May 16 00:33:47.912557 systemd[1]: Finished dracut-cmdline.service. 
May 16 00:33:47.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.914237 systemd[1]: Starting dracut-pre-udev.service... May 16 00:33:47.917544 kernel: audit: type=1130 audit(1747355627.913:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:47.959214 kernel: raid6: neonx8 gen() 13725 MB/s May 16 00:33:47.976202 kernel: raid6: neonx8 xor() 10760 MB/s May 16 00:33:47.993197 kernel: raid6: neonx4 gen() 13539 MB/s May 16 00:33:48.010201 kernel: raid6: neonx4 xor() 11162 MB/s May 16 00:33:48.027202 kernel: raid6: neonx2 gen() 12979 MB/s May 16 00:33:48.044203 kernel: raid6: neonx2 xor() 10402 MB/s May 16 00:33:48.061197 kernel: raid6: neonx1 gen() 10595 MB/s May 16 00:33:48.078207 kernel: raid6: neonx1 xor() 8783 MB/s May 16 00:33:48.095215 kernel: raid6: int64x8 gen() 6263 MB/s May 16 00:33:48.112214 kernel: raid6: int64x8 xor() 3540 MB/s May 16 00:33:48.129213 kernel: raid6: int64x4 gen() 7220 MB/s May 16 00:33:48.146212 kernel: raid6: int64x4 xor() 3854 MB/s May 16 00:33:48.163205 kernel: raid6: int64x2 gen() 6146 MB/s May 16 00:33:48.180212 kernel: raid6: int64x2 xor() 3320 MB/s May 16 00:33:48.197212 kernel: raid6: int64x1 gen() 5043 MB/s May 16 00:33:48.214294 kernel: raid6: int64x1 xor() 2643 MB/s May 16 00:33:48.214305 kernel: raid6: using algorithm neonx8 gen() 13725 MB/s May 16 00:33:48.214313 kernel: raid6: .... xor() 10760 MB/s, rmw enabled May 16 00:33:48.215383 kernel: raid6: using neon recovery algorithm May 16 00:33:48.225207 kernel: xor: measuring software checksum speed May 16 00:33:48.226505 kernel: 8regs : 15234 MB/sec May 16 00:33:48.226526 kernel: 32regs : 20697 MB/sec May 16 00:33:48.227735 kernel: arm64_neon : 27663 MB/sec May 16 00:33:48.227745 kernel: xor: using function: arm64_neon (27663 MB/sec) May 16 00:33:48.279205 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no May 16 00:33:48.288987 systemd[1]: Finished dracut-pre-udev.service. May 16 00:33:48.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:48.292000 audit: BPF prog-id=7 op=LOAD May 16 00:33:48.292000 audit: BPF prog-id=8 op=LOAD May 16 00:33:48.293212 kernel: audit: type=1130 audit(1747355628.289:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:48.293306 systemd[1]: Starting systemd-udevd.service... May 16 00:33:48.304788 systemd-udevd[492]: Using default interface naming scheme 'v252'. May 16 00:33:48.308081 systemd[1]: Started systemd-udevd.service. May 16 00:33:48.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:48.310115 systemd[1]: Starting dracut-pre-trigger.service... May 16 00:33:48.320326 dracut-pre-trigger[499]: rd.md=0: removing MD RAID activation May 16 00:33:48.346264 systemd[1]: Finished dracut-pre-trigger.service. 
May 16 00:33:48.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:48.347799 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:33:48.382214 systemd[1]: Finished systemd-udev-trigger.service. May 16 00:33:48.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:48.408726 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 16 00:33:48.412952 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 16 00:33:48.412971 kernel: GPT:9289727 != 19775487 May 16 00:33:48.412981 kernel: GPT:Alternate GPT header not at the end of the disk. May 16 00:33:48.412989 kernel: GPT:9289727 != 19775487 May 16 00:33:48.412996 kernel: GPT: Use GNU Parted to correct GPT errors. May 16 00:33:48.413005 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:33:48.426192 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 16 00:33:48.427680 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 16 00:33:48.430807 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (546) May 16 00:33:48.432120 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 16 00:33:48.441847 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 16 00:33:48.445265 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:33:48.447657 systemd[1]: Starting disk-uuid.service... May 16 00:33:48.455854 disk-uuid[563]: Primary Header is updated. May 16 00:33:48.455854 disk-uuid[563]: Secondary Entries is updated. May 16 00:33:48.455854 disk-uuid[563]: Secondary Header is updated. May 16 00:33:48.458914 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:33:48.467207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:33:49.472180 disk-uuid[564]: The operation has completed successfully. May 16 00:33:49.473417 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 16 00:33:49.491570 systemd[1]: disk-uuid.service: Deactivated successfully. May 16 00:33:49.492000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.491671 systemd[1]: Finished disk-uuid.service. May 16 00:33:49.495669 systemd[1]: Starting verity-setup.service... May 16 00:33:49.511200 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 16 00:33:49.536624 systemd[1]: Found device dev-mapper-usr.device. May 16 00:33:49.538156 systemd[1]: Mounting sysusr-usr.mount... May 16 00:33:49.538970 systemd[1]: Finished verity-setup.service. May 16 00:33:49.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.584214 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. 
May 16 00:33:49.584390 systemd[1]: Mounted sysusr-usr.mount. May 16 00:33:49.585235 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 16 00:33:49.585911 systemd[1]: Starting ignition-setup.service... May 16 00:33:49.588280 systemd[1]: Starting parse-ip-for-networkd.service... May 16 00:33:49.594691 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:33:49.594726 kernel: BTRFS info (device vda6): using free space tree May 16 00:33:49.594736 kernel: BTRFS info (device vda6): has skinny extents May 16 00:33:49.602125 systemd[1]: mnt-oem.mount: Deactivated successfully. May 16 00:33:49.608386 systemd[1]: Finished ignition-setup.service. May 16 00:33:49.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.609935 systemd[1]: Starting ignition-fetch-offline.service... May 16 00:33:49.672769 systemd[1]: Finished parse-ip-for-networkd.service. May 16 00:33:49.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.674000 audit: BPF prog-id=9 op=LOAD May 16 00:33:49.675025 systemd[1]: Starting systemd-networkd.service... May 16 00:33:49.687417 ignition[644]: Ignition 2.14.0 May 16 00:33:49.687425 ignition[644]: Stage: fetch-offline May 16 00:33:49.687460 ignition[644]: no configs at "/usr/lib/ignition/base.d" May 16 00:33:49.687469 ignition[644]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:33:49.687592 ignition[644]: parsed url from cmdline: "" May 16 00:33:49.687595 ignition[644]: no config URL provided May 16 00:33:49.687599 ignition[644]: reading system config file "/usr/lib/ignition/user.ign" May 16 00:33:49.687605 ignition[644]: no config at "/usr/lib/ignition/user.ign" May 16 00:33:49.687622 ignition[644]: op(1): [started] loading QEMU firmware config module May 16 00:33:49.687628 ignition[644]: op(1): executing: "modprobe" "qemu_fw_cfg" May 16 00:33:49.696466 ignition[644]: op(1): [finished] loading QEMU firmware config module May 16 00:33:49.700318 systemd-networkd[737]: lo: Link UP May 16 00:33:49.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.700330 systemd-networkd[737]: lo: Gained carrier May 16 00:33:49.700688 systemd-networkd[737]: Enumeration completed May 16 00:33:49.700864 systemd-networkd[737]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:33:49.701168 systemd[1]: Started systemd-networkd.service. May 16 00:33:49.701862 systemd-networkd[737]: eth0: Link UP May 16 00:33:49.701865 systemd-networkd[737]: eth0: Gained carrier May 16 00:33:49.702377 systemd[1]: Reached target network.target. May 16 00:33:49.703871 systemd[1]: Starting iscsiuio.service... May 16 00:33:49.712349 systemd[1]: Started iscsiuio.service. May 16 00:33:49.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.713958 systemd[1]: Starting iscsid.service... 
May 16 00:33:49.717275 iscsid[744]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 16 00:33:49.717275 iscsid[744]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. May 16 00:33:49.717275 iscsid[744]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 16 00:33:49.717275 iscsid[744]: If using hardware iscsi like qla4xxx this message can be ignored. May 16 00:33:49.717275 iscsid[744]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 16 00:33:49.717275 iscsid[744]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 16 00:33:49.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.719938 systemd[1]: Started iscsid.service. May 16 00:33:49.725812 systemd[1]: Starting dracut-initqueue.service... May 16 00:33:49.727865 systemd-networkd[737]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:33:49.735687 systemd[1]: Finished dracut-initqueue.service. May 16 00:33:49.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.736686 systemd[1]: Reached target remote-fs-pre.target. May 16 00:33:49.738105 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:33:49.739772 systemd[1]: Reached target remote-fs.target. May 16 00:33:49.741989 systemd[1]: Starting dracut-pre-mount.service... May 16 00:33:49.749145 systemd[1]: Finished dracut-pre-mount.service. May 16 00:33:49.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.761132 ignition[644]: parsing config with SHA512: c921b7431eedff018f7970652daa5cf96863ae1de0e34e8dfa623034f232fca364c2e736263a79f7beef3f9be6820bf439a294870ea015c698a784108cf273db May 16 00:33:49.766920 unknown[644]: fetched base config from "system" May 16 00:33:49.766931 unknown[644]: fetched user config from "qemu" May 16 00:33:49.767456 ignition[644]: fetch-offline: fetch-offline passed May 16 00:33:49.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.768496 systemd[1]: Finished ignition-fetch-offline.service. May 16 00:33:49.767512 ignition[644]: Ignition finished successfully May 16 00:33:49.769954 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 16 00:33:49.770669 systemd[1]: Starting ignition-kargs.service... 
May 16 00:33:49.778989 ignition[758]: Ignition 2.14.0 May 16 00:33:49.778999 ignition[758]: Stage: kargs May 16 00:33:49.779087 ignition[758]: no configs at "/usr/lib/ignition/base.d" May 16 00:33:49.779096 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:33:49.779919 ignition[758]: kargs: kargs passed May 16 00:33:49.779958 ignition[758]: Ignition finished successfully May 16 00:33:49.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.783069 systemd[1]: Finished ignition-kargs.service. May 16 00:33:49.785075 systemd[1]: Starting ignition-disks.service... May 16 00:33:49.791424 ignition[764]: Ignition 2.14.0 May 16 00:33:49.791441 ignition[764]: Stage: disks May 16 00:33:49.791526 ignition[764]: no configs at "/usr/lib/ignition/base.d" May 16 00:33:49.793406 systemd[1]: Finished ignition-disks.service. May 16 00:33:49.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.791535 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:33:49.794938 systemd[1]: Reached target initrd-root-device.target. May 16 00:33:49.792426 ignition[764]: disks: disks passed May 16 00:33:49.796241 systemd[1]: Reached target local-fs-pre.target. May 16 00:33:49.792464 ignition[764]: Ignition finished successfully May 16 00:33:49.797855 systemd[1]: Reached target local-fs.target. May 16 00:33:49.799202 systemd[1]: Reached target sysinit.target. May 16 00:33:49.800356 systemd[1]: Reached target basic.target. May 16 00:33:49.802389 systemd[1]: Starting systemd-fsck-root.service... May 16 00:33:49.812831 systemd-fsck[772]: ROOT: clean, 619/553520 files, 56022/553472 blocks May 16 00:33:49.816169 systemd[1]: Finished systemd-fsck-root.service. May 16 00:33:49.817000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.818202 systemd[1]: Mounting sysroot.mount... May 16 00:33:49.824878 systemd[1]: Mounted sysroot.mount. May 16 00:33:49.826101 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 16 00:33:49.825638 systemd[1]: Reached target initrd-root-fs.target. May 16 00:33:49.827845 systemd[1]: Mounting sysroot-usr.mount... May 16 00:33:49.828721 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. May 16 00:33:49.828759 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 16 00:33:49.828783 systemd[1]: Reached target ignition-diskful.target. May 16 00:33:49.830614 systemd[1]: Mounted sysroot-usr.mount. May 16 00:33:49.832550 systemd[1]: Starting initrd-setup-root.service... 
May 16 00:33:49.836741 initrd-setup-root[782]: cut: /sysroot/etc/passwd: No such file or directory May 16 00:33:49.840147 initrd-setup-root[791]: cut: /sysroot/etc/group: No such file or directory May 16 00:33:49.844273 initrd-setup-root[799]: cut: /sysroot/etc/shadow: No such file or directory May 16 00:33:49.848246 initrd-setup-root[807]: cut: /sysroot/etc/gshadow: No such file or directory May 16 00:33:49.873611 systemd[1]: Finished initrd-setup-root.service. May 16 00:33:49.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.875149 systemd[1]: Starting ignition-mount.service... May 16 00:33:49.876475 systemd[1]: Starting sysroot-boot.service... May 16 00:33:49.880349 bash[824]: umount: /sysroot/usr/share/oem: not mounted. May 16 00:33:49.889045 ignition[826]: INFO : Ignition 2.14.0 May 16 00:33:49.889045 ignition[826]: INFO : Stage: mount May 16 00:33:49.890621 ignition[826]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:33:49.890621 ignition[826]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:33:49.890621 ignition[826]: INFO : mount: mount passed May 16 00:33:49.890621 ignition[826]: INFO : Ignition finished successfully May 16 00:33:49.894000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.892880 systemd[1]: Finished ignition-mount.service. May 16 00:33:49.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:49.895338 systemd[1]: Finished sysroot-boot.service. May 16 00:33:50.546019 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 16 00:33:50.552213 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (835) May 16 00:33:50.553825 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 16 00:33:50.553840 kernel: BTRFS info (device vda6): using free space tree May 16 00:33:50.553850 kernel: BTRFS info (device vda6): has skinny extents May 16 00:33:50.557095 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 16 00:33:50.558685 systemd[1]: Starting ignition-files.service... 
May 16 00:33:50.571494 ignition[855]: INFO : Ignition 2.14.0 May 16 00:33:50.571494 ignition[855]: INFO : Stage: files May 16 00:33:50.573000 ignition[855]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:33:50.573000 ignition[855]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:33:50.573000 ignition[855]: DEBUG : files: compiled without relabeling support, skipping May 16 00:33:50.576604 ignition[855]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 16 00:33:50.576604 ignition[855]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 16 00:33:50.579378 ignition[855]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 16 00:33:50.580675 ignition[855]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 16 00:33:50.580675 ignition[855]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 16 00:33:50.580675 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 16 00:33:50.580675 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 16 00:33:50.580017 unknown[855]: wrote ssh authorized keys file for user: core May 16 00:33:51.039408 systemd-networkd[737]: eth0: Gained IPv6LL May 16 00:33:52.606481 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 16 00:33:54.000189 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 16 00:33:54.002325 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:33:54.002325 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 16 00:33:54.306927 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 16 00:33:54.462725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 16 00:33:54.462725 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file 
"/sysroot/home/core/nfs-pvc.yaml" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:33:54.466303 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 16 00:33:54.855797 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK May 16 00:33:55.254851 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 16 00:33:55.254851 ignition[855]: INFO : files: op(c): [started] processing unit "prepare-helm.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service" May 16 00:33:55.258503 ignition[855]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:33:55.301122 ignition[855]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 16 00:33:55.302674 ignition[855]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service" May 16 00:33:55.302674 ignition[855]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" May 16 00:33:55.302674 ignition[855]: INFO : files: 
createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" May 16 00:33:55.302674 ignition[855]: INFO : files: files passed May 16 00:33:55.302674 ignition[855]: INFO : Ignition finished successfully May 16 00:33:55.313668 kernel: kauditd_printk_skb: 23 callbacks suppressed May 16 00:33:55.313691 kernel: audit: type=1130 audit(1747355635.308:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.306456 systemd[1]: Finished ignition-files.service. May 16 00:33:55.309305 systemd[1]: Starting initrd-setup-root-after-ignition.service... May 16 00:33:55.317135 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory May 16 00:33:55.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.312897 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 16 00:33:55.330302 kernel: audit: type=1130 audit(1747355635.317:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.330331 kernel: audit: type=1131 audit(1747355635.317:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.330342 kernel: audit: type=1130 audit(1747355635.324:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.330430 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 16 00:33:55.313671 systemd[1]: Starting ignition-quench.service... May 16 00:33:55.316377 systemd[1]: ignition-quench.service: Deactivated successfully. May 16 00:33:55.316463 systemd[1]: Finished ignition-quench.service. May 16 00:33:55.321825 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 16 00:33:55.324887 systemd[1]: Reached target ignition-complete.target. May 16 00:33:55.330559 systemd[1]: Starting initrd-parse-etc.service... May 16 00:33:55.343223 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 16 00:33:55.343314 systemd[1]: Finished initrd-parse-etc.service. 
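The Ignition "files" stage above logs every operation as a [started]/[finished] pair with microsecond timestamps; for example op(3), writing the Helm tarball, spans roughly 3.4 s, most of it the download. A minimal Python sketch, assuming journald-style lines exactly like those shown (the log carries no year, so one is assumed), could pair the markers to report per-op durations:

import re
from datetime import datetime

# Matches e.g.: May 16 00:33:54.000189 ignition[855]: INFO : files: ... op(3): [finished] writing file "..."
LINE = re.compile(
    r"(?P<ts>\w{3} \d{1,2} \d{2}:\d{2}:\d{2}\.\d+) ignition\[\d+\]: \w+ : "
    r".*?(?P<op>op\([0-9a-f]+\)): \[(?P<phase>started|finished)\] (?P<what>.+)"
)

def op_durations(lines, year=2025):  # assumed year; the log omits it
    started = {}
    for line in lines:
        m = LINE.search(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m['ts']}", "%Y %b %d %H:%M:%S.%f")
        if m["phase"] == "started":
            started[m["op"]] = (ts, m["what"])
        elif m["op"] in started:
            t0, what = started.pop(m["op"])
            yield m["op"], what, (ts - t0).total_seconds()

sample = [
    'May 16 00:33:50.580675 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"',
    'May 16 00:33:54.000189 ignition[855]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"',
]
for op, what, secs in op_durations(sample):
    print(f"{op}: {secs:.3f}s  {what}")   # -> op(3): 3.420s  writing file "..."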
May 16 00:33:55.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.345081 systemd[1]: Reached target initrd-fs.target. May 16 00:33:55.351560 kernel: audit: type=1130 audit(1747355635.344:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.351581 kernel: audit: type=1131 audit(1747355635.344:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.350945 systemd[1]: Reached target initrd.target. May 16 00:33:55.352276 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 16 00:33:55.353024 systemd[1]: Starting dracut-pre-pivot.service... May 16 00:33:55.363601 systemd[1]: Finished dracut-pre-pivot.service. May 16 00:33:55.364000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.365342 systemd[1]: Starting initrd-cleanup.service... May 16 00:33:55.369016 kernel: audit: type=1130 audit(1747355635.364:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.373708 systemd[1]: Stopped target nss-lookup.target. May 16 00:33:55.374590 systemd[1]: Stopped target remote-cryptsetup.target. May 16 00:33:55.376083 systemd[1]: Stopped target timers.target. May 16 00:33:55.377420 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 16 00:33:55.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.377532 systemd[1]: Stopped dracut-pre-pivot.service. May 16 00:33:55.383038 kernel: audit: type=1131 audit(1747355635.378:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.378795 systemd[1]: Stopped target initrd.target. May 16 00:33:55.382493 systemd[1]: Stopped target basic.target. May 16 00:33:55.383799 systemd[1]: Stopped target ignition-complete.target. May 16 00:33:55.385130 systemd[1]: Stopped target ignition-diskful.target. May 16 00:33:55.386469 systemd[1]: Stopped target initrd-root-device.target. May 16 00:33:55.387905 systemd[1]: Stopped target remote-fs.target. May 16 00:33:55.389273 systemd[1]: Stopped target remote-fs-pre.target. May 16 00:33:55.390726 systemd[1]: Stopped target sysinit.target. May 16 00:33:55.391973 systemd[1]: Stopped target local-fs.target. May 16 00:33:55.393316 systemd[1]: Stopped target local-fs-pre.target. May 16 00:33:55.394626 systemd[1]: Stopped target swap.target. 
May 16 00:33:55.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.395892 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 16 00:33:55.401632 kernel: audit: type=1131 audit(1747355635.397:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.396003 systemd[1]: Stopped dracut-pre-mount.service. May 16 00:33:55.402000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.397361 systemd[1]: Stopped target cryptsetup.target. May 16 00:33:55.406790 kernel: audit: type=1131 audit(1747355635.402:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.400857 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 16 00:33:55.400965 systemd[1]: Stopped dracut-initqueue.service. May 16 00:33:55.402507 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 16 00:33:55.402611 systemd[1]: Stopped ignition-fetch-offline.service. May 16 00:33:55.406305 systemd[1]: Stopped target paths.target. May 16 00:33:55.407514 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 16 00:33:55.411215 systemd[1]: Stopped systemd-ask-password-console.path. May 16 00:33:55.412382 systemd[1]: Stopped target slices.target. May 16 00:33:55.413910 systemd[1]: Stopped target sockets.target. May 16 00:33:55.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.415304 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 16 00:33:55.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.415419 systemd[1]: Stopped initrd-setup-root-after-ignition.service. May 16 00:33:55.421830 iscsid[744]: iscsid shutting down. May 16 00:33:55.416803 systemd[1]: ignition-files.service: Deactivated successfully. May 16 00:33:55.416918 systemd[1]: Stopped ignition-files.service. May 16 00:33:55.418962 systemd[1]: Stopping ignition-mount.service... May 16 00:33:55.419850 systemd[1]: Stopping iscsid.service... May 16 00:33:55.423152 systemd[1]: Stopping sysroot-boot.service... May 16 00:33:55.424544 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 16 00:33:55.425000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:55.427000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.424695 systemd[1]: Stopped systemd-udev-trigger.service. May 16 00:33:55.430000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.431108 ignition[896]: INFO : Ignition 2.14.0 May 16 00:33:55.431108 ignition[896]: INFO : Stage: umount May 16 00:33:55.431108 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d" May 16 00:33:55.431108 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 16 00:33:55.431108 ignition[896]: INFO : umount: umount passed May 16 00:33:55.431108 ignition[896]: INFO : Ignition finished successfully May 16 00:33:55.437000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.426071 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 16 00:33:55.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.439000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.426174 systemd[1]: Stopped dracut-pre-trigger.service. May 16 00:33:55.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.428853 systemd[1]: iscsid.service: Deactivated successfully. May 16 00:33:55.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.428944 systemd[1]: Stopped iscsid.service. May 16 00:33:55.430740 systemd[1]: iscsid.socket: Deactivated successfully. May 16 00:33:55.430826 systemd[1]: Closed iscsid.socket. May 16 00:33:55.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.431830 systemd[1]: Stopping iscsiuio.service... May 16 00:33:55.447000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.436331 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 16 00:33:55.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.436781 systemd[1]: iscsiuio.service: Deactivated successfully. May 16 00:33:55.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:55.436865 systemd[1]: Stopped iscsiuio.service. May 16 00:33:55.438369 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 16 00:33:55.438444 systemd[1]: Finished initrd-cleanup.service. May 16 00:33:55.439962 systemd[1]: ignition-mount.service: Deactivated successfully. May 16 00:33:55.440039 systemd[1]: Stopped ignition-mount.service. May 16 00:33:55.441498 systemd[1]: sysroot-boot.service: Deactivated successfully. May 16 00:33:55.441575 systemd[1]: Stopped sysroot-boot.service. May 16 00:33:55.443484 systemd[1]: Stopped target network.target. May 16 00:33:55.444307 systemd[1]: iscsiuio.socket: Deactivated successfully. May 16 00:33:55.444343 systemd[1]: Closed iscsiuio.socket. May 16 00:33:55.445493 systemd[1]: ignition-disks.service: Deactivated successfully. May 16 00:33:55.445533 systemd[1]: Stopped ignition-disks.service. May 16 00:33:55.446999 systemd[1]: ignition-kargs.service: Deactivated successfully. May 16 00:33:55.463000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.447036 systemd[1]: Stopped ignition-kargs.service. May 16 00:33:55.465000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.448280 systemd[1]: ignition-setup.service: Deactivated successfully. May 16 00:33:55.448317 systemd[1]: Stopped ignition-setup.service. May 16 00:33:55.449581 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 16 00:33:55.469000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.449622 systemd[1]: Stopped initrd-setup-root.service. May 16 00:33:55.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.451135 systemd[1]: Stopping systemd-networkd.service... May 16 00:33:55.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.452747 systemd[1]: Stopping systemd-resolved.service... May 16 00:33:55.462717 systemd-networkd[737]: eth0: DHCPv6 lease lost May 16 00:33:55.463025 systemd[1]: systemd-resolved.service: Deactivated successfully. May 16 00:33:55.476000 audit: BPF prog-id=6 op=UNLOAD May 16 00:33:55.463123 systemd[1]: Stopped systemd-resolved.service. May 16 00:33:55.464523 systemd[1]: systemd-networkd.service: Deactivated successfully. May 16 00:33:55.478000 audit: BPF prog-id=9 op=UNLOAD May 16 00:33:55.464618 systemd[1]: Stopped systemd-networkd.service. May 16 00:33:55.465925 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 16 00:33:55.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.465960 systemd[1]: Closed systemd-networkd.socket. 
May 16 00:33:55.481000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.467610 systemd[1]: Stopping network-cleanup.service... May 16 00:33:55.468309 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 16 00:33:55.468376 systemd[1]: Stopped parse-ip-for-networkd.service. May 16 00:33:55.485000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.469801 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:33:55.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.469841 systemd[1]: Stopped systemd-sysctl.service. May 16 00:33:55.488000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.472001 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 16 00:33:55.472043 systemd[1]: Stopped systemd-modules-load.service. May 16 00:33:55.473047 systemd[1]: Stopping systemd-udevd.service... May 16 00:33:55.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.475047 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 16 00:33:55.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.479000 systemd[1]: systemd-udevd.service: Deactivated successfully. May 16 00:33:55.496000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.479150 systemd[1]: Stopped systemd-udevd.service. May 16 00:33:55.480555 systemd[1]: network-cleanup.service: Deactivated successfully. May 16 00:33:55.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:55.480636 systemd[1]: Stopped network-cleanup.service. May 16 00:33:55.482035 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 16 00:33:55.482076 systemd[1]: Closed systemd-udevd-control.socket. May 16 00:33:55.483269 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 16 00:33:55.483303 systemd[1]: Closed systemd-udevd-kernel.socket. May 16 00:33:55.484627 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 16 00:33:55.484669 systemd[1]: Stopped dracut-pre-udev.service. 
May 16 00:33:55.486178 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 16 00:33:55.486238 systemd[1]: Stopped dracut-cmdline.service. May 16 00:33:55.487568 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 16 00:33:55.487608 systemd[1]: Stopped dracut-cmdline-ask.service. May 16 00:33:55.489789 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 16 00:33:55.491495 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 16 00:33:55.491554 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 16 00:33:55.493654 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 16 00:33:55.493693 systemd[1]: Stopped kmod-static-nodes.service. May 16 00:33:55.494530 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 16 00:33:55.494575 systemd[1]: Stopped systemd-vconsole-setup.service. May 16 00:33:55.497013 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 16 00:33:55.497424 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 16 00:33:55.497505 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 16 00:33:55.498807 systemd[1]: Reached target initrd-switch-root.target. May 16 00:33:55.500918 systemd[1]: Starting initrd-switch-root.service... May 16 00:33:55.507497 systemd[1]: Switching root. May 16 00:33:55.526542 systemd-journald[290]: Journal stopped May 16 00:33:57.536325 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). May 16 00:33:57.536387 kernel: SELinux: Class mctp_socket not defined in policy. May 16 00:33:57.536405 kernel: SELinux: Class anon_inode not defined in policy. May 16 00:33:57.536416 kernel: SELinux: the above unknown classes and permissions will be allowed May 16 00:33:57.536429 kernel: SELinux: policy capability network_peer_controls=1 May 16 00:33:57.536438 kernel: SELinux: policy capability open_perms=1 May 16 00:33:57.536451 kernel: SELinux: policy capability extended_socket_class=1 May 16 00:33:57.536464 kernel: SELinux: policy capability always_check_network=0 May 16 00:33:57.536549 kernel: SELinux: policy capability cgroup_seclabel=1 May 16 00:33:57.536564 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 16 00:33:57.536574 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 16 00:33:57.536586 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 16 00:33:57.536599 systemd[1]: Successfully loaded SELinux policy in 33.270ms. May 16 00:33:57.536611 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.786ms. May 16 00:33:57.536623 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 16 00:33:57.536634 systemd[1]: Detected virtualization kvm. May 16 00:33:57.536649 systemd[1]: Detected architecture arm64. May 16 00:33:57.536660 systemd[1]: Detected first boot. May 16 00:33:57.536670 systemd[1]: Initializing machine ID from VM UUID. May 16 00:33:57.536680 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). May 16 00:33:57.536690 systemd[1]: Populated /etc with preset unit settings. 
May 16 00:33:57.536701 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:33:57.536714 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:33:57.536735 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:33:57.536748 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 16 00:33:57.536758 systemd[1]: Stopped initrd-switch-root.service. May 16 00:33:57.536769 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 16 00:33:57.536780 systemd[1]: Created slice system-addon\x2dconfig.slice. May 16 00:33:57.536790 systemd[1]: Created slice system-addon\x2drun.slice. May 16 00:33:57.536802 systemd[1]: Created slice system-getty.slice. May 16 00:33:57.536812 systemd[1]: Created slice system-modprobe.slice. May 16 00:33:57.536822 systemd[1]: Created slice system-serial\x2dgetty.slice. May 16 00:33:57.536833 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 16 00:33:57.536844 systemd[1]: Created slice system-systemd\x2dfsck.slice. May 16 00:33:57.536854 systemd[1]: Created slice user.slice. May 16 00:33:57.536869 systemd[1]: Started systemd-ask-password-console.path. May 16 00:33:57.536879 systemd[1]: Started systemd-ask-password-wall.path. May 16 00:33:57.536890 systemd[1]: Set up automount boot.automount. May 16 00:33:57.536902 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 16 00:33:57.536912 systemd[1]: Stopped target initrd-switch-root.target. May 16 00:33:57.536922 systemd[1]: Stopped target initrd-fs.target. May 16 00:33:57.536932 systemd[1]: Stopped target initrd-root-fs.target. May 16 00:33:57.536943 systemd[1]: Reached target integritysetup.target. May 16 00:33:57.536953 systemd[1]: Reached target remote-cryptsetup.target. May 16 00:33:57.536963 systemd[1]: Reached target remote-fs.target. May 16 00:33:57.536974 systemd[1]: Reached target slices.target. May 16 00:33:57.536985 systemd[1]: Reached target swap.target. May 16 00:33:57.536995 systemd[1]: Reached target torcx.target. May 16 00:33:57.537006 systemd[1]: Reached target veritysetup.target. May 16 00:33:57.537016 systemd[1]: Listening on systemd-coredump.socket. May 16 00:33:57.537026 systemd[1]: Listening on systemd-initctl.socket. May 16 00:33:57.537037 systemd[1]: Listening on systemd-networkd.socket. May 16 00:33:57.537047 systemd[1]: Listening on systemd-udevd-control.socket. May 16 00:33:57.537057 systemd[1]: Listening on systemd-udevd-kernel.socket. May 16 00:33:57.537068 systemd[1]: Listening on systemd-userdbd.socket. May 16 00:33:57.537078 systemd[1]: Mounting dev-hugepages.mount... May 16 00:33:57.537089 systemd[1]: Mounting dev-mqueue.mount... May 16 00:33:57.537099 systemd[1]: Mounting media.mount... May 16 00:33:57.537110 systemd[1]: Mounting sys-kernel-debug.mount... May 16 00:33:57.537120 systemd[1]: Mounting sys-kernel-tracing.mount... May 16 00:33:57.537130 systemd[1]: Mounting tmp.mount... May 16 00:33:57.537140 systemd[1]: Starting flatcar-tmpfiles.service... May 16 00:33:57.537150 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. 
May 16 00:33:57.537160 systemd[1]: Starting kmod-static-nodes.service... May 16 00:33:57.537170 systemd[1]: Starting modprobe@configfs.service... May 16 00:33:57.537192 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:33:57.537206 systemd[1]: Starting modprobe@drm.service... May 16 00:33:57.537216 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:33:57.537226 systemd[1]: Starting modprobe@fuse.service... May 16 00:33:57.537237 systemd[1]: Starting modprobe@loop.service... May 16 00:33:57.537247 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 16 00:33:57.537257 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 16 00:33:57.537267 systemd[1]: Stopped systemd-fsck-root.service. May 16 00:33:57.537277 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 16 00:33:57.537289 systemd[1]: Stopped systemd-fsck-usr.service. May 16 00:33:57.537300 systemd[1]: Stopped systemd-journald.service. May 16 00:33:57.537310 kernel: fuse: init (API version 7.34) May 16 00:33:57.537320 kernel: loop: module loaded May 16 00:33:57.537330 systemd[1]: Starting systemd-journald.service... May 16 00:33:57.537342 systemd[1]: Starting systemd-modules-load.service... May 16 00:33:57.537352 systemd[1]: Starting systemd-network-generator.service... May 16 00:33:57.537362 systemd[1]: Starting systemd-remount-fs.service... May 16 00:33:57.537372 systemd[1]: Starting systemd-udev-trigger.service... May 16 00:33:57.537384 systemd[1]: verity-setup.service: Deactivated successfully. May 16 00:33:57.537394 systemd[1]: Stopped verity-setup.service. May 16 00:33:57.537404 systemd[1]: Mounted dev-hugepages.mount. May 16 00:33:57.537414 systemd[1]: Mounted dev-mqueue.mount. May 16 00:33:57.537428 systemd-journald[993]: Journal started May 16 00:33:57.537469 systemd-journald[993]: Runtime Journal (/run/log/journal/1143c62f0a4b4cd6989c15047d9a5799) is 6.0M, max 48.7M, 42.6M free. May 16 00:33:57.537502 systemd[1]: Mounted media.mount. 
May 16 00:33:55.584000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 16 00:33:55.684000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:33:55.684000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 16 00:33:55.685000 audit: BPF prog-id=10 op=LOAD May 16 00:33:55.685000 audit: BPF prog-id=10 op=UNLOAD May 16 00:33:55.685000 audit: BPF prog-id=11 op=LOAD May 16 00:33:55.685000 audit: BPF prog-id=11 op=UNLOAD May 16 00:33:55.727000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 16 00:33:55.727000 audit[929]: SYSCALL arch=c00000b7 syscall=5 success=yes exit=0 a0=40001c589c a1=40000c8de0 a2=40000cf0c0 a3=32 items=0 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:33:55.727000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:33:55.728000 audit[929]: AVC avc: denied { associate } for pid=929 comm="torcx-generator" name="lib" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 16 00:33:55.728000 audit[929]: SYSCALL arch=c00000b7 syscall=34 success=yes exit=0 a0=ffffffffffffff9c a1=40001c5975 a2=1ed a3=0 items=2 ppid=912 pid=929 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:33:55.728000 audit: CWD cwd="/" May 16 00:33:55.728000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:33:55.728000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 16 00:33:55.728000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 16 00:33:57.405000 audit: BPF prog-id=12 op=LOAD May 16 00:33:57.405000 audit: BPF prog-id=3 op=UNLOAD May 16 00:33:57.405000 audit: BPF prog-id=13 op=LOAD May 16 00:33:57.405000 audit: BPF prog-id=14 op=LOAD May 16 00:33:57.405000 audit: BPF prog-id=4 op=UNLOAD May 16 00:33:57.405000 audit: BPF prog-id=5 op=UNLOAD May 16 00:33:57.406000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
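The audit SYSCALL records above carry a PROCTITLE field as a single hex blob in which the process's argv entries are separated by NUL bytes (auditd caps it at 128 bytes, so the last argument is cut short). Decoding the value logged for the torcx-generator run is plain Python, nothing Flatcar-specific:

# Hex copied verbatim from the PROCTITLE record above (256 hex digits = 128 bytes).
hex_title = (
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F"
    "746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261"
    "746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72"
    "756E2F73797374656D642F67656E657261746F722E6C61"
)
argv = bytes.fromhex(hex_title).split(b"\x00")
print(" ".join(a.decode() for a in argv))
# /usr/lib/systemd/system-generators/torcx-generator /run/systemd/generator /run/systemd/generator.early /run/systemd/generator.la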
msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.409000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.409000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.419000 audit: BPF prog-id=12 op=UNLOAD May 16 00:33:57.508000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.513000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.514000 audit: BPF prog-id=15 op=LOAD May 16 00:33:57.514000 audit: BPF prog-id=16 op=LOAD May 16 00:33:57.514000 audit: BPF prog-id=17 op=LOAD May 16 00:33:57.514000 audit: BPF prog-id=13 op=UNLOAD May 16 00:33:57.514000 audit: BPF prog-id=14 op=UNLOAD May 16 00:33:57.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.533000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 16 00:33:57.533000 audit[993]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffe45cbb40 a2=4000 a3=1 items=0 ppid=1 pid=993 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:33:57.533000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 16 00:33:57.403785 systemd[1]: Queued start job for default target multi-user.target. May 16 00:33:55.726105 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:33:57.403797 systemd[1]: Unnecessary job was removed for dev-vda6.device. May 16 00:33:55.726359 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 16 00:33:57.406393 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 16 00:33:55.726377 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 16 00:33:55.726407 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 16 00:33:55.726416 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="skipped missing lower profile" missing profile=oem May 16 00:33:55.726443 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 16 00:33:55.726454 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 16 00:33:55.726642 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 16 00:33:55.726675 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 16 00:33:55.726687 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 16 00:33:55.727083 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 16 00:33:55.727117 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 16 00:33:55.727135 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 16 00:33:55.727149 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 16 00:33:55.727165 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 16 00:33:55.727178 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:55Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 16 00:33:57.149493 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:57Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:33:57.149773 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:57Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy 
/bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:33:57.149879 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:57Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:33:57.150047 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:57Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 16 00:33:57.150098 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:57Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 16 00:33:57.150161 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2025-05-16T00:33:57Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 16 00:33:57.540694 systemd[1]: Started systemd-journald.service. May 16 00:33:57.540000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.540834 systemd[1]: Mounted sys-kernel-debug.mount. May 16 00:33:57.541767 systemd[1]: Mounted sys-kernel-tracing.mount. May 16 00:33:57.542680 systemd[1]: Mounted tmp.mount. May 16 00:33:57.543633 systemd[1]: Finished kmod-static-nodes.service. May 16 00:33:57.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.544750 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 16 00:33:57.544921 systemd[1]: Finished modprobe@configfs.service. May 16 00:33:57.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.545000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.546042 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:33:57.546211 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:33:57.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:57.547321 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:33:57.547480 systemd[1]: Finished modprobe@drm.service. May 16 00:33:57.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.548000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.548519 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:33:57.548682 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:33:57.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.549000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.550007 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 16 00:33:57.550160 systemd[1]: Finished modprobe@fuse.service. May 16 00:33:57.550000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.550000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.551241 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:33:57.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.552322 systemd[1]: Finished modprobe@loop.service. May 16 00:33:57.553447 systemd[1]: Finished systemd-modules-load.service. May 16 00:33:57.554591 systemd[1]: Finished systemd-network-generator.service. May 16 00:33:57.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.555792 systemd[1]: Finished systemd-remount-fs.service. May 16 00:33:57.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:57.557226 systemd[1]: Finished flatcar-tmpfiles.service. May 16 00:33:57.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.558288 systemd[1]: Reached target network-pre.target. May 16 00:33:57.560396 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 16 00:33:57.562651 systemd[1]: Mounting sys-kernel-config.mount... May 16 00:33:57.563429 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 16 00:33:57.568516 systemd[1]: Starting systemd-hwdb-update.service... May 16 00:33:57.570559 systemd[1]: Starting systemd-journal-flush.service... May 16 00:33:57.572801 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:33:57.573990 systemd[1]: Starting systemd-random-seed.service... May 16 00:33:57.574965 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:33:57.576141 systemd[1]: Starting systemd-sysctl.service... May 16 00:33:57.578763 systemd[1]: Starting systemd-sysusers.service... May 16 00:33:57.581508 systemd-journald[993]: Time spent on flushing to /var/log/journal/1143c62f0a4b4cd6989c15047d9a5799 is 17.427ms for 997 entries. May 16 00:33:57.581508 systemd-journald[993]: System Journal (/var/log/journal/1143c62f0a4b4cd6989c15047d9a5799) is 8.0M, max 195.6M, 187.6M free. May 16 00:33:57.610429 systemd-journald[993]: Received client request to flush runtime journal. May 16 00:33:57.584000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.595000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.582213 systemd[1]: Finished systemd-udev-trigger.service. May 16 00:33:57.584594 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 16 00:33:57.611083 udevadm[1031]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 16 00:33:57.586427 systemd[1]: Mounted sys-kernel-config.mount. May 16 00:33:57.587590 systemd[1]: Finished systemd-random-seed.service. May 16 00:33:57.588671 systemd[1]: Reached target first-boot-complete.target. May 16 00:33:57.591056 systemd[1]: Starting systemd-udev-settle.service... May 16 00:33:57.594532 systemd[1]: Finished systemd-sysctl.service. May 16 00:33:57.606503 systemd[1]: Finished systemd-sysusers.service. May 16 00:33:57.608692 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
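For scale, journald's own figures above work out to roughly 17.5 µs per entry: 17.427 ms spent flushing 997 runtime-journal entries to /var/log/journal, with the system journal capped at 195.6M and 187.6M still free.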
May 16 00:33:57.611985 systemd[1]: Finished systemd-journal-flush.service. May 16 00:33:57.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.631793 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 16 00:33:57.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.976554 systemd[1]: Finished systemd-hwdb-update.service. May 16 00:33:57.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:57.977000 audit: BPF prog-id=18 op=LOAD May 16 00:33:57.978000 audit: BPF prog-id=19 op=LOAD May 16 00:33:57.978000 audit: BPF prog-id=7 op=UNLOAD May 16 00:33:57.978000 audit: BPF prog-id=8 op=UNLOAD May 16 00:33:57.978971 systemd[1]: Starting systemd-udevd.service... May 16 00:33:57.998871 systemd-udevd[1035]: Using default interface naming scheme 'v252'. May 16 00:33:58.013721 systemd[1]: Started systemd-udevd.service. May 16 00:33:58.014000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.015000 audit: BPF prog-id=20 op=LOAD May 16 00:33:58.018309 systemd[1]: Starting systemd-networkd.service... May 16 00:33:58.027000 audit: BPF prog-id=21 op=LOAD May 16 00:33:58.027000 audit: BPF prog-id=22 op=LOAD May 16 00:33:58.027000 audit: BPF prog-id=23 op=LOAD May 16 00:33:58.028658 systemd[1]: Starting systemd-userdbd.service... May 16 00:33:58.041530 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. May 16 00:33:58.067515 systemd[1]: Started systemd-userdbd.service. May 16 00:33:58.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.089905 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 16 00:33:58.131576 systemd[1]: Finished systemd-udev-settle.service. May 16 00:33:58.132000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.133892 systemd[1]: Starting lvm2-activation-early.service... May 16 00:33:58.138349 systemd-networkd[1045]: lo: Link UP May 16 00:33:58.138358 systemd-networkd[1045]: lo: Gained carrier May 16 00:33:58.139063 systemd-networkd[1045]: Enumeration completed May 16 00:33:58.139191 systemd[1]: Started systemd-networkd.service. May 16 00:33:58.139205 systemd-networkd[1045]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 16 00:33:58.141000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:58.143220 systemd-networkd[1045]: eth0: Link UP May 16 00:33:58.143331 systemd-networkd[1045]: eth0: Gained carrier May 16 00:33:58.145504 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:33:58.162349 systemd-networkd[1045]: eth0: DHCPv4 address 10.0.0.31/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 16 00:33:58.180136 systemd[1]: Finished lvm2-activation-early.service. May 16 00:33:58.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.181222 systemd[1]: Reached target cryptsetup.target. May 16 00:33:58.183271 systemd[1]: Starting lvm2-activation.service... May 16 00:33:58.186944 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 16 00:33:58.221145 systemd[1]: Finished lvm2-activation.service. May 16 00:33:58.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.222140 systemd[1]: Reached target local-fs-pre.target. May 16 00:33:58.223041 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 16 00:33:58.223078 systemd[1]: Reached target local-fs.target. May 16 00:33:58.223881 systemd[1]: Reached target machines.target. May 16 00:33:58.225972 systemd[1]: Starting ldconfig.service... May 16 00:33:58.227039 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:33:58.227091 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:33:58.228327 systemd[1]: Starting systemd-boot-update.service... May 16 00:33:58.230342 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... May 16 00:33:58.232810 systemd[1]: Starting systemd-machine-id-commit.service... May 16 00:33:58.236043 systemd[1]: Starting systemd-sysext.service... May 16 00:33:58.239787 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) May 16 00:33:58.241486 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... May 16 00:33:58.253838 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. May 16 00:33:58.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.258556 systemd[1]: Unmounting usr-share-oem.mount... May 16 00:33:58.262929 systemd[1]: usr-share-oem.mount: Deactivated successfully. May 16 00:33:58.263161 systemd[1]: Unmounted usr-share-oem.mount. May 16 00:33:58.311200 kernel: loop0: detected capacity change from 0 to 203944 May 16 00:33:58.322515 systemd[1]: Finished systemd-machine-id-commit.service. May 16 00:33:58.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:58.327235 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 16 00:33:58.333665 systemd-fsck[1080]: fsck.fat 4.2 (2021-01-31) May 16 00:33:58.333665 systemd-fsck[1080]: /dev/vda1: 236 files, 117310/258078 clusters May 16 00:33:58.335398 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. May 16 00:33:58.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.362225 kernel: loop1: detected capacity change from 0 to 203944 May 16 00:33:58.369260 (sd-sysext)[1083]: Using extensions 'kubernetes'. May 16 00:33:58.369623 (sd-sysext)[1083]: Merged extensions into '/usr'. May 16 00:33:58.386477 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:33:58.388073 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:33:58.390388 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:33:58.392627 systemd[1]: Starting modprobe@loop.service... May 16 00:33:58.393563 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:33:58.393710 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:33:58.394543 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:33:58.394687 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:33:58.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.396217 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:33:58.396345 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:33:58.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.397825 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:33:58.397951 systemd[1]: Finished modprobe@loop.service. May 16 00:33:58.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:58.399431 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:33:58.399542 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:33:58.457642 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 16 00:33:58.462116 systemd[1]: Finished ldconfig.service. May 16 00:33:58.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.533487 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 16 00:33:58.535433 systemd[1]: Mounting boot.mount... May 16 00:33:58.537433 systemd[1]: Mounting usr-share-oem.mount... May 16 00:33:58.544331 systemd[1]: Mounted boot.mount. May 16 00:33:58.545341 systemd[1]: Mounted usr-share-oem.mount. May 16 00:33:58.547421 systemd[1]: Finished systemd-sysext.service. May 16 00:33:58.548000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.549790 systemd[1]: Starting ensure-sysext.service... May 16 00:33:58.552015 systemd[1]: Starting systemd-tmpfiles-setup.service... May 16 00:33:58.555080 systemd[1]: Finished systemd-boot-update.service. May 16 00:33:58.555000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.557721 systemd[1]: Reloading. May 16 00:33:58.561557 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. May 16 00:33:58.562403 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 16 00:33:58.563704 systemd-tmpfiles[1091]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 16 00:33:58.597530 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-05-16T00:33:58Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:33:58.597557 /usr/lib/systemd/system-generators/torcx-generator[1111]: time="2025-05-16T00:33:58Z" level=info msg="torcx already run" May 16 00:33:58.659946 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:33:58.659963 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:33:58.676739 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 16 00:33:58.717000 audit: BPF prog-id=24 op=LOAD May 16 00:33:58.717000 audit: BPF prog-id=25 op=LOAD May 16 00:33:58.717000 audit: BPF prog-id=18 op=UNLOAD May 16 00:33:58.717000 audit: BPF prog-id=19 op=UNLOAD May 16 00:33:58.719000 audit: BPF prog-id=26 op=LOAD May 16 00:33:58.719000 audit: BPF prog-id=15 op=UNLOAD May 16 00:33:58.719000 audit: BPF prog-id=27 op=LOAD May 16 00:33:58.719000 audit: BPF prog-id=28 op=LOAD May 16 00:33:58.719000 audit: BPF prog-id=16 op=UNLOAD May 16 00:33:58.720000 audit: BPF prog-id=17 op=UNLOAD May 16 00:33:58.720000 audit: BPF prog-id=29 op=LOAD May 16 00:33:58.720000 audit: BPF prog-id=21 op=UNLOAD May 16 00:33:58.720000 audit: BPF prog-id=30 op=LOAD May 16 00:33:58.720000 audit: BPF prog-id=31 op=LOAD May 16 00:33:58.720000 audit: BPF prog-id=22 op=UNLOAD May 16 00:33:58.720000 audit: BPF prog-id=23 op=UNLOAD May 16 00:33:58.721000 audit: BPF prog-id=32 op=LOAD May 16 00:33:58.721000 audit: BPF prog-id=20 op=UNLOAD May 16 00:33:58.723846 systemd[1]: Finished systemd-tmpfiles-setup.service. May 16 00:33:58.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.729489 systemd[1]: Starting audit-rules.service... May 16 00:33:58.731670 systemd[1]: Starting clean-ca-certificates.service... May 16 00:33:58.737000 audit: BPF prog-id=33 op=LOAD May 16 00:33:58.734402 systemd[1]: Starting systemd-journal-catalog-update.service... May 16 00:33:58.738995 systemd[1]: Starting systemd-resolved.service... May 16 00:33:58.741000 audit: BPF prog-id=34 op=LOAD May 16 00:33:58.742571 systemd[1]: Starting systemd-timesyncd.service... May 16 00:33:58.744857 systemd[1]: Starting systemd-update-utmp.service... May 16 00:33:58.751831 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:33:58.756000 audit[1161]: SYSTEM_BOOT pid=1161 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' May 16 00:33:58.753254 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:33:58.756531 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:33:58.758745 systemd[1]: Starting modprobe@loop.service... May 16 00:33:58.759659 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:33:58.759809 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:33:58.760680 systemd[1]: Finished clean-ca-certificates.service. May 16 00:33:58.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.762204 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:33:58.762341 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:33:58.763000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:58.763000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.763682 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:33:58.763817 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:33:58.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.765179 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:33:58.765458 systemd[1]: Finished modprobe@loop.service. May 16 00:33:58.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.766000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.769880 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:33:58.771412 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:33:58.773767 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:33:58.775876 systemd[1]: Starting modprobe@loop.service... May 16 00:33:58.776774 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:33:58.776910 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:33:58.777008 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:33:58.777935 systemd[1]: Finished systemd-journal-catalog-update.service. May 16 00:33:58.779000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.779555 systemd[1]: Finished systemd-update-utmp.service. May 16 00:33:58.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.780898 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:33:58.781022 systemd[1]: Finished modprobe@dm_mod.service. May 16 00:33:58.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 16 00:33:58.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.782289 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:33:58.782412 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:33:58.783000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.784004 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:33:58.784127 systemd[1]: Finished modprobe@loop.service. May 16 00:33:58.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.784000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 16 00:33:58.788175 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 16 00:33:58.789776 systemd[1]: Starting modprobe@dm_mod.service... May 16 00:33:58.791989 systemd[1]: Starting modprobe@drm.service... May 16 00:33:58.792000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 May 16 00:33:58.792000 audit[1176]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe102fd20 a2=420 a3=0 items=0 ppid=1150 pid=1176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) May 16 00:33:58.792000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 May 16 00:33:58.792722 augenrules[1176]: No rules May 16 00:33:58.794118 systemd[1]: Starting modprobe@efi_pstore.service... May 16 00:33:58.796299 systemd[1]: Starting modprobe@loop.service... May 16 00:33:58.797091 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. May 16 00:33:58.797247 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:33:58.798559 systemd[1]: Starting systemd-networkd-wait-online.service... May 16 00:33:58.800947 systemd[1]: Starting systemd-update-done.service... May 16 00:33:58.801889 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 16 00:33:58.803257 systemd[1]: Finished audit-rules.service. May 16 00:33:58.804531 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 16 00:33:58.804673 systemd[1]: Finished modprobe@dm_mod.service. 
May 16 00:33:58.806023 systemd[1]: modprobe@drm.service: Deactivated successfully. May 16 00:33:58.806140 systemd[1]: Finished modprobe@drm.service. May 16 00:33:58.807554 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 16 00:33:58.807666 systemd[1]: Finished modprobe@efi_pstore.service. May 16 00:33:58.808999 systemd[1]: modprobe@loop.service: Deactivated successfully. May 16 00:33:58.809122 systemd[1]: Finished modprobe@loop.service. May 16 00:33:58.810718 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 16 00:33:58.810803 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 16 00:33:58.811609 systemd[1]: Started systemd-timesyncd.service. May 16 00:33:58.812612 systemd-timesyncd[1160]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 16 00:33:58.813063 systemd-timesyncd[1160]: Initial clock synchronization to Fri 2025-05-16 00:33:59.042422 UTC. May 16 00:33:58.813357 systemd[1]: Finished ensure-sysext.service. May 16 00:33:58.814566 systemd[1]: Finished systemd-update-done.service. May 16 00:33:58.815975 systemd[1]: Reached target time-set.target. May 16 00:33:58.818446 systemd-resolved[1154]: Positive Trust Anchors: May 16 00:33:58.818753 systemd-resolved[1154]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 16 00:33:58.818843 systemd-resolved[1154]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test May 16 00:33:58.829641 systemd-resolved[1154]: Defaulting to hostname 'linux'. May 16 00:33:58.831307 systemd[1]: Started systemd-resolved.service. May 16 00:33:58.832222 systemd[1]: Reached target network.target. May 16 00:33:58.833003 systemd[1]: Reached target nss-lookup.target. May 16 00:33:58.833869 systemd[1]: Reached target sysinit.target. May 16 00:33:58.834759 systemd[1]: Started motdgen.path. May 16 00:33:58.835517 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. May 16 00:33:58.836791 systemd[1]: Started logrotate.timer. May 16 00:33:58.837625 systemd[1]: Started mdadm.timer. May 16 00:33:58.838382 systemd[1]: Started systemd-tmpfiles-clean.timer. May 16 00:33:58.839221 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 16 00:33:58.839255 systemd[1]: Reached target paths.target. May 16 00:33:58.839991 systemd[1]: Reached target timers.target. May 16 00:33:58.841129 systemd[1]: Listening on dbus.socket. May 16 00:33:58.843062 systemd[1]: Starting docker.socket... May 16 00:33:58.847809 systemd[1]: Listening on sshd.socket. May 16 00:33:58.848705 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:33:58.849231 systemd[1]: Listening on docker.socket. May 16 00:33:58.850079 systemd[1]: Reached target sockets.target. May 16 00:33:58.850884 systemd[1]: Reached target basic.target. 
May 16 00:33:58.851700 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:33:58.851742 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. May 16 00:33:58.852892 systemd[1]: Starting containerd.service... May 16 00:33:58.854879 systemd[1]: Starting dbus.service... May 16 00:33:58.856791 systemd[1]: Starting enable-oem-cloudinit.service... May 16 00:33:58.859166 systemd[1]: Starting extend-filesystems.service... May 16 00:33:58.860162 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). May 16 00:33:58.861608 systemd[1]: Starting motdgen.service... May 16 00:33:58.864277 systemd[1]: Starting prepare-helm.service... May 16 00:33:58.866551 systemd[1]: Starting ssh-key-proc-cmdline.service... May 16 00:33:58.868758 systemd[1]: Starting sshd-keygen.service... May 16 00:33:58.873513 jq[1193]: false May 16 00:33:58.871923 systemd[1]: Starting systemd-logind.service... May 16 00:33:58.873437 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). May 16 00:33:58.873514 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 16 00:33:58.874013 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 16 00:33:58.874859 systemd[1]: Starting update-engine.service... May 16 00:33:58.876943 systemd[1]: Starting update-ssh-keys-after-ignition.service... May 16 00:33:58.880369 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 16 00:33:58.881582 jq[1210]: true May 16 00:33:58.880554 systemd[1]: Finished ssh-key-proc-cmdline.service. May 16 00:33:58.883349 extend-filesystems[1194]: Found loop1 May 16 00:33:58.884689 extend-filesystems[1194]: Found vda May 16 00:33:58.884689 extend-filesystems[1194]: Found vda1 May 16 00:33:58.884689 extend-filesystems[1194]: Found vda2 May 16 00:33:58.884689 extend-filesystems[1194]: Found vda3 May 16 00:33:58.884689 extend-filesystems[1194]: Found usr May 16 00:33:58.884689 extend-filesystems[1194]: Found vda4 May 16 00:33:58.884689 extend-filesystems[1194]: Found vda6 May 16 00:33:58.884689 extend-filesystems[1194]: Found vda7 May 16 00:33:58.884689 extend-filesystems[1194]: Found vda9 May 16 00:33:58.884689 extend-filesystems[1194]: Checking size of /dev/vda9 May 16 00:33:58.901452 tar[1212]: linux-arm64/helm May 16 00:33:58.901683 jq[1213]: true May 16 00:33:58.884785 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 16 00:33:58.884970 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. May 16 00:33:58.894166 systemd[1]: motdgen.service: Deactivated successfully. May 16 00:33:58.894380 systemd[1]: Finished motdgen.service. May 16 00:33:58.916118 extend-filesystems[1194]: Resized partition /dev/vda9 May 16 00:33:58.927416 extend-filesystems[1235]: resize2fs 1.46.5 (30-Dec-2021) May 16 00:33:58.940105 dbus-daemon[1192]: [system] SELinux support is enabled May 16 00:33:58.940285 systemd[1]: Started dbus.service. 
May 16 00:33:58.942917 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 16 00:33:58.942962 systemd[1]: Reached target system-config.target. May 16 00:33:58.944001 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 16 00:33:58.944025 systemd[1]: Reached target user-config.target. May 16 00:33:58.951200 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 16 00:33:58.952261 systemd-logind[1206]: Watching system buttons on /dev/input/event0 (Power Button) May 16 00:33:58.953430 systemd-logind[1206]: New seat seat0. May 16 00:33:58.957563 systemd[1]: Started systemd-logind.service. May 16 00:33:58.981019 env[1214]: time="2025-05-16T00:33:58.980962880Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 May 16 00:33:58.985758 update_engine[1209]: I0516 00:33:58.985538 1209 main.cc:92] Flatcar Update Engine starting May 16 00:33:58.993227 update_engine[1209]: I0516 00:33:58.987727 1209 update_check_scheduler.cc:74] Next update check in 6m38s May 16 00:33:58.987716 systemd[1]: Started update-engine.service. May 16 00:33:58.990513 systemd[1]: Started locksmithd.service. May 16 00:33:58.995243 bash[1243]: Updated "/home/core/.ssh/authorized_keys" May 16 00:33:58.996101 systemd[1]: Finished update-ssh-keys-after-ignition.service. May 16 00:33:59.000206 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 16 00:33:59.006791 env[1214]: time="2025-05-16T00:33:59.006230355Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 May 16 00:33:59.019989 extend-filesystems[1235]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 16 00:33:59.019989 extend-filesystems[1235]: old_desc_blocks = 1, new_desc_blocks = 1 May 16 00:33:59.019989 extend-filesystems[1235]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 16 00:33:59.017662 systemd[1]: extend-filesystems.service: Deactivated successfully. May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.017063163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.024491591Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.181-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.024540806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.024779144Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.024796550Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.024809512Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.024818812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.024886955Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.025097229Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 16 00:33:59.026375 env[1214]: time="2025-05-16T00:33:59.025377786Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 16 00:33:59.026701 extend-filesystems[1194]: Resized filesystem in /dev/vda9 May 16 00:33:59.017833 systemd[1]: Finished extend-filesystems.service. May 16 00:33:59.028266 env[1214]: time="2025-05-16T00:33:59.025400336Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 16 00:33:59.028266 env[1214]: time="2025-05-16T00:33:59.025474364Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" May 16 00:33:59.028266 env[1214]: time="2025-05-16T00:33:59.025488890Z" level=info msg="metadata content store policy set" policy=shared May 16 00:33:59.032579 env[1214]: time="2025-05-16T00:33:59.032437022Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 16 00:33:59.032579 env[1214]: time="2025-05-16T00:33:59.032494795Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 16 00:33:59.032579 env[1214]: time="2025-05-16T00:33:59.032510720Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 16 00:33:59.032579 env[1214]: time="2025-05-16T00:33:59.032552775Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 16 00:33:59.032579 env[1214]: time="2025-05-16T00:33:59.032570963Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 16 00:33:59.032579 env[1214]: time="2025-05-16T00:33:59.032585201Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 16 00:33:59.032793 env[1214]: time="2025-05-16T00:33:59.032598657Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 16 00:33:59.033042 env[1214]: time="2025-05-16T00:33:59.032978837Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 16 00:33:59.033042 env[1214]: time="2025-05-16T00:33:59.033007477Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 May 16 00:33:59.033042 env[1214]: time="2025-05-16T00:33:59.033023402Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1 May 16 00:33:59.033042 env[1214]: time="2025-05-16T00:33:59.033036611Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 16 00:33:59.033147 env[1214]: time="2025-05-16T00:33:59.033051342Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 16 00:33:59.033243 env[1214]: time="2025-05-16T00:33:59.033202320Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 16 00:33:59.033343 env[1214]: time="2025-05-16T00:33:59.033312024Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 16 00:33:59.033560 env[1214]: time="2025-05-16T00:33:59.033540980Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 16 00:33:59.033603 env[1214]: time="2025-05-16T00:33:59.033570649Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 16 00:33:59.033603 env[1214]: time="2025-05-16T00:33:59.033586491Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 16 00:33:59.033726 env[1214]: time="2025-05-16T00:33:59.033709076Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 May 16 00:33:59.033726 env[1214]: time="2025-05-16T00:33:59.033725906Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 16 00:33:59.033859 env[1214]: time="2025-05-16T00:33:59.033738950Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 16 00:33:59.033859 env[1214]: time="2025-05-16T00:33:59.033750637Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 16 00:33:59.033859 env[1214]: time="2025-05-16T00:33:59.033762858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 May 16 00:33:59.033859 env[1214]: time="2025-05-16T00:33:59.033774997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 16 00:33:59.033859 env[1214]: time="2025-05-16T00:33:59.033788124Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 16 00:33:59.033859 env[1214]: time="2025-05-16T00:33:59.033801168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 16 00:33:59.033859 env[1214]: time="2025-05-16T00:33:59.033814830Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 16 00:33:59.034004 env[1214]: time="2025-05-16T00:33:59.033942229Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 16 00:33:59.034004 env[1214]: time="2025-05-16T00:33:59.033958565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 16 00:33:59.034004 env[1214]: time="2025-05-16T00:33:59.033970951Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 16 00:33:59.034004 env[1214]: time="2025-05-16T00:33:59.033982637Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
type=io.containerd.tracing.processor.v1 May 16 00:33:59.034004 env[1214]: time="2025-05-16T00:33:59.033998521Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 May 16 00:33:59.034099 env[1214]: time="2025-05-16T00:33:59.034010249Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 16 00:33:59.034099 env[1214]: time="2025-05-16T00:33:59.034029548Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" May 16 00:33:59.034099 env[1214]: time="2025-05-16T00:33:59.034065512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 16 00:33:59.034382 env[1214]: time="2025-05-16T00:33:59.034313109Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 16 00:33:59.034382 env[1214]: time="2025-05-16T00:33:59.034383310Z" level=info msg="Connect containerd service" May 16 00:33:59.035188 env[1214]: time="2025-05-16T00:33:59.034415777Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 16 00:33:59.035188 env[1214]: time="2025-05-16T00:33:59.035057668Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:33:59.035841 
env[1214]: time="2025-05-16T00:33:59.035433609Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 16 00:33:59.035841 env[1214]: time="2025-05-16T00:33:59.035420112Z" level=info msg="Start subscribing containerd event" May 16 00:33:59.035841 env[1214]: time="2025-05-16T00:33:59.035477557Z" level=info msg=serving... address=/run/containerd/containerd.sock May 16 00:33:59.035841 env[1214]: time="2025-05-16T00:33:59.035492206Z" level=info msg="Start recovering state" May 16 00:33:59.035841 env[1214]: time="2025-05-16T00:33:59.035524755Z" level=info msg="containerd successfully booted in 0.055254s" May 16 00:33:59.035841 env[1214]: time="2025-05-16T00:33:59.035559239Z" level=info msg="Start event monitor" May 16 00:33:59.035841 env[1214]: time="2025-05-16T00:33:59.035580554Z" level=info msg="Start snapshots syncer" May 16 00:33:59.035841 env[1214]: time="2025-05-16T00:33:59.035590430Z" level=info msg="Start cni network conf syncer for default" May 16 00:33:59.035841 env[1214]: time="2025-05-16T00:33:59.035597837Z" level=info msg="Start streaming server" May 16 00:33:59.035602 systemd[1]: Started containerd.service. May 16 00:33:59.060205 locksmithd[1247]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 16 00:33:59.320987 tar[1212]: linux-arm64/LICENSE May 16 00:33:59.321226 tar[1212]: linux-arm64/README.md May 16 00:33:59.325762 systemd[1]: Finished prepare-helm.service. May 16 00:33:59.797463 sshd_keygen[1218]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 16 00:33:59.808404 systemd-networkd[1045]: eth0: Gained IPv6LL May 16 00:33:59.810111 systemd[1]: Finished systemd-networkd-wait-online.service. May 16 00:33:59.811470 systemd[1]: Reached target network-online.target. May 16 00:33:59.814034 systemd[1]: Starting kubelet.service... May 16 00:33:59.818375 systemd[1]: Finished sshd-keygen.service. May 16 00:33:59.820815 systemd[1]: Starting issuegen.service... May 16 00:33:59.826037 systemd[1]: issuegen.service: Deactivated successfully. May 16 00:33:59.826284 systemd[1]: Finished issuegen.service. May 16 00:33:59.828996 systemd[1]: Starting systemd-user-sessions.service... May 16 00:33:59.836150 systemd[1]: Finished systemd-user-sessions.service. May 16 00:33:59.839355 systemd[1]: Started getty@tty1.service. May 16 00:33:59.842287 systemd[1]: Started serial-getty@ttyAMA0.service. May 16 00:33:59.843497 systemd[1]: Reached target getty.target. May 16 00:34:00.405731 systemd[1]: Created slice system-sshd.slice. May 16 00:34:00.408104 systemd[1]: Started sshd@0-10.0.0.31:22-10.0.0.1:46330.service. May 16 00:34:00.419376 systemd[1]: Started kubelet.service. May 16 00:34:00.421239 systemd[1]: Reached target multi-user.target. May 16 00:34:00.423973 systemd[1]: Starting systemd-update-utmp-runlevel.service... May 16 00:34:00.431271 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. May 16 00:34:00.431461 systemd[1]: Finished systemd-update-utmp-runlevel.service. May 16 00:34:00.432700 systemd[1]: Startup finished in 593ms (kernel) + 7.978s (initrd) + 4.883s (userspace) = 13.456s. May 16 00:34:00.465254 sshd[1272]: Accepted publickey for core from 10.0.0.1 port 46330 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:34:00.469242 sshd[1272]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:34:00.477800 systemd[1]: Created slice user-500.slice. May 16 00:34:00.479061 systemd[1]: Starting user-runtime-dir@500.service... 
May 16 00:34:00.481160 systemd-logind[1206]: New session 1 of user core. May 16 00:34:00.490022 systemd[1]: Finished user-runtime-dir@500.service. May 16 00:34:00.491721 systemd[1]: Starting user@500.service... May 16 00:34:00.496773 (systemd)[1279]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 16 00:34:00.565300 systemd[1279]: Queued start job for default target default.target. May 16 00:34:00.565838 systemd[1279]: Reached target paths.target. May 16 00:34:00.565873 systemd[1279]: Reached target sockets.target. May 16 00:34:00.565884 systemd[1279]: Reached target timers.target. May 16 00:34:00.565895 systemd[1279]: Reached target basic.target. May 16 00:34:00.565939 systemd[1279]: Reached target default.target. May 16 00:34:00.565963 systemd[1279]: Startup finished in 62ms. May 16 00:34:00.566484 systemd[1]: Started user@500.service. May 16 00:34:00.567566 systemd[1]: Started session-1.scope. May 16 00:34:00.624831 systemd[1]: Started sshd@1-10.0.0.31:22-10.0.0.1:46342.service. May 16 00:34:00.668169 sshd[1293]: Accepted publickey for core from 10.0.0.1 port 46342 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:34:00.669955 sshd[1293]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:34:00.674997 systemd[1]: Started session-2.scope. May 16 00:34:00.675523 systemd-logind[1206]: New session 2 of user core. May 16 00:34:00.736514 sshd[1293]: pam_unix(sshd:session): session closed for user core May 16 00:34:00.739717 systemd[1]: Started sshd@2-10.0.0.31:22-10.0.0.1:46350.service. May 16 00:34:00.742653 systemd[1]: sshd@1-10.0.0.31:22-10.0.0.1:46342.service: Deactivated successfully. May 16 00:34:00.743445 systemd[1]: session-2.scope: Deactivated successfully. May 16 00:34:00.744059 systemd-logind[1206]: Session 2 logged out. Waiting for processes to exit. May 16 00:34:00.744819 systemd-logind[1206]: Removed session 2. May 16 00:34:00.780769 sshd[1298]: Accepted publickey for core from 10.0.0.1 port 46350 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:34:00.782911 sshd[1298]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:34:00.786929 systemd-logind[1206]: New session 3 of user core. May 16 00:34:00.787423 systemd[1]: Started session-3.scope. May 16 00:34:00.841518 sshd[1298]: pam_unix(sshd:session): session closed for user core May 16 00:34:00.845561 systemd[1]: sshd@2-10.0.0.31:22-10.0.0.1:46350.service: Deactivated successfully. May 16 00:34:00.846168 systemd[1]: session-3.scope: Deactivated successfully. May 16 00:34:00.847667 systemd-logind[1206]: Session 3 logged out. Waiting for processes to exit. May 16 00:34:00.848172 systemd[1]: Started sshd@3-10.0.0.31:22-10.0.0.1:46358.service. May 16 00:34:00.849458 systemd-logind[1206]: Removed session 3. May 16 00:34:00.887540 sshd[1307]: Accepted publickey for core from 10.0.0.1 port 46358 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:34:00.889031 sshd[1307]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:34:00.892527 systemd-logind[1206]: New session 4 of user core. May 16 00:34:00.893440 systemd[1]: Started session-4.scope. 
May 16 00:34:00.942425 kubelet[1275]: E0516 00:34:00.941549 1275 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:34:00.943821 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:34:00.943964 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:34:00.949672 sshd[1307]: pam_unix(sshd:session): session closed for user core May 16 00:34:00.952916 systemd[1]: sshd@3-10.0.0.31:22-10.0.0.1:46358.service: Deactivated successfully. May 16 00:34:00.953572 systemd[1]: session-4.scope: Deactivated successfully. May 16 00:34:00.954159 systemd-logind[1206]: Session 4 logged out. Waiting for processes to exit. May 16 00:34:00.955356 systemd[1]: Started sshd@4-10.0.0.31:22-10.0.0.1:46360.service. May 16 00:34:00.956392 systemd-logind[1206]: Removed session 4. May 16 00:34:00.993998 sshd[1313]: Accepted publickey for core from 10.0.0.1 port 46360 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:34:00.995378 sshd[1313]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:34:00.999270 systemd-logind[1206]: New session 5 of user core. May 16 00:34:00.999762 systemd[1]: Started session-5.scope. May 16 00:34:01.062365 sudo[1316]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 16 00:34:01.062621 sudo[1316]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 16 00:34:01.121876 systemd[1]: Starting docker.service... May 16 00:34:01.206852 env[1327]: time="2025-05-16T00:34:01.206721838Z" level=info msg="Starting up" May 16 00:34:01.208764 env[1327]: time="2025-05-16T00:34:01.208724512Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 16 00:34:01.208899 env[1327]: time="2025-05-16T00:34:01.208883985Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 16 00:34:01.208969 env[1327]: time="2025-05-16T00:34:01.208952909Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 16 00:34:01.209027 env[1327]: time="2025-05-16T00:34:01.209014352Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 16 00:34:01.211063 env[1327]: time="2025-05-16T00:34:01.211031947Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 16 00:34:01.211063 env[1327]: time="2025-05-16T00:34:01.211066245Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 16 00:34:01.211166 env[1327]: time="2025-05-16T00:34:01.211085009Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 16 00:34:01.211166 env[1327]: time="2025-05-16T00:34:01.211094984Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 16 00:34:01.215688 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport653027276-merged.mount: Deactivated successfully. May 16 00:34:01.405256 env[1327]: time="2025-05-16T00:34:01.405191570Z" level=info msg="Loading containers: start." 
May 16 00:34:01.533231 kernel: Initializing XFRM netlink socket May 16 00:34:01.558485 env[1327]: time="2025-05-16T00:34:01.558441073Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 16 00:34:01.609540 systemd-networkd[1045]: docker0: Link UP May 16 00:34:01.629666 env[1327]: time="2025-05-16T00:34:01.629622520Z" level=info msg="Loading containers: done." May 16 00:34:01.646504 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3248605651-merged.mount: Deactivated successfully. May 16 00:34:01.651517 env[1327]: time="2025-05-16T00:34:01.651472920Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 16 00:34:01.651686 env[1327]: time="2025-05-16T00:34:01.651668735Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 16 00:34:01.651794 env[1327]: time="2025-05-16T00:34:01.651776986Z" level=info msg="Daemon has completed initialization" May 16 00:34:01.669490 systemd[1]: Started docker.service. May 16 00:34:01.674161 env[1327]: time="2025-05-16T00:34:01.674036267Z" level=info msg="API listen on /run/docker.sock" May 16 00:34:02.376088 env[1214]: time="2025-05-16T00:34:02.376016531Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 16 00:34:02.970630 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount243935253.mount: Deactivated successfully. May 16 00:34:04.239792 env[1214]: time="2025-05-16T00:34:04.239741673Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:04.241167 env[1214]: time="2025-05-16T00:34:04.241128010Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:04.242941 env[1214]: time="2025-05-16T00:34:04.242907458Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:04.244763 env[1214]: time="2025-05-16T00:34:04.244735693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:04.245583 env[1214]: time="2025-05-16T00:34:04.245551991Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\"" May 16 00:34:04.248712 env[1214]: time="2025-05-16T00:34:04.248678241Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 16 00:34:05.746838 env[1214]: time="2025-05-16T00:34:05.746769514Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:05.748052 env[1214]: time="2025-05-16T00:34:05.748008894Z" level=info msg="ImageCreate event 
&ImageCreate{Name:sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:05.750275 env[1214]: time="2025-05-16T00:34:05.750246546Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:05.752351 env[1214]: time="2025-05-16T00:34:05.752322823Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:05.753220 env[1214]: time="2025-05-16T00:34:05.753179126Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\"" May 16 00:34:05.753863 env[1214]: time="2025-05-16T00:34:05.753839211Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 16 00:34:07.082919 env[1214]: time="2025-05-16T00:34:07.082860171Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:07.084958 env[1214]: time="2025-05-16T00:34:07.084921647Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:07.087060 env[1214]: time="2025-05-16T00:34:07.087029821Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:07.089368 env[1214]: time="2025-05-16T00:34:07.089335040Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:07.090261 env[1214]: time="2025-05-16T00:34:07.090233384Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\"" May 16 00:34:07.090744 env[1214]: time="2025-05-16T00:34:07.090714529Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 16 00:34:08.221072 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount950331243.mount: Deactivated successfully. 
May 16 00:34:08.853823 env[1214]: time="2025-05-16T00:34:08.853762149Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:08.855015 env[1214]: time="2025-05-16T00:34:08.854977519Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:08.857880 env[1214]: time="2025-05-16T00:34:08.857827613Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:08.858915 env[1214]: time="2025-05-16T00:34:08.858862478Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:08.859467 env[1214]: time="2025-05-16T00:34:08.859438694Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\"" May 16 00:34:08.860171 env[1214]: time="2025-05-16T00:34:08.860130016Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 16 00:34:09.585028 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645711069.mount: Deactivated successfully. May 16 00:34:10.573384 env[1214]: time="2025-05-16T00:34:10.573331632Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:10.574929 env[1214]: time="2025-05-16T00:34:10.574888020Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:10.576830 env[1214]: time="2025-05-16T00:34:10.576795837Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:10.579273 env[1214]: time="2025-05-16T00:34:10.579235872Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:10.580070 env[1214]: time="2025-05-16T00:34:10.580035849Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" May 16 00:34:10.580705 env[1214]: time="2025-05-16T00:34:10.580664661Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 16 00:34:11.095745 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 16 00:34:11.095890 systemd[1]: Stopped kubelet.service. May 16 00:34:11.097580 systemd[1]: Starting kubelet.service... May 16 00:34:11.102360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2351512069.mount: Deactivated successfully. 
May 16 00:34:11.105828 env[1214]: time="2025-05-16T00:34:11.105782715Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:11.119287 env[1214]: time="2025-05-16T00:34:11.118606635Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:11.121107 env[1214]: time="2025-05-16T00:34:11.121067316Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:11.122960 env[1214]: time="2025-05-16T00:34:11.122915069Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:11.123593 env[1214]: time="2025-05-16T00:34:11.123563320Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 16 00:34:11.124085 env[1214]: time="2025-05-16T00:34:11.124054869Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 16 00:34:11.200472 systemd[1]: Started kubelet.service. May 16 00:34:11.275674 kubelet[1461]: E0516 00:34:11.275608 1461 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 16 00:34:11.278285 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 16 00:34:11.278432 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 16 00:34:11.623514 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2442314872.mount: Deactivated successfully. May 16 00:34:13.923737 env[1214]: time="2025-05-16T00:34:13.923683117Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:13.925179 env[1214]: time="2025-05-16T00:34:13.925145609Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:13.927010 env[1214]: time="2025-05-16T00:34:13.926978215Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:13.928965 env[1214]: time="2025-05-16T00:34:13.928939548Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:13.930641 env[1214]: time="2025-05-16T00:34:13.930605498Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 16 00:34:19.664975 systemd[1]: Stopped kubelet.service. 
May 16 00:34:19.666960 systemd[1]: Starting kubelet.service... May 16 00:34:19.690454 systemd[1]: Reloading. May 16 00:34:19.740098 /usr/lib/systemd/system-generators/torcx-generator[1518]: time="2025-05-16T00:34:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:34:19.740530 /usr/lib/systemd/system-generators/torcx-generator[1518]: time="2025-05-16T00:34:19Z" level=info msg="torcx already run" May 16 00:34:19.837166 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:34:19.837248 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:34:19.852832 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:34:19.918107 systemd[1]: Started kubelet.service. May 16 00:34:19.919710 systemd[1]: Stopping kubelet.service... May 16 00:34:19.919948 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:34:19.920118 systemd[1]: Stopped kubelet.service. May 16 00:34:19.921644 systemd[1]: Starting kubelet.service... May 16 00:34:20.016736 systemd[1]: Started kubelet.service. May 16 00:34:20.055386 kubelet[1563]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:34:20.055727 kubelet[1563]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 00:34:20.055785 kubelet[1563]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 16 00:34:20.055941 kubelet[1563]: I0516 00:34:20.055907 1563 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:34:20.649718 kubelet[1563]: I0516 00:34:20.649668 1563 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 00:34:20.649718 kubelet[1563]: I0516 00:34:20.649707 1563 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:34:20.650414 kubelet[1563]: I0516 00:34:20.650380 1563 server.go:934] "Client rotation is on, will bootstrap in background" May 16 00:34:20.677501 kubelet[1563]: E0516 00:34:20.677460 1563 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.31:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 16 00:34:20.678463 kubelet[1563]: I0516 00:34:20.678438 1563 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:34:20.687038 kubelet[1563]: E0516 00:34:20.687000 1563 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:34:20.687038 kubelet[1563]: I0516 00:34:20.687029 1563 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:34:20.690500 kubelet[1563]: I0516 00:34:20.690474 1563 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 16 00:34:20.691331 kubelet[1563]: I0516 00:34:20.691301 1563 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 00:34:20.691492 kubelet[1563]: I0516 00:34:20.691460 1563 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:34:20.691648 kubelet[1563]: I0516 00:34:20.691487 1563 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:34:20.691738 kubelet[1563]: I0516 00:34:20.691651 1563 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:34:20.691738 kubelet[1563]: I0516 00:34:20.691661 1563 container_manager_linux.go:300] "Creating device plugin manager" May 16 00:34:20.691914 kubelet[1563]: I0516 00:34:20.691889 1563 state_mem.go:36] "Initialized new in-memory state store" May 16 00:34:20.696693 kubelet[1563]: I0516 00:34:20.696665 1563 kubelet.go:408] "Attempting to sync node with API server" May 16 00:34:20.696749 kubelet[1563]: I0516 00:34:20.696700 1563 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:34:20.696749 kubelet[1563]: I0516 00:34:20.696726 1563 kubelet.go:314] "Adding apiserver pod source" May 16 00:34:20.696749 kubelet[1563]: I0516 00:34:20.696741 1563 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:34:20.703337 kubelet[1563]: W0516 00:34:20.703272 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused May 16 00:34:20.703468 kubelet[1563]: E0516 00:34:20.703448 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 16 00:34:20.703522 kubelet[1563]: W0516 00:34:20.703343 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused May 16 00:34:20.703608 kubelet[1563]: E0516 00:34:20.703593 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 16 00:34:20.704436 kubelet[1563]: I0516 00:34:20.704395 1563 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:34:20.705251 kubelet[1563]: I0516 00:34:20.705230 1563 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:34:20.705370 kubelet[1563]: W0516 00:34:20.705354 1563 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 16 00:34:20.706807 kubelet[1563]: I0516 00:34:20.706744 1563 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:34:20.707128 kubelet[1563]: I0516 00:34:20.707090 1563 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:34:20.707628 kubelet[1563]: I0516 00:34:20.707590 1563 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:34:20.707914 kubelet[1563]: I0516 00:34:20.707900 1563 server.go:1274] "Started kubelet" May 16 00:34:20.719060 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 16 00:34:20.719380 kubelet[1563]: I0516 00:34:20.715934 1563 server.go:449] "Adding debug handlers to kubelet server" May 16 00:34:20.719380 kubelet[1563]: I0516 00:34:20.718949 1563 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:34:20.719380 kubelet[1563]: I0516 00:34:20.719088 1563 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:34:20.720144 kubelet[1563]: E0516 00:34:20.719579 1563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:34:20.720144 kubelet[1563]: I0516 00:34:20.719622 1563 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:34:20.720144 kubelet[1563]: I0516 00:34:20.719818 1563 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:34:20.720144 kubelet[1563]: I0516 00:34:20.719890 1563 reconciler.go:26] "Reconciler: start to sync state" May 16 00:34:20.720815 kubelet[1563]: W0516 00:34:20.720757 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused May 16 00:34:20.720890 kubelet[1563]: E0516 00:34:20.720823 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 16 00:34:20.721157 kubelet[1563]: I0516 00:34:20.721132 1563 factory.go:221] Registration of the systemd container factory successfully May 16 00:34:20.721494 kubelet[1563]: E0516 00:34:20.721419 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="200ms" May 16 00:34:20.721745 kubelet[1563]: I0516 00:34:20.721392 1563 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:34:20.722560 kubelet[1563]: E0516 00:34:20.720658 1563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fdab72cb08fa6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:34:20.706680742 +0000 UTC m=+0.685813856,LastTimestamp:2025-05-16 00:34:20.706680742 +0000 UTC m=+0.685813856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:34:20.722560 kubelet[1563]: E0516 00:34:20.722533 1563 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:34:20.724101 kubelet[1563]: I0516 00:34:20.724080 1563 factory.go:221] Registration of the containerd container factory successfully May 16 00:34:20.735627 kubelet[1563]: I0516 00:34:20.735591 1563 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 00:34:20.735627 kubelet[1563]: I0516 00:34:20.735609 1563 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:34:20.735627 kubelet[1563]: I0516 00:34:20.735629 1563 state_mem.go:36] "Initialized new in-memory state store" May 16 00:34:20.735883 kubelet[1563]: I0516 00:34:20.735852 1563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:34:20.736895 kubelet[1563]: I0516 00:34:20.736866 1563 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 16 00:34:20.736895 kubelet[1563]: I0516 00:34:20.736889 1563 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:34:20.736972 kubelet[1563]: I0516 00:34:20.736910 1563 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:34:20.736972 kubelet[1563]: E0516 00:34:20.736951 1563 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:34:20.741631 kubelet[1563]: W0516 00:34:20.741578 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused May 16 00:34:20.741727 kubelet[1563]: E0516 00:34:20.741637 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 16 00:34:20.807608 kubelet[1563]: I0516 00:34:20.807576 1563 policy_none.go:49] "None policy: Start" May 16 00:34:20.808416 kubelet[1563]: I0516 00:34:20.808395 1563 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:34:20.808493 kubelet[1563]: I0516 00:34:20.808435 1563 state_mem.go:35] "Initializing new in-memory state store" May 16 00:34:20.816133 systemd[1]: Created slice kubepods.slice. May 16 00:34:20.820192 kubelet[1563]: E0516 00:34:20.820163 1563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:34:20.820436 systemd[1]: Created slice kubepods-burstable.slice. May 16 00:34:20.822923 systemd[1]: Created slice kubepods-besteffort.slice. 
May 16 00:34:20.837066 kubelet[1563]: E0516 00:34:20.837026 1563 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 16 00:34:20.837206 kubelet[1563]: I0516 00:34:20.837041 1563 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:34:20.837425 kubelet[1563]: I0516 00:34:20.837410 1563 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:34:20.837523 kubelet[1563]: I0516 00:34:20.837490 1563 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:34:20.837835 kubelet[1563]: I0516 00:34:20.837817 1563 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:34:20.838921 kubelet[1563]: E0516 00:34:20.838874 1563 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 16 00:34:20.922962 kubelet[1563]: E0516 00:34:20.922848 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="400ms" May 16 00:34:20.939053 kubelet[1563]: I0516 00:34:20.939024 1563 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:34:20.939572 kubelet[1563]: E0516 00:34:20.939531 1563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" May 16 00:34:21.044573 systemd[1]: Created slice kubepods-burstable-pod5c5d678753af392c8e10ea85d21244f8.slice. May 16 00:34:21.058555 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 16 00:34:21.079654 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. 
May 16 00:34:21.121460 kubelet[1563]: I0516 00:34:21.121411 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c5d678753af392c8e10ea85d21244f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c5d678753af392c8e10ea85d21244f8\") " pod="kube-system/kube-apiserver-localhost" May 16 00:34:21.121460 kubelet[1563]: I0516 00:34:21.121460 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c5d678753af392c8e10ea85d21244f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5c5d678753af392c8e10ea85d21244f8\") " pod="kube-system/kube-apiserver-localhost" May 16 00:34:21.121795 kubelet[1563]: I0516 00:34:21.121487 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:21.121795 kubelet[1563]: I0516 00:34:21.121507 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:21.121795 kubelet[1563]: I0516 00:34:21.121525 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 00:34:21.121795 kubelet[1563]: I0516 00:34:21.121555 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:21.121795 kubelet[1563]: I0516 00:34:21.121570 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:21.121906 kubelet[1563]: I0516 00:34:21.121586 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:21.121906 kubelet[1563]: I0516 00:34:21.121601 1563 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c5d678753af392c8e10ea85d21244f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c5d678753af392c8e10ea85d21244f8\") " 
pod="kube-system/kube-apiserver-localhost" May 16 00:34:21.141555 kubelet[1563]: I0516 00:34:21.141521 1563 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:34:21.141899 kubelet[1563]: E0516 00:34:21.141867 1563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" May 16 00:34:21.323725 kubelet[1563]: E0516 00:34:21.323619 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="800ms" May 16 00:34:21.356952 kubelet[1563]: E0516 00:34:21.356917 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:21.357783 env[1214]: time="2025-05-16T00:34:21.357510766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5c5d678753af392c8e10ea85d21244f8,Namespace:kube-system,Attempt:0,}" May 16 00:34:21.360711 kubelet[1563]: E0516 00:34:21.360676 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:21.361147 env[1214]: time="2025-05-16T00:34:21.361113283Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 16 00:34:21.381710 kubelet[1563]: E0516 00:34:21.381663 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:21.382138 env[1214]: time="2025-05-16T00:34:21.382102676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 16 00:34:21.390716 kubelet[1563]: E0516 00:34:21.390619 1563 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.31:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.31:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fdab72cb08fa6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-16 00:34:20.706680742 +0000 UTC m=+0.685813856,LastTimestamp:2025-05-16 00:34:20.706680742 +0000 UTC m=+0.685813856,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 16 00:34:21.543893 kubelet[1563]: I0516 00:34:21.543862 1563 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:34:21.544359 kubelet[1563]: E0516 00:34:21.544330 1563 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.31:6443/api/v1/nodes\": dial tcp 10.0.0.31:6443: connect: connection refused" node="localhost" May 16 00:34:21.908434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3394089826.mount: Deactivated 
successfully. May 16 00:34:21.913856 env[1214]: time="2025-05-16T00:34:21.913811368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.915815 env[1214]: time="2025-05-16T00:34:21.915780602Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.916912 env[1214]: time="2025-05-16T00:34:21.916874586Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.920298 env[1214]: time="2025-05-16T00:34:21.919121882Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.923420 env[1214]: time="2025-05-16T00:34:21.923385685Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.925728 env[1214]: time="2025-05-16T00:34:21.925686223Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.926483 env[1214]: time="2025-05-16T00:34:21.926462763Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.927628 env[1214]: time="2025-05-16T00:34:21.927168677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.928732 env[1214]: time="2025-05-16T00:34:21.928688387Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.929666 env[1214]: time="2025-05-16T00:34:21.929632823Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.930649 env[1214]: time="2025-05-16T00:34:21.930626935Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.931370 env[1214]: time="2025-05-16T00:34:21.931344786Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:21.953021 env[1214]: time="2025-05-16T00:34:21.952935933Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:21.953021 env[1214]: time="2025-05-16T00:34:21.952981002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:21.953021 env[1214]: time="2025-05-16T00:34:21.952991298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:21.953267 env[1214]: time="2025-05-16T00:34:21.953229900Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6d1e8081c63412de8a8a870390be7d03ef8cc6a14900cb937aabc8ce6bdbe746 pid=1612 runtime=io.containerd.runc.v2 May 16 00:34:21.953873 env[1214]: time="2025-05-16T00:34:21.953824845Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:21.953949 env[1214]: time="2025-05-16T00:34:21.953894591Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:21.953949 env[1214]: time="2025-05-16T00:34:21.953921872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:21.954410 env[1214]: time="2025-05-16T00:34:21.954361701Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0946003f71c818537c345a3e2320de953aace9525be47df46a13d6013cda5f2d pid=1617 runtime=io.containerd.runc.v2 May 16 00:34:21.961610 env[1214]: time="2025-05-16T00:34:21.961511291Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:21.961610 env[1214]: time="2025-05-16T00:34:21.961589450Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:21.962029 env[1214]: time="2025-05-16T00:34:21.961780501Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:21.962029 env[1214]: time="2025-05-16T00:34:21.961973915Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0006c4a88ba80ec198557aa7df529350e0c950477cfdaf67289760b6f91471aa pid=1654 runtime=io.containerd.runc.v2 May 16 00:34:21.966043 systemd[1]: Started cri-containerd-6d1e8081c63412de8a8a870390be7d03ef8cc6a14900cb937aabc8ce6bdbe746.scope. May 16 00:34:21.972574 systemd[1]: Started cri-containerd-0946003f71c818537c345a3e2320de953aace9525be47df46a13d6013cda5f2d.scope. May 16 00:34:21.977164 systemd[1]: Started cri-containerd-0006c4a88ba80ec198557aa7df529350e0c950477cfdaf67289760b6f91471aa.scope. 
May 16 00:34:22.030762 env[1214]: time="2025-05-16T00:34:22.030715665Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d1e8081c63412de8a8a870390be7d03ef8cc6a14900cb937aabc8ce6bdbe746\"" May 16 00:34:22.030917 env[1214]: time="2025-05-16T00:34:22.030740098Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:5c5d678753af392c8e10ea85d21244f8,Namespace:kube-system,Attempt:0,} returns sandbox id \"0946003f71c818537c345a3e2320de953aace9525be47df46a13d6013cda5f2d\"" May 16 00:34:22.031653 kubelet[1563]: E0516 00:34:22.031624 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:22.031919 kubelet[1563]: E0516 00:34:22.031900 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:22.034301 env[1214]: time="2025-05-16T00:34:22.034264147Z" level=info msg="CreateContainer within sandbox \"6d1e8081c63412de8a8a870390be7d03ef8cc6a14900cb937aabc8ce6bdbe746\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 16 00:34:22.034500 env[1214]: time="2025-05-16T00:34:22.034479554Z" level=info msg="CreateContainer within sandbox \"0946003f71c818537c345a3e2320de953aace9525be47df46a13d6013cda5f2d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 16 00:34:22.038062 env[1214]: time="2025-05-16T00:34:22.038033483Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"0006c4a88ba80ec198557aa7df529350e0c950477cfdaf67289760b6f91471aa\"" May 16 00:34:22.039123 kubelet[1563]: E0516 00:34:22.039098 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:22.041132 env[1214]: time="2025-05-16T00:34:22.041100724Z" level=info msg="CreateContainer within sandbox \"0006c4a88ba80ec198557aa7df529350e0c950477cfdaf67289760b6f91471aa\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 16 00:34:22.050460 env[1214]: time="2025-05-16T00:34:22.050415518Z" level=info msg="CreateContainer within sandbox \"6d1e8081c63412de8a8a870390be7d03ef8cc6a14900cb937aabc8ce6bdbe746\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a274abe9a6ebdfa1e1ec0a317d6da63c73fa37cbe2d40fddbf1e67d12c5888be\"" May 16 00:34:22.051046 env[1214]: time="2025-05-16T00:34:22.051015797Z" level=info msg="StartContainer for \"a274abe9a6ebdfa1e1ec0a317d6da63c73fa37cbe2d40fddbf1e67d12c5888be\"" May 16 00:34:22.053636 env[1214]: time="2025-05-16T00:34:22.053589141Z" level=info msg="CreateContainer within sandbox \"0946003f71c818537c345a3e2320de953aace9525be47df46a13d6013cda5f2d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d6202b3aa03dee7c81500b30fd4614686953f68d512f47d327f39fa36e0a5efa\"" May 16 00:34:22.054034 env[1214]: time="2025-05-16T00:34:22.054001209Z" level=info msg="StartContainer for \"d6202b3aa03dee7c81500b30fd4614686953f68d512f47d327f39fa36e0a5efa\"" May 16 00:34:22.058781 env[1214]: time="2025-05-16T00:34:22.058740636Z" level=info 
msg="CreateContainer within sandbox \"0006c4a88ba80ec198557aa7df529350e0c950477cfdaf67289760b6f91471aa\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"646d9c140660ceceec6f6c3c9693844feef6e7a7c902b392fd19e0f6c9ea93b5\"" May 16 00:34:22.059364 env[1214]: time="2025-05-16T00:34:22.059329820Z" level=info msg="StartContainer for \"646d9c140660ceceec6f6c3c9693844feef6e7a7c902b392fd19e0f6c9ea93b5\"" May 16 00:34:22.072053 systemd[1]: Started cri-containerd-a274abe9a6ebdfa1e1ec0a317d6da63c73fa37cbe2d40fddbf1e67d12c5888be.scope. May 16 00:34:22.079162 systemd[1]: Started cri-containerd-d6202b3aa03dee7c81500b30fd4614686953f68d512f47d327f39fa36e0a5efa.scope. May 16 00:34:22.086282 systemd[1]: Started cri-containerd-646d9c140660ceceec6f6c3c9693844feef6e7a7c902b392fd19e0f6c9ea93b5.scope. May 16 00:34:22.098358 kubelet[1563]: W0516 00:34:22.097074 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused May 16 00:34:22.098358 kubelet[1563]: E0516 00:34:22.097142 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.31:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 16 00:34:22.124267 kubelet[1563]: E0516 00:34:22.124210 1563 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.31:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.31:6443: connect: connection refused" interval="1.6s" May 16 00:34:22.125506 kubelet[1563]: W0516 00:34:22.125451 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused May 16 00:34:22.125582 kubelet[1563]: E0516 00:34:22.125523 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.31:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 16 00:34:22.134744 env[1214]: time="2025-05-16T00:34:22.134700108Z" level=info msg="StartContainer for \"a274abe9a6ebdfa1e1ec0a317d6da63c73fa37cbe2d40fddbf1e67d12c5888be\" returns successfully" May 16 00:34:22.159243 env[1214]: time="2025-05-16T00:34:22.159137785Z" level=info msg="StartContainer for \"646d9c140660ceceec6f6c3c9693844feef6e7a7c902b392fd19e0f6c9ea93b5\" returns successfully" May 16 00:34:22.176449 kubelet[1563]: W0516 00:34:22.172123 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused May 16 00:34:22.176449 kubelet[1563]: E0516 00:34:22.172219 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.0.0.31:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 16 00:34:22.176799 env[1214]: time="2025-05-16T00:34:22.176751502Z" level=info msg="StartContainer for \"d6202b3aa03dee7c81500b30fd4614686953f68d512f47d327f39fa36e0a5efa\" returns successfully" May 16 00:34:22.244219 kubelet[1563]: W0516 00:34:22.243557 1563 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.31:6443: connect: connection refused May 16 00:34:22.244219 kubelet[1563]: E0516 00:34:22.243625 1563 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.31:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.31:6443: connect: connection refused" logger="UnhandledError" May 16 00:34:22.346695 kubelet[1563]: I0516 00:34:22.346660 1563 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:34:22.748966 kubelet[1563]: E0516 00:34:22.748932 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:22.751077 kubelet[1563]: E0516 00:34:22.751050 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:22.752779 kubelet[1563]: E0516 00:34:22.752751 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:23.754948 kubelet[1563]: E0516 00:34:23.754918 1563 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:24.006943 kubelet[1563]: E0516 00:34:24.006837 1563 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 16 00:34:24.183328 kubelet[1563]: I0516 00:34:24.183291 1563 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 00:34:24.183506 kubelet[1563]: E0516 00:34:24.183492 1563 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 16 00:34:24.196992 kubelet[1563]: E0516 00:34:24.196953 1563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:34:24.298036 kubelet[1563]: E0516 00:34:24.297934 1563 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 16 00:34:24.704662 kubelet[1563]: I0516 00:34:24.704616 1563 apiserver.go:52] "Watching apiserver" May 16 00:34:24.720505 kubelet[1563]: I0516 00:34:24.720463 1563 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:34:25.988991 systemd[1]: Reloading. 
May 16 00:34:26.031074 /usr/lib/systemd/system-generators/torcx-generator[1861]: time="2025-05-16T00:34:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 16 00:34:26.031157 /usr/lib/systemd/system-generators/torcx-generator[1861]: time="2025-05-16T00:34:26Z" level=info msg="torcx already run" May 16 00:34:26.094046 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 16 00:34:26.094067 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 16 00:34:26.110006 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 16 00:34:26.191326 systemd[1]: Stopping kubelet.service... May 16 00:34:26.204583 systemd[1]: kubelet.service: Deactivated successfully. May 16 00:34:26.204772 systemd[1]: Stopped kubelet.service. May 16 00:34:26.204819 systemd[1]: kubelet.service: Consumed 1.021s CPU time. May 16 00:34:26.206540 systemd[1]: Starting kubelet.service... May 16 00:34:26.298544 systemd[1]: Started kubelet.service. May 16 00:34:26.338834 kubelet[1903]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:34:26.338834 kubelet[1903]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 16 00:34:26.338834 kubelet[1903]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 16 00:34:26.339202 kubelet[1903]: I0516 00:34:26.338921 1903 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 16 00:34:26.344849 kubelet[1903]: I0516 00:34:26.344790 1903 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 16 00:34:26.344849 kubelet[1903]: I0516 00:34:26.344821 1903 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 16 00:34:26.345046 kubelet[1903]: I0516 00:34:26.345019 1903 server.go:934] "Client rotation is on, will bootstrap in background" May 16 00:34:26.346331 kubelet[1903]: I0516 00:34:26.346303 1903 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 16 00:34:26.348246 kubelet[1903]: I0516 00:34:26.348219 1903 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 16 00:34:26.351753 kubelet[1903]: E0516 00:34:26.351720 1903 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 16 00:34:26.351837 kubelet[1903]: I0516 00:34:26.351757 1903 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 16 00:34:26.354391 kubelet[1903]: I0516 00:34:26.354368 1903 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 16 00:34:26.354741 kubelet[1903]: I0516 00:34:26.354721 1903 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 16 00:34:26.354957 kubelet[1903]: I0516 00:34:26.354926 1903 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 16 00:34:26.355170 kubelet[1903]: I0516 00:34:26.355016 1903 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 16 00:34:26.355389 kubelet[1903]: I0516 00:34:26.355372 1903 topology_manager.go:138] "Creating topology manager with none policy" May 16 00:34:26.355456 kubelet[1903]: I0516 00:34:26.355446 1903 container_manager_linux.go:300] "Creating device plugin manager" May 16 00:34:26.355555 kubelet[1903]: I0516 00:34:26.355542 1903 state_mem.go:36] "Initialized new in-memory state store" May 16 00:34:26.355891 kubelet[1903]: I0516 00:34:26.355877 1903 kubelet.go:408] "Attempting to sync node with API server" May 16 00:34:26.357024 kubelet[1903]: I0516 00:34:26.357006 1903 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 16 00:34:26.357147 kubelet[1903]: I0516 00:34:26.357136 
1903 kubelet.go:314] "Adding apiserver pod source" May 16 00:34:26.360266 kubelet[1903]: I0516 00:34:26.360233 1903 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 16 00:34:26.360923 kubelet[1903]: I0516 00:34:26.360898 1903 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 16 00:34:26.361407 kubelet[1903]: I0516 00:34:26.361379 1903 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 16 00:34:26.361780 kubelet[1903]: I0516 00:34:26.361753 1903 server.go:1274] "Started kubelet" May 16 00:34:26.363236 kubelet[1903]: I0516 00:34:26.363177 1903 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 16 00:34:26.363361 kubelet[1903]: I0516 00:34:26.363337 1903 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 16 00:34:26.363449 kubelet[1903]: I0516 00:34:26.363423 1903 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 16 00:34:26.369222 kubelet[1903]: I0516 00:34:26.364179 1903 server.go:449] "Adding debug handlers to kubelet server" May 16 00:34:26.369222 kubelet[1903]: I0516 00:34:26.365015 1903 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 16 00:34:26.369222 kubelet[1903]: I0516 00:34:26.365213 1903 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 16 00:34:26.377481 kubelet[1903]: I0516 00:34:26.372212 1903 volume_manager.go:289] "Starting Kubelet Volume Manager" May 16 00:34:26.377481 kubelet[1903]: I0516 00:34:26.373169 1903 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 16 00:34:26.377481 kubelet[1903]: I0516 00:34:26.373319 1903 reconciler.go:26] "Reconciler: start to sync state" May 16 00:34:26.377481 kubelet[1903]: E0516 00:34:26.373757 1903 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 16 00:34:26.377481 kubelet[1903]: I0516 00:34:26.376495 1903 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 16 00:34:26.382216 kubelet[1903]: I0516 00:34:26.382188 1903 factory.go:221] Registration of the containerd container factory successfully May 16 00:34:26.382318 kubelet[1903]: I0516 00:34:26.382239 1903 factory.go:221] Registration of the systemd container factory successfully May 16 00:34:26.384637 kubelet[1903]: I0516 00:34:26.384597 1903 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 16 00:34:26.386138 kubelet[1903]: I0516 00:34:26.386091 1903 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 16 00:34:26.386138 kubelet[1903]: I0516 00:34:26.386123 1903 status_manager.go:217] "Starting to sync pod status with apiserver" May 16 00:34:26.386138 kubelet[1903]: I0516 00:34:26.386142 1903 kubelet.go:2321] "Starting kubelet main sync loop" May 16 00:34:26.386304 kubelet[1903]: E0516 00:34:26.386221 1903 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 16 00:34:26.418426 kubelet[1903]: I0516 00:34:26.418396 1903 cpu_manager.go:214] "Starting CPU manager" policy="none" May 16 00:34:26.418426 kubelet[1903]: I0516 00:34:26.418414 1903 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 16 00:34:26.418426 kubelet[1903]: I0516 00:34:26.418431 1903 state_mem.go:36] "Initialized new in-memory state store" May 16 00:34:26.418605 kubelet[1903]: I0516 00:34:26.418573 1903 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 16 00:34:26.418605 kubelet[1903]: I0516 00:34:26.418593 1903 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 16 00:34:26.418674 kubelet[1903]: I0516 00:34:26.418614 1903 policy_none.go:49] "None policy: Start" May 16 00:34:26.419113 kubelet[1903]: I0516 00:34:26.419093 1903 memory_manager.go:170] "Starting memorymanager" policy="None" May 16 00:34:26.419113 kubelet[1903]: I0516 00:34:26.419111 1903 state_mem.go:35] "Initializing new in-memory state store" May 16 00:34:26.419262 kubelet[1903]: I0516 00:34:26.419247 1903 state_mem.go:75] "Updated machine memory state" May 16 00:34:26.423022 kubelet[1903]: I0516 00:34:26.422993 1903 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 16 00:34:26.423220 kubelet[1903]: I0516 00:34:26.423142 1903 eviction_manager.go:189] "Eviction manager: starting control loop" May 16 00:34:26.423220 kubelet[1903]: I0516 00:34:26.423159 1903 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 16 00:34:26.423840 kubelet[1903]: I0516 00:34:26.423569 1903 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 16 00:34:26.529791 kubelet[1903]: I0516 00:34:26.529762 1903 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 16 00:34:26.536614 kubelet[1903]: I0516 00:34:26.536591 1903 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 16 00:34:26.536716 kubelet[1903]: I0516 00:34:26.536659 1903 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 16 00:34:26.674158 kubelet[1903]: I0516 00:34:26.674123 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:26.674373 kubelet[1903]: I0516 00:34:26.674353 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:26.674471 kubelet[1903]: I0516 00:34:26.674454 1903 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:26.674559 kubelet[1903]: I0516 00:34:26.674546 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:26.674626 kubelet[1903]: I0516 00:34:26.674613 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 16 00:34:26.674704 kubelet[1903]: I0516 00:34:26.674691 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5c5d678753af392c8e10ea85d21244f8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c5d678753af392c8e10ea85d21244f8\") " pod="kube-system/kube-apiserver-localhost" May 16 00:34:26.674768 kubelet[1903]: I0516 00:34:26.674755 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5c5d678753af392c8e10ea85d21244f8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"5c5d678753af392c8e10ea85d21244f8\") " pod="kube-system/kube-apiserver-localhost" May 16 00:34:26.674851 kubelet[1903]: I0516 00:34:26.674838 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5c5d678753af392c8e10ea85d21244f8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"5c5d678753af392c8e10ea85d21244f8\") " pod="kube-system/kube-apiserver-localhost" May 16 00:34:26.674943 kubelet[1903]: I0516 00:34:26.674929 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 16 00:34:26.795530 kubelet[1903]: E0516 00:34:26.795500 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:26.795707 kubelet[1903]: E0516 00:34:26.795678 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:26.795707 kubelet[1903]: E0516 00:34:26.795697 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:26.992662 sudo[1938]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 16 00:34:26.992885 sudo[1938]: 
pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) May 16 00:34:27.360936 kubelet[1903]: I0516 00:34:27.360911 1903 apiserver.go:52] "Watching apiserver" May 16 00:34:27.373695 kubelet[1903]: I0516 00:34:27.373664 1903 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 16 00:34:27.401471 kubelet[1903]: E0516 00:34:27.401439 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:27.402110 kubelet[1903]: E0516 00:34:27.402091 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:27.402345 kubelet[1903]: E0516 00:34:27.402326 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:27.430203 kubelet[1903]: I0516 00:34:27.430067 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.430049586 podStartE2EDuration="1.430049586s" podCreationTimestamp="2025-05-16 00:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:34:27.422382069 +0000 UTC m=+1.119838173" watchObservedRunningTime="2025-05-16 00:34:27.430049586 +0000 UTC m=+1.127505690" May 16 00:34:27.437377 kubelet[1903]: I0516 00:34:27.437323 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.437307062 podStartE2EDuration="1.437307062s" podCreationTimestamp="2025-05-16 00:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:34:27.430510901 +0000 UTC m=+1.127966965" watchObservedRunningTime="2025-05-16 00:34:27.437307062 +0000 UTC m=+1.134763166" May 16 00:34:27.437673 kubelet[1903]: I0516 00:34:27.437627 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.437603705 podStartE2EDuration="1.437603705s" podCreationTimestamp="2025-05-16 00:34:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:34:27.437604705 +0000 UTC m=+1.135060809" watchObservedRunningTime="2025-05-16 00:34:27.437603705 +0000 UTC m=+1.135059809" May 16 00:34:27.473296 sudo[1938]: pam_unix(sudo:session): session closed for user root May 16 00:34:28.403167 kubelet[1903]: E0516 00:34:28.403135 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:28.403550 kubelet[1903]: E0516 00:34:28.403281 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:29.393078 sudo[1316]: pam_unix(sudo:session): session closed for user root May 16 00:34:29.394346 sshd[1313]: pam_unix(sshd:session): session closed for user core May 16 00:34:29.396648 systemd[1]: sshd@4-10.0.0.31:22-10.0.0.1:46360.service: 
Deactivated successfully. May 16 00:34:29.397346 systemd[1]: session-5.scope: Deactivated successfully. May 16 00:34:29.397507 systemd[1]: session-5.scope: Consumed 7.970s CPU time. May 16 00:34:29.398253 systemd-logind[1206]: Session 5 logged out. Waiting for processes to exit. May 16 00:34:29.399396 systemd-logind[1206]: Removed session 5. May 16 00:34:32.051588 kubelet[1903]: I0516 00:34:32.051407 1903 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 16 00:34:32.052180 env[1214]: time="2025-05-16T00:34:32.052146012Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 16 00:34:32.052640 kubelet[1903]: I0516 00:34:32.052609 1903 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 16 00:34:32.892816 systemd[1]: Created slice kubepods-besteffort-pode0fef79d_81b7_470a_8787_0af70bb7bab1.slice. May 16 00:34:32.905401 systemd[1]: Created slice kubepods-burstable-pod4c00dc12_91b7_4a8b_88d1_5e0ff66acaa0.slice. May 16 00:34:32.915482 kubelet[1903]: I0516 00:34:32.915431 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sc6df\" (UniqueName: \"kubernetes.io/projected/e0fef79d-81b7-470a-8787-0af70bb7bab1-kube-api-access-sc6df\") pod \"kube-proxy-sxfq4\" (UID: \"e0fef79d-81b7-470a-8787-0af70bb7bab1\") " pod="kube-system/kube-proxy-sxfq4" May 16 00:34:32.915629 kubelet[1903]: I0516 00:34:32.915485 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cni-path\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915629 kubelet[1903]: I0516 00:34:32.915521 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-clustermesh-secrets\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915629 kubelet[1903]: I0516 00:34:32.915571 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-config-path\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915629 kubelet[1903]: I0516 00:34:32.915589 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0fef79d-81b7-470a-8787-0af70bb7bab1-xtables-lock\") pod \"kube-proxy-sxfq4\" (UID: \"e0fef79d-81b7-470a-8787-0af70bb7bab1\") " pod="kube-system/kube-proxy-sxfq4" May 16 00:34:32.915629 kubelet[1903]: I0516 00:34:32.915605 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-run\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915818 kubelet[1903]: I0516 00:34:32.915643 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-lib-modules\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915818 kubelet[1903]: I0516 00:34:32.915662 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-hostproc\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915818 kubelet[1903]: I0516 00:34:32.915678 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-cgroup\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915818 kubelet[1903]: I0516 00:34:32.915706 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-host-proc-sys-net\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915818 kubelet[1903]: I0516 00:34:32.915721 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-bpf-maps\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915818 kubelet[1903]: I0516 00:34:32.915735 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e0fef79d-81b7-470a-8787-0af70bb7bab1-kube-proxy\") pod \"kube-proxy-sxfq4\" (UID: \"e0fef79d-81b7-470a-8787-0af70bb7bab1\") " pod="kube-system/kube-proxy-sxfq4" May 16 00:34:32.915953 kubelet[1903]: I0516 00:34:32.915764 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-etc-cni-netd\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915953 kubelet[1903]: I0516 00:34:32.915780 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-hubble-tls\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915953 kubelet[1903]: I0516 00:34:32.915799 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0fef79d-81b7-470a-8787-0af70bb7bab1-lib-modules\") pod \"kube-proxy-sxfq4\" (UID: \"e0fef79d-81b7-470a-8787-0af70bb7bab1\") " pod="kube-system/kube-proxy-sxfq4" May 16 00:34:32.915953 kubelet[1903]: I0516 00:34:32.915835 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-xtables-lock\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915953 
kubelet[1903]: I0516 00:34:32.915854 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-host-proc-sys-kernel\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:32.915953 kubelet[1903]: I0516 00:34:32.915874 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrnsw\" (UniqueName: \"kubernetes.io/projected/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-kube-api-access-wrnsw\") pod \"cilium-pwqp5\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " pod="kube-system/cilium-pwqp5" May 16 00:34:33.017485 kubelet[1903]: I0516 00:34:33.017175 1903 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 16 00:34:33.133119 systemd[1]: Created slice kubepods-besteffort-podb1fb436f_32ba_46cc_9c38_fb4bb6b58d6d.slice. May 16 00:34:33.201977 kubelet[1903]: E0516 00:34:33.201876 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:33.202980 env[1214]: time="2025-05-16T00:34:33.202902633Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxfq4,Uid:e0fef79d-81b7-470a-8787-0af70bb7bab1,Namespace:kube-system,Attempt:0,}" May 16 00:34:33.208342 kubelet[1903]: E0516 00:34:33.208314 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:33.208933 env[1214]: time="2025-05-16T00:34:33.208861671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwqp5,Uid:4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0,Namespace:kube-system,Attempt:0,}" May 16 00:34:33.218146 kubelet[1903]: I0516 00:34:33.218107 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x9tnw\" (UniqueName: \"kubernetes.io/projected/b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d-kube-api-access-x9tnw\") pod \"cilium-operator-5d85765b45-djs6q\" (UID: \"b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d\") " pod="kube-system/cilium-operator-5d85765b45-djs6q" May 16 00:34:33.218337 kubelet[1903]: I0516 00:34:33.218319 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d-cilium-config-path\") pod \"cilium-operator-5d85765b45-djs6q\" (UID: \"b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d\") " pod="kube-system/cilium-operator-5d85765b45-djs6q" May 16 00:34:33.226010 env[1214]: time="2025-05-16T00:34:33.225919386Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:33.226010 env[1214]: time="2025-05-16T00:34:33.225968925Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:33.226010 env[1214]: time="2025-05-16T00:34:33.225980450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:33.226308 env[1214]: time="2025-05-16T00:34:33.226167322Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2994e5ec0f23c62859d5a9d5c49936ba4896abbe16f315ec2c0055decace7f3 pid=1996 runtime=io.containerd.runc.v2 May 16 00:34:33.230611 env[1214]: time="2025-05-16T00:34:33.230405091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:33.230611 env[1214]: time="2025-05-16T00:34:33.230448188Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:33.230611 env[1214]: time="2025-05-16T00:34:33.230459352Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:33.230774 env[1214]: time="2025-05-16T00:34:33.230644664Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441 pid=2014 runtime=io.containerd.runc.v2 May 16 00:34:33.238011 systemd[1]: Started cri-containerd-b2994e5ec0f23c62859d5a9d5c49936ba4896abbe16f315ec2c0055decace7f3.scope. May 16 00:34:33.243457 systemd[1]: Started cri-containerd-ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441.scope. May 16 00:34:33.284279 env[1214]: time="2025-05-16T00:34:33.284232588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sxfq4,Uid:e0fef79d-81b7-470a-8787-0af70bb7bab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b2994e5ec0f23c62859d5a9d5c49936ba4896abbe16f315ec2c0055decace7f3\"" May 16 00:34:33.284952 env[1214]: time="2025-05-16T00:34:33.284922497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pwqp5,Uid:4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\"" May 16 00:34:33.286302 kubelet[1903]: E0516 00:34:33.285946 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:33.286302 kubelet[1903]: E0516 00:34:33.286271 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:33.287059 env[1214]: time="2025-05-16T00:34:33.287011829Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 16 00:34:33.288554 env[1214]: time="2025-05-16T00:34:33.288507371Z" level=info msg="CreateContainer within sandbox \"b2994e5ec0f23c62859d5a9d5c49936ba4896abbe16f315ec2c0055decace7f3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 16 00:34:33.302702 env[1214]: time="2025-05-16T00:34:33.302653513Z" level=info msg="CreateContainer within sandbox \"b2994e5ec0f23c62859d5a9d5c49936ba4896abbe16f315ec2c0055decace7f3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"6bc3b271a1d5868c0e1ffa4b5690ddf79467bd9f880b394ea8aff7a7a4774de8\"" May 16 00:34:33.304534 env[1214]: time="2025-05-16T00:34:33.304504793Z" level=info msg="StartContainer for 
\"6bc3b271a1d5868c0e1ffa4b5690ddf79467bd9f880b394ea8aff7a7a4774de8\"" May 16 00:34:33.319570 systemd[1]: Started cri-containerd-6bc3b271a1d5868c0e1ffa4b5690ddf79467bd9f880b394ea8aff7a7a4774de8.scope. May 16 00:34:33.369022 env[1214]: time="2025-05-16T00:34:33.368968708Z" level=info msg="StartContainer for \"6bc3b271a1d5868c0e1ffa4b5690ddf79467bd9f880b394ea8aff7a7a4774de8\" returns successfully" May 16 00:34:33.413242 kubelet[1903]: E0516 00:34:33.413210 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:33.438249 kubelet[1903]: E0516 00:34:33.438216 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:33.438842 env[1214]: time="2025-05-16T00:34:33.438806753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-djs6q,Uid:b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d,Namespace:kube-system,Attempt:0,}" May 16 00:34:33.454138 env[1214]: time="2025-05-16T00:34:33.453357733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:33.454330 env[1214]: time="2025-05-16T00:34:33.454292417Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:33.454435 env[1214]: time="2025-05-16T00:34:33.454412143Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:33.454771 env[1214]: time="2025-05-16T00:34:33.454728546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1 pid=2114 runtime=io.containerd.runc.v2 May 16 00:34:33.464857 systemd[1]: Started cri-containerd-05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1.scope. 
May 16 00:34:33.505815 env[1214]: time="2025-05-16T00:34:33.505768159Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-djs6q,Uid:b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1\"" May 16 00:34:33.506447 kubelet[1903]: E0516 00:34:33.506425 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:35.118506 kubelet[1903]: E0516 00:34:35.118451 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:35.138421 kubelet[1903]: I0516 00:34:35.134426 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sxfq4" podStartSLOduration=3.134409767 podStartE2EDuration="3.134409767s" podCreationTimestamp="2025-05-16 00:34:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:34:33.425061487 +0000 UTC m=+7.122517591" watchObservedRunningTime="2025-05-16 00:34:35.134409767 +0000 UTC m=+8.831865871" May 16 00:34:35.416804 kubelet[1903]: E0516 00:34:35.416568 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:36.418603 kubelet[1903]: E0516 00:34:36.418566 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:36.786574 kubelet[1903]: E0516 00:34:36.785367 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:37.162231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3253208599.mount: Deactivated successfully. 
May 16 00:34:37.421607 kubelet[1903]: E0516 00:34:37.421398 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:38.084616 kubelet[1903]: E0516 00:34:38.084568 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:39.420938 env[1214]: time="2025-05-16T00:34:39.420890725Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:39.423373 env[1214]: time="2025-05-16T00:34:39.423339850Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:39.425580 env[1214]: time="2025-05-16T00:34:39.425549908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:39.426065 env[1214]: time="2025-05-16T00:34:39.426022480Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 16 00:34:39.428265 env[1214]: time="2025-05-16T00:34:39.428230058Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 16 00:34:39.434298 env[1214]: time="2025-05-16T00:34:39.434260704Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:34:39.444018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount748453061.mount: Deactivated successfully. May 16 00:34:39.447689 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4108653272.mount: Deactivated successfully. May 16 00:34:39.450730 env[1214]: time="2025-05-16T00:34:39.450690379Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\"" May 16 00:34:39.451357 env[1214]: time="2025-05-16T00:34:39.451313714Z" level=info msg="StartContainer for \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\"" May 16 00:34:39.468329 systemd[1]: Started cri-containerd-0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a.scope. May 16 00:34:39.560749 systemd[1]: cri-containerd-0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a.scope: Deactivated successfully. 
May 16 00:34:39.645112 env[1214]: time="2025-05-16T00:34:39.645062579Z" level=info msg="StartContainer for \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\" returns successfully" May 16 00:34:39.665046 env[1214]: time="2025-05-16T00:34:39.664995513Z" level=info msg="shim disconnected" id=0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a May 16 00:34:39.665295 env[1214]: time="2025-05-16T00:34:39.665275592Z" level=warning msg="cleaning up after shim disconnected" id=0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a namespace=k8s.io May 16 00:34:39.665356 env[1214]: time="2025-05-16T00:34:39.665343211Z" level=info msg="cleaning up dead shim" May 16 00:34:39.672658 env[1214]: time="2025-05-16T00:34:39.672554147Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:34:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2329 runtime=io.containerd.runc.v2\n" May 16 00:34:40.428468 kubelet[1903]: E0516 00:34:40.428242 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:40.450324 env[1214]: time="2025-05-16T00:34:40.431280222Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:34:40.442475 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a-rootfs.mount: Deactivated successfully. May 16 00:34:40.466575 env[1214]: time="2025-05-16T00:34:40.466517168Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\"" May 16 00:34:40.466984 env[1214]: time="2025-05-16T00:34:40.466954324Z" level=info msg="StartContainer for \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\"" May 16 00:34:40.486503 systemd[1]: Started cri-containerd-4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b.scope. May 16 00:34:40.524549 env[1214]: time="2025-05-16T00:34:40.524499306Z" level=info msg="StartContainer for \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\" returns successfully" May 16 00:34:40.542121 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 16 00:34:40.542367 systemd[1]: Stopped systemd-sysctl.service. May 16 00:34:40.542534 systemd[1]: Stopping systemd-sysctl.service... May 16 00:34:40.544141 systemd[1]: Starting systemd-sysctl.service... May 16 00:34:40.547267 systemd[1]: cri-containerd-4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b.scope: Deactivated successfully. May 16 00:34:40.556162 systemd[1]: Finished systemd-sysctl.service. 
May 16 00:34:40.569442 env[1214]: time="2025-05-16T00:34:40.569400054Z" level=info msg="shim disconnected" id=4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b May 16 00:34:40.569442 env[1214]: time="2025-05-16T00:34:40.569443586Z" level=warning msg="cleaning up after shim disconnected" id=4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b namespace=k8s.io May 16 00:34:40.569660 env[1214]: time="2025-05-16T00:34:40.569453309Z" level=info msg="cleaning up dead shim" May 16 00:34:40.576179 env[1214]: time="2025-05-16T00:34:40.576145523Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:34:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2395 runtime=io.containerd.runc.v2\n" May 16 00:34:41.430891 kubelet[1903]: E0516 00:34:41.430121 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:41.431861 env[1214]: time="2025-05-16T00:34:41.431810429Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:34:41.442193 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b-rootfs.mount: Deactivated successfully. May 16 00:34:41.456530 env[1214]: time="2025-05-16T00:34:41.456489240Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\"" May 16 00:34:41.457255 env[1214]: time="2025-05-16T00:34:41.457224585Z" level=info msg="StartContainer for \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\"" May 16 00:34:41.474831 systemd[1]: Started cri-containerd-71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f.scope. 
May 16 00:34:41.495503 env[1214]: time="2025-05-16T00:34:41.495450325Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:41.497610 env[1214]: time="2025-05-16T00:34:41.497582502Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:41.499947 env[1214]: time="2025-05-16T00:34:41.499892203Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 16 00:34:41.500226 env[1214]: time="2025-05-16T00:34:41.500151349Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" May 16 00:34:41.502440 env[1214]: time="2025-05-16T00:34:41.502407476Z" level=info msg="CreateContainer within sandbox \"05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" May 16 00:34:41.518357 env[1214]: time="2025-05-16T00:34:41.518319361Z" level=info msg="CreateContainer within sandbox \"05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\"" May 16 00:34:41.519916 env[1214]: time="2025-05-16T00:34:41.519877113Z" level=info msg="StartContainer for \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\"" May 16 00:34:41.530128 env[1214]: time="2025-05-16T00:34:41.530082401Z" level=info msg="StartContainer for \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\" returns successfully" May 16 00:34:41.534435 systemd[1]: Started cri-containerd-3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb.scope. May 16 00:34:41.542269 systemd[1]: cri-containerd-71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f.scope: Deactivated successfully. 
May 16 00:34:41.643720 env[1214]: time="2025-05-16T00:34:41.642797729Z" level=info msg="StartContainer for \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\" returns successfully" May 16 00:34:41.647208 env[1214]: time="2025-05-16T00:34:41.646723157Z" level=info msg="shim disconnected" id=71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f May 16 00:34:41.647208 env[1214]: time="2025-05-16T00:34:41.646759206Z" level=warning msg="cleaning up after shim disconnected" id=71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f namespace=k8s.io May 16 00:34:41.647208 env[1214]: time="2025-05-16T00:34:41.646767968Z" level=info msg="cleaning up dead shim" May 16 00:34:41.659021 env[1214]: time="2025-05-16T00:34:41.658971039Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:34:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2491 runtime=io.containerd.runc.v2\n" May 16 00:34:42.434079 kubelet[1903]: E0516 00:34:42.434044 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:42.435957 env[1214]: time="2025-05-16T00:34:42.435916741Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:34:42.436141 kubelet[1903]: E0516 00:34:42.436114 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:42.442907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f-rootfs.mount: Deactivated successfully. May 16 00:34:42.453911 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302403676.mount: Deactivated successfully. May 16 00:34:42.465373 env[1214]: time="2025-05-16T00:34:42.465326730Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\"" May 16 00:34:42.471450 env[1214]: time="2025-05-16T00:34:42.471406823Z" level=info msg="StartContainer for \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\"" May 16 00:34:42.502976 systemd[1]: Started cri-containerd-ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c.scope. May 16 00:34:42.578509 systemd[1]: cri-containerd-ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c.scope: Deactivated successfully. 
May 16 00:34:42.582134 env[1214]: time="2025-05-16T00:34:42.578595038Z" level=info msg="StartContainer for \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\" returns successfully" May 16 00:34:42.603517 env[1214]: time="2025-05-16T00:34:42.603464741Z" level=info msg="shim disconnected" id=ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c May 16 00:34:42.603517 env[1214]: time="2025-05-16T00:34:42.603513473Z" level=warning msg="cleaning up after shim disconnected" id=ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c namespace=k8s.io May 16 00:34:42.603768 env[1214]: time="2025-05-16T00:34:42.603523075Z" level=info msg="cleaning up dead shim" May 16 00:34:42.610858 env[1214]: time="2025-05-16T00:34:42.610804135Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:34:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2548 runtime=io.containerd.runc.v2\n" May 16 00:34:43.439240 kubelet[1903]: E0516 00:34:43.439180 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:43.439929 kubelet[1903]: E0516 00:34:43.439907 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:43.443556 env[1214]: time="2025-05-16T00:34:43.443514636Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:34:43.460414 env[1214]: time="2025-05-16T00:34:43.460360621Z" level=info msg="CreateContainer within sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\"" May 16 00:34:43.460967 env[1214]: time="2025-05-16T00:34:43.460937272Z" level=info msg="StartContainer for \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\"" May 16 00:34:43.463907 kubelet[1903]: I0516 00:34:43.463850 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-djs6q" podStartSLOduration=2.47148031 podStartE2EDuration="10.46383513s" podCreationTimestamp="2025-05-16 00:34:33 +0000 UTC" firstStartedPulling="2025-05-16 00:34:33.508817185 +0000 UTC m=+7.206273289" lastFinishedPulling="2025-05-16 00:34:41.501172005 +0000 UTC m=+15.198628109" observedRunningTime="2025-05-16 00:34:42.468125038 +0000 UTC m=+16.165581102" watchObservedRunningTime="2025-05-16 00:34:43.46383513 +0000 UTC m=+17.161291234" May 16 00:34:43.482485 systemd[1]: Started cri-containerd-cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821.scope. May 16 00:34:43.538065 env[1214]: time="2025-05-16T00:34:43.538004972Z" level=info msg="StartContainer for \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\" returns successfully" May 16 00:34:43.634633 kubelet[1903]: I0516 00:34:43.634580 1903 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 16 00:34:43.666035 systemd[1]: Created slice kubepods-burstable-podbdb82464_9633_4d48_abc1_2ef5375206d4.slice. May 16 00:34:43.669646 systemd[1]: Created slice kubepods-burstable-pod9d8383d9_db0a_411d_a46a_2ce201fafb72.slice. 
May 16 00:34:43.803176 kubelet[1903]: I0516 00:34:43.803066 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bngwt\" (UniqueName: \"kubernetes.io/projected/bdb82464-9633-4d48-abc1-2ef5375206d4-kube-api-access-bngwt\") pod \"coredns-7c65d6cfc9-zzrkg\" (UID: \"bdb82464-9633-4d48-abc1-2ef5375206d4\") " pod="kube-system/coredns-7c65d6cfc9-zzrkg" May 16 00:34:43.803176 kubelet[1903]: I0516 00:34:43.803115 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9d8383d9-db0a-411d-a46a-2ce201fafb72-config-volume\") pod \"coredns-7c65d6cfc9-x7758\" (UID: \"9d8383d9-db0a-411d-a46a-2ce201fafb72\") " pod="kube-system/coredns-7c65d6cfc9-x7758" May 16 00:34:43.803176 kubelet[1903]: I0516 00:34:43.803135 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bdb82464-9633-4d48-abc1-2ef5375206d4-config-volume\") pod \"coredns-7c65d6cfc9-zzrkg\" (UID: \"bdb82464-9633-4d48-abc1-2ef5375206d4\") " pod="kube-system/coredns-7c65d6cfc9-zzrkg" May 16 00:34:43.803176 kubelet[1903]: I0516 00:34:43.803154 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5j7wt\" (UniqueName: \"kubernetes.io/projected/9d8383d9-db0a-411d-a46a-2ce201fafb72-kube-api-access-5j7wt\") pod \"coredns-7c65d6cfc9-x7758\" (UID: \"9d8383d9-db0a-411d-a46a-2ce201fafb72\") " pod="kube-system/coredns-7c65d6cfc9-x7758" May 16 00:34:43.875220 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! May 16 00:34:43.968407 kubelet[1903]: E0516 00:34:43.968367 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:43.969115 env[1214]: time="2025-05-16T00:34:43.969070853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zzrkg,Uid:bdb82464-9633-4d48-abc1-2ef5375206d4,Namespace:kube-system,Attempt:0,}" May 16 00:34:43.971698 kubelet[1903]: E0516 00:34:43.971673 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:43.972193 env[1214]: time="2025-05-16T00:34:43.972157074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x7758,Uid:9d8383d9-db0a-411d-a46a-2ce201fafb72,Namespace:kube-system,Attempt:0,}" May 16 00:34:43.977626 update_engine[1209]: I0516 00:34:43.977591 1209 update_attempter.cc:509] Updating boot flags... May 16 00:34:44.171206 kernel: WARNING: Unprivileged eBPF is enabled, data leaks possible via Spectre v2 BHB attacks! 
May 16 00:34:44.447646 kubelet[1903]: E0516 00:34:44.447525 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:44.462189 kubelet[1903]: I0516 00:34:44.461448 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-pwqp5" podStartSLOduration=6.320224653 podStartE2EDuration="12.461433996s" podCreationTimestamp="2025-05-16 00:34:32 +0000 UTC" firstStartedPulling="2025-05-16 00:34:33.286675698 +0000 UTC m=+6.984131802" lastFinishedPulling="2025-05-16 00:34:39.427885041 +0000 UTC m=+13.125341145" observedRunningTime="2025-05-16 00:34:44.460773173 +0000 UTC m=+18.158229277" watchObservedRunningTime="2025-05-16 00:34:44.461433996 +0000 UTC m=+18.158890100" May 16 00:34:45.448123 kubelet[1903]: E0516 00:34:45.448072 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:45.787618 systemd-networkd[1045]: cilium_host: Link UP May 16 00:34:45.787715 systemd-networkd[1045]: cilium_net: Link UP May 16 00:34:45.787719 systemd-networkd[1045]: cilium_net: Gained carrier May 16 00:34:45.788158 systemd-networkd[1045]: cilium_host: Gained carrier May 16 00:34:45.790597 systemd-networkd[1045]: cilium_host: Gained IPv6LL May 16 00:34:45.791216 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready May 16 00:34:45.863477 systemd-networkd[1045]: cilium_vxlan: Link UP May 16 00:34:45.863483 systemd-networkd[1045]: cilium_vxlan: Gained carrier May 16 00:34:46.156219 kernel: NET: Registered PF_ALG protocol family May 16 00:34:46.449321 kubelet[1903]: E0516 00:34:46.449235 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:46.744318 systemd-networkd[1045]: lxc_health: Link UP May 16 00:34:46.761221 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:34:46.758755 systemd-networkd[1045]: lxc_health: Gained carrier May 16 00:34:46.784322 systemd-networkd[1045]: cilium_net: Gained IPv6LL May 16 00:34:47.039302 systemd-networkd[1045]: cilium_vxlan: Gained IPv6LL May 16 00:34:47.094724 systemd-networkd[1045]: lxcc9d864451aff: Link UP May 16 00:34:47.104230 kernel: eth0: renamed from tmpdbb05 May 16 00:34:47.111828 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 16 00:34:47.111897 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcc9d864451aff: link becomes ready May 16 00:34:47.127973 systemd-networkd[1045]: lxcc9d864451aff: Gained carrier May 16 00:34:47.128104 systemd-networkd[1045]: lxcf1c56fdeaffe: Link UP May 16 00:34:47.134202 kernel: eth0: renamed from tmp1a6f7 May 16 00:34:47.144657 systemd-networkd[1045]: lxcf1c56fdeaffe: Gained carrier May 16 00:34:47.145357 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf1c56fdeaffe: link becomes ready May 16 00:34:47.450858 kubelet[1903]: E0516 00:34:47.450826 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:48.063357 systemd-networkd[1045]: lxc_health: Gained IPv6LL May 16 00:34:48.319353 systemd-networkd[1045]: lxcc9d864451aff: Gained IPv6LL May 16 00:34:48.895405 systemd-networkd[1045]: lxcf1c56fdeaffe: Gained IPv6LL May 16 00:34:50.658433 
env[1214]: time="2025-05-16T00:34:50.658375385Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:50.658814 env[1214]: time="2025-05-16T00:34:50.658417112Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:50.658814 env[1214]: time="2025-05-16T00:34:50.658427554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:50.658814 env[1214]: time="2025-05-16T00:34:50.658684835Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1a6f71d22a4846eb61a85cfeeffc251fa017d8cebfe1ce6ec9f516cc569a9bd9 pid=3133 runtime=io.containerd.runc.v2 May 16 00:34:50.666819 env[1214]: time="2025-05-16T00:34:50.666242621Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:34:50.666819 env[1214]: time="2025-05-16T00:34:50.666298710Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:34:50.666819 env[1214]: time="2025-05-16T00:34:50.666309952Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:34:50.666819 env[1214]: time="2025-05-16T00:34:50.666496342Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/dbb05215ad0f1f2d33298243bb407966221260f942764e1f1866dbacde81dd37 pid=3156 runtime=io.containerd.runc.v2 May 16 00:34:50.678589 systemd[1]: run-containerd-runc-k8s.io-1a6f71d22a4846eb61a85cfeeffc251fa017d8cebfe1ce6ec9f516cc569a9bd9-runc.8DTjBa.mount: Deactivated successfully. May 16 00:34:50.683828 systemd[1]: Started cri-containerd-1a6f71d22a4846eb61a85cfeeffc251fa017d8cebfe1ce6ec9f516cc569a9bd9.scope. May 16 00:34:50.686027 systemd[1]: Started cri-containerd-dbb05215ad0f1f2d33298243bb407966221260f942764e1f1866dbacde81dd37.scope. 
May 16 00:34:50.725629 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:34:50.728875 systemd-resolved[1154]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 16 00:34:50.745558 env[1214]: time="2025-05-16T00:34:50.744709944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-x7758,Uid:9d8383d9-db0a-411d-a46a-2ce201fafb72,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbb05215ad0f1f2d33298243bb407966221260f942764e1f1866dbacde81dd37\"" May 16 00:34:50.746098 kubelet[1903]: E0516 00:34:50.745871 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:50.747799 env[1214]: time="2025-05-16T00:34:50.747675785Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-zzrkg,Uid:bdb82464-9633-4d48-abc1-2ef5375206d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"1a6f71d22a4846eb61a85cfeeffc251fa017d8cebfe1ce6ec9f516cc569a9bd9\"" May 16 00:34:50.748863 kubelet[1903]: E0516 00:34:50.748319 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:50.749985 env[1214]: time="2025-05-16T00:34:50.749813452Z" level=info msg="CreateContainer within sandbox \"dbb05215ad0f1f2d33298243bb407966221260f942764e1f1866dbacde81dd37\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:34:50.751404 env[1214]: time="2025-05-16T00:34:50.751368104Z" level=info msg="CreateContainer within sandbox \"1a6f71d22a4846eb61a85cfeeffc251fa017d8cebfe1ce6ec9f516cc569a9bd9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 16 00:34:50.766087 env[1214]: time="2025-05-16T00:34:50.766036642Z" level=info msg="CreateContainer within sandbox \"dbb05215ad0f1f2d33298243bb407966221260f942764e1f1866dbacde81dd37\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"11c835950a7e38edb8563f631cbfa89a25a6db9d6e4bc4e0009a95bd6d353737\"" May 16 00:34:50.766680 env[1214]: time="2025-05-16T00:34:50.766642780Z" level=info msg="StartContainer for \"11c835950a7e38edb8563f631cbfa89a25a6db9d6e4bc4e0009a95bd6d353737\"" May 16 00:34:50.767289 env[1214]: time="2025-05-16T00:34:50.767260201Z" level=info msg="CreateContainer within sandbox \"1a6f71d22a4846eb61a85cfeeffc251fa017d8cebfe1ce6ec9f516cc569a9bd9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"048eee693d05f0a3f12a8c968df02d7dd5dbdff28e2c6f6638af45c029ba3465\"" May 16 00:34:50.767635 env[1214]: time="2025-05-16T00:34:50.767579772Z" level=info msg="StartContainer for \"048eee693d05f0a3f12a8c968df02d7dd5dbdff28e2c6f6638af45c029ba3465\"" May 16 00:34:50.782460 systemd[1]: Started cri-containerd-11c835950a7e38edb8563f631cbfa89a25a6db9d6e4bc4e0009a95bd6d353737.scope. May 16 00:34:50.785649 systemd[1]: Started cri-containerd-048eee693d05f0a3f12a8c968df02d7dd5dbdff28e2c6f6638af45c029ba3465.scope. 
May 16 00:34:50.836137 env[1214]: time="2025-05-16T00:34:50.836088841Z" level=info msg="StartContainer for \"048eee693d05f0a3f12a8c968df02d7dd5dbdff28e2c6f6638af45c029ba3465\" returns successfully" May 16 00:34:50.844827 env[1214]: time="2025-05-16T00:34:50.844772449Z" level=info msg="StartContainer for \"11c835950a7e38edb8563f631cbfa89a25a6db9d6e4bc4e0009a95bd6d353737\" returns successfully" May 16 00:34:51.209521 systemd[1]: Started sshd@5-10.0.0.31:22-10.0.0.1:56920.service. May 16 00:34:51.253173 sshd[3286]: Accepted publickey for core from 10.0.0.1 port 56920 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:34:51.254530 sshd[3286]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:34:51.257862 systemd-logind[1206]: New session 6 of user core. May 16 00:34:51.258729 systemd[1]: Started session-6.scope. May 16 00:34:51.378468 sshd[3286]: pam_unix(sshd:session): session closed for user core May 16 00:34:51.381090 systemd[1]: sshd@5-10.0.0.31:22-10.0.0.1:56920.service: Deactivated successfully. May 16 00:34:51.381913 systemd[1]: session-6.scope: Deactivated successfully. May 16 00:34:51.382405 systemd-logind[1206]: Session 6 logged out. Waiting for processes to exit. May 16 00:34:51.383208 systemd-logind[1206]: Removed session 6. May 16 00:34:51.460236 kubelet[1903]: E0516 00:34:51.460110 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:51.463334 kubelet[1903]: E0516 00:34:51.462337 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:51.481811 kubelet[1903]: I0516 00:34:51.481579 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-x7758" podStartSLOduration=18.481560089 podStartE2EDuration="18.481560089s" podCreationTimestamp="2025-05-16 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:34:51.471293738 +0000 UTC m=+25.168749802" watchObservedRunningTime="2025-05-16 00:34:51.481560089 +0000 UTC m=+25.179016193" May 16 00:34:51.496306 kubelet[1903]: I0516 00:34:51.496247 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-zzrkg" podStartSLOduration=18.496228724 podStartE2EDuration="18.496228724s" podCreationTimestamp="2025-05-16 00:34:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:34:51.481668226 +0000 UTC m=+25.179124330" watchObservedRunningTime="2025-05-16 00:34:51.496228724 +0000 UTC m=+25.193684828" May 16 00:34:52.465805 kubelet[1903]: E0516 00:34:52.465775 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:52.466350 kubelet[1903]: E0516 00:34:52.466320 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:53.467593 kubelet[1903]: E0516 00:34:53.467556 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:53.468057 kubelet[1903]: E0516 00:34:53.468037 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:55.135872 kubelet[1903]: I0516 00:34:55.135833 1903 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" May 16 00:34:55.136722 kubelet[1903]: E0516 00:34:55.136700 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:55.471505 kubelet[1903]: E0516 00:34:55.471406 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:34:56.384927 systemd[1]: Started sshd@6-10.0.0.31:22-10.0.0.1:44000.service. May 16 00:34:56.427680 sshd[3308]: Accepted publickey for core from 10.0.0.1 port 44000 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:34:56.429741 sshd[3308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:34:56.435490 systemd-logind[1206]: New session 7 of user core. May 16 00:34:56.436158 systemd[1]: Started session-7.scope. May 16 00:34:56.578842 sshd[3308]: pam_unix(sshd:session): session closed for user core May 16 00:34:56.582013 systemd[1]: sshd@6-10.0.0.31:22-10.0.0.1:44000.service: Deactivated successfully. May 16 00:34:56.582784 systemd[1]: session-7.scope: Deactivated successfully. May 16 00:34:56.583521 systemd-logind[1206]: Session 7 logged out. Waiting for processes to exit. May 16 00:34:56.584410 systemd-logind[1206]: Removed session 7. May 16 00:35:01.585676 systemd[1]: Started sshd@7-10.0.0.31:22-10.0.0.1:44004.service. May 16 00:35:01.630510 sshd[3323]: Accepted publickey for core from 10.0.0.1 port 44004 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:01.632292 sshd[3323]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:01.636142 systemd-logind[1206]: New session 8 of user core. May 16 00:35:01.637065 systemd[1]: Started session-8.scope. May 16 00:35:01.765140 sshd[3323]: pam_unix(sshd:session): session closed for user core May 16 00:35:01.767833 systemd[1]: sshd@7-10.0.0.31:22-10.0.0.1:44004.service: Deactivated successfully. May 16 00:35:01.768629 systemd[1]: session-8.scope: Deactivated successfully. May 16 00:35:01.769148 systemd-logind[1206]: Session 8 logged out. Waiting for processes to exit. May 16 00:35:01.769987 systemd-logind[1206]: Removed session 8. May 16 00:35:06.771168 systemd[1]: Started sshd@8-10.0.0.31:22-10.0.0.1:51930.service. May 16 00:35:06.810430 sshd[3340]: Accepted publickey for core from 10.0.0.1 port 51930 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:06.811845 sshd[3340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:06.815739 systemd-logind[1206]: New session 9 of user core. May 16 00:35:06.816656 systemd[1]: Started session-9.scope. May 16 00:35:06.936459 sshd[3340]: pam_unix(sshd:session): session closed for user core May 16 00:35:06.940551 systemd[1]: Started sshd@9-10.0.0.31:22-10.0.0.1:51946.service. May 16 00:35:06.941127 systemd[1]: sshd@8-10.0.0.31:22-10.0.0.1:51930.service: Deactivated successfully. 
May 16 00:35:06.941992 systemd[1]: session-9.scope: Deactivated successfully. May 16 00:35:06.942706 systemd-logind[1206]: Session 9 logged out. Waiting for processes to exit. May 16 00:35:06.943589 systemd-logind[1206]: Removed session 9. May 16 00:35:06.982992 sshd[3354]: Accepted publickey for core from 10.0.0.1 port 51946 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:06.984420 sshd[3354]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:06.988038 systemd-logind[1206]: New session 10 of user core. May 16 00:35:06.988955 systemd[1]: Started session-10.scope. May 16 00:35:07.157535 sshd[3354]: pam_unix(sshd:session): session closed for user core May 16 00:35:07.160518 systemd[1]: Started sshd@10-10.0.0.31:22-10.0.0.1:51960.service. May 16 00:35:07.166568 systemd[1]: sshd@9-10.0.0.31:22-10.0.0.1:51946.service: Deactivated successfully. May 16 00:35:07.167482 systemd[1]: session-10.scope: Deactivated successfully. May 16 00:35:07.169317 systemd-logind[1206]: Session 10 logged out. Waiting for processes to exit. May 16 00:35:07.172491 systemd-logind[1206]: Removed session 10. May 16 00:35:07.203972 sshd[3365]: Accepted publickey for core from 10.0.0.1 port 51960 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:07.205906 sshd[3365]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:07.209619 systemd-logind[1206]: New session 11 of user core. May 16 00:35:07.210515 systemd[1]: Started session-11.scope. May 16 00:35:07.327699 sshd[3365]: pam_unix(sshd:session): session closed for user core May 16 00:35:07.330333 systemd[1]: session-11.scope: Deactivated successfully. May 16 00:35:07.330900 systemd-logind[1206]: Session 11 logged out. Waiting for processes to exit. May 16 00:35:07.331032 systemd[1]: sshd@10-10.0.0.31:22-10.0.0.1:51960.service: Deactivated successfully. May 16 00:35:07.332013 systemd-logind[1206]: Removed session 11. May 16 00:35:12.332441 systemd[1]: Started sshd@11-10.0.0.31:22-10.0.0.1:51962.service. May 16 00:35:12.375500 sshd[3381]: Accepted publickey for core from 10.0.0.1 port 51962 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:12.377420 sshd[3381]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:12.382305 systemd-logind[1206]: New session 12 of user core. May 16 00:35:12.385484 systemd[1]: Started session-12.scope. May 16 00:35:12.526512 sshd[3381]: pam_unix(sshd:session): session closed for user core May 16 00:35:12.528912 systemd[1]: sshd@11-10.0.0.31:22-10.0.0.1:51962.service: Deactivated successfully. May 16 00:35:12.529656 systemd[1]: session-12.scope: Deactivated successfully. May 16 00:35:12.530211 systemd-logind[1206]: Session 12 logged out. Waiting for processes to exit. May 16 00:35:12.530981 systemd-logind[1206]: Removed session 12. May 16 00:35:17.531796 systemd[1]: Started sshd@12-10.0.0.31:22-10.0.0.1:43580.service. May 16 00:35:17.571270 sshd[3394]: Accepted publickey for core from 10.0.0.1 port 43580 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:17.572899 sshd[3394]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:17.576613 systemd-logind[1206]: New session 13 of user core. May 16 00:35:17.577034 systemd[1]: Started session-13.scope. 
May 16 00:35:17.684359 sshd[3394]: pam_unix(sshd:session): session closed for user core May 16 00:35:17.688563 systemd[1]: Started sshd@13-10.0.0.31:22-10.0.0.1:43596.service. May 16 00:35:17.689163 systemd[1]: sshd@12-10.0.0.31:22-10.0.0.1:43580.service: Deactivated successfully. May 16 00:35:17.689954 systemd[1]: session-13.scope: Deactivated successfully. May 16 00:35:17.690581 systemd-logind[1206]: Session 13 logged out. Waiting for processes to exit. May 16 00:35:17.691360 systemd-logind[1206]: Removed session 13. May 16 00:35:17.727173 sshd[3406]: Accepted publickey for core from 10.0.0.1 port 43596 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:17.728414 sshd[3406]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:17.731941 systemd-logind[1206]: New session 14 of user core. May 16 00:35:17.732989 systemd[1]: Started session-14.scope. May 16 00:35:17.975647 sshd[3406]: pam_unix(sshd:session): session closed for user core May 16 00:35:17.978556 systemd[1]: Started sshd@14-10.0.0.31:22-10.0.0.1:43598.service. May 16 00:35:17.979058 systemd[1]: sshd@13-10.0.0.31:22-10.0.0.1:43596.service: Deactivated successfully. May 16 00:35:17.980007 systemd[1]: session-14.scope: Deactivated successfully. May 16 00:35:17.980626 systemd-logind[1206]: Session 14 logged out. Waiting for processes to exit. May 16 00:35:17.981362 systemd-logind[1206]: Removed session 14. May 16 00:35:18.019897 sshd[3418]: Accepted publickey for core from 10.0.0.1 port 43598 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:18.021125 sshd[3418]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:18.024555 systemd-logind[1206]: New session 15 of user core. May 16 00:35:18.025473 systemd[1]: Started session-15.scope. May 16 00:35:19.359736 sshd[3418]: pam_unix(sshd:session): session closed for user core May 16 00:35:19.363262 systemd[1]: Started sshd@15-10.0.0.31:22-10.0.0.1:43608.service. May 16 00:35:19.363780 systemd[1]: sshd@14-10.0.0.31:22-10.0.0.1:43598.service: Deactivated successfully. May 16 00:35:19.364687 systemd[1]: session-15.scope: Deactivated successfully. May 16 00:35:19.366901 systemd-logind[1206]: Session 15 logged out. Waiting for processes to exit. May 16 00:35:19.367845 systemd-logind[1206]: Removed session 15. May 16 00:35:19.411785 sshd[3441]: Accepted publickey for core from 10.0.0.1 port 43608 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:19.413374 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:19.417107 systemd-logind[1206]: New session 16 of user core. May 16 00:35:19.417978 systemd[1]: Started session-16.scope. May 16 00:35:19.641123 sshd[3441]: pam_unix(sshd:session): session closed for user core May 16 00:35:19.645052 systemd[1]: Started sshd@16-10.0.0.31:22-10.0.0.1:43624.service. May 16 00:35:19.649608 systemd[1]: sshd@15-10.0.0.31:22-10.0.0.1:43608.service: Deactivated successfully. May 16 00:35:19.650290 systemd[1]: session-16.scope: Deactivated successfully. May 16 00:35:19.655609 systemd-logind[1206]: Session 16 logged out. Waiting for processes to exit. May 16 00:35:19.656555 systemd-logind[1206]: Removed session 16. 
May 16 00:35:19.688466 sshd[3453]: Accepted publickey for core from 10.0.0.1 port 43624 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:19.690154 sshd[3453]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:19.693607 systemd-logind[1206]: New session 17 of user core. May 16 00:35:19.694474 systemd[1]: Started session-17.scope. May 16 00:35:19.814753 sshd[3453]: pam_unix(sshd:session): session closed for user core May 16 00:35:19.818093 systemd[1]: sshd@16-10.0.0.31:22-10.0.0.1:43624.service: Deactivated successfully. May 16 00:35:19.818801 systemd[1]: session-17.scope: Deactivated successfully. May 16 00:35:19.819415 systemd-logind[1206]: Session 17 logged out. Waiting for processes to exit. May 16 00:35:19.820088 systemd-logind[1206]: Removed session 17. May 16 00:35:24.818538 systemd[1]: Started sshd@17-10.0.0.31:22-10.0.0.1:56936.service. May 16 00:35:24.856242 sshd[3474]: Accepted publickey for core from 10.0.0.1 port 56936 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:24.858045 sshd[3474]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:24.862275 systemd-logind[1206]: New session 18 of user core. May 16 00:35:24.862744 systemd[1]: Started session-18.scope. May 16 00:35:24.970230 sshd[3474]: pam_unix(sshd:session): session closed for user core May 16 00:35:24.973540 systemd[1]: sshd@17-10.0.0.31:22-10.0.0.1:56936.service: Deactivated successfully. May 16 00:35:24.974271 systemd[1]: session-18.scope: Deactivated successfully. May 16 00:35:24.974786 systemd-logind[1206]: Session 18 logged out. Waiting for processes to exit. May 16 00:35:24.975734 systemd-logind[1206]: Removed session 18. May 16 00:35:29.980084 systemd[1]: Started sshd@18-10.0.0.31:22-10.0.0.1:56946.service. May 16 00:35:30.019281 sshd[3490]: Accepted publickey for core from 10.0.0.1 port 56946 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:30.020938 sshd[3490]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:30.024786 systemd-logind[1206]: New session 19 of user core. May 16 00:35:30.026159 systemd[1]: Started session-19.scope. May 16 00:35:30.139122 sshd[3490]: pam_unix(sshd:session): session closed for user core May 16 00:35:30.141615 systemd[1]: sshd@18-10.0.0.31:22-10.0.0.1:56946.service: Deactivated successfully. May 16 00:35:30.142383 systemd[1]: session-19.scope: Deactivated successfully. May 16 00:35:30.143078 systemd-logind[1206]: Session 19 logged out. Waiting for processes to exit. May 16 00:35:30.143888 systemd-logind[1206]: Removed session 19. May 16 00:35:35.144141 systemd[1]: Started sshd@19-10.0.0.31:22-10.0.0.1:48870.service. May 16 00:35:35.185491 sshd[3505]: Accepted publickey for core from 10.0.0.1 port 48870 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:35.187146 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:35.192355 systemd-logind[1206]: New session 20 of user core. May 16 00:35:35.196096 systemd[1]: Started session-20.scope. May 16 00:35:35.312691 sshd[3505]: pam_unix(sshd:session): session closed for user core May 16 00:35:35.316094 systemd-logind[1206]: Session 20 logged out. Waiting for processes to exit. May 16 00:35:35.316360 systemd[1]: sshd@19-10.0.0.31:22-10.0.0.1:48870.service: Deactivated successfully. May 16 00:35:35.317048 systemd[1]: session-20.scope: Deactivated successfully. 
May 16 00:35:35.317651 systemd-logind[1206]: Removed session 20. May 16 00:35:40.317552 systemd[1]: Started sshd@20-10.0.0.31:22-10.0.0.1:48872.service. May 16 00:35:40.355366 sshd[3518]: Accepted publickey for core from 10.0.0.1 port 48872 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:40.356553 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:40.359828 systemd-logind[1206]: New session 21 of user core. May 16 00:35:40.360698 systemd[1]: Started session-21.scope. May 16 00:35:40.475883 sshd[3518]: pam_unix(sshd:session): session closed for user core May 16 00:35:40.479705 systemd[1]: Started sshd@21-10.0.0.31:22-10.0.0.1:48886.service. May 16 00:35:40.480236 systemd[1]: sshd@20-10.0.0.31:22-10.0.0.1:48872.service: Deactivated successfully. May 16 00:35:40.480990 systemd[1]: session-21.scope: Deactivated successfully. May 16 00:35:40.481614 systemd-logind[1206]: Session 21 logged out. Waiting for processes to exit. May 16 00:35:40.483694 systemd-logind[1206]: Removed session 21. May 16 00:35:40.518118 sshd[3530]: Accepted publickey for core from 10.0.0.1 port 48886 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:40.519354 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:40.523689 systemd-logind[1206]: New session 22 of user core. May 16 00:35:40.524526 systemd[1]: Started session-22.scope. May 16 00:35:42.387338 kubelet[1903]: E0516 00:35:42.387306 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:42.399929 env[1214]: time="2025-05-16T00:35:42.399875486Z" level=info msg="StopContainer for \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\" with timeout 30 (s)" May 16 00:35:42.400292 env[1214]: time="2025-05-16T00:35:42.400229347Z" level=info msg="Stop container \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\" with signal terminated" May 16 00:35:42.410590 systemd[1]: cri-containerd-3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb.scope: Deactivated successfully. May 16 00:35:42.427896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb-rootfs.mount: Deactivated successfully. 
May 16 00:35:42.435285 env[1214]: time="2025-05-16T00:35:42.435228952Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 16 00:35:42.440853 env[1214]: time="2025-05-16T00:35:42.440810840Z" level=info msg="StopContainer for \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\" with timeout 2 (s)" May 16 00:35:42.441074 env[1214]: time="2025-05-16T00:35:42.441040788Z" level=info msg="Stop container \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\" with signal terminated" May 16 00:35:42.441544 env[1214]: time="2025-05-16T00:35:42.441510241Z" level=info msg="shim disconnected" id=3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb May 16 00:35:42.441719 env[1214]: time="2025-05-16T00:35:42.441699351Z" level=warning msg="cleaning up after shim disconnected" id=3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb namespace=k8s.io May 16 00:35:42.441806 env[1214]: time="2025-05-16T00:35:42.441790906Z" level=info msg="cleaning up dead shim" May 16 00:35:42.447365 systemd-networkd[1045]: lxc_health: Link DOWN May 16 00:35:42.447370 systemd-networkd[1045]: lxc_health: Lost carrier May 16 00:35:42.450457 env[1214]: time="2025-05-16T00:35:42.450420864Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3580 runtime=io.containerd.runc.v2\n" May 16 00:35:42.452733 env[1214]: time="2025-05-16T00:35:42.452694737Z" level=info msg="StopContainer for \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\" returns successfully" May 16 00:35:42.453547 env[1214]: time="2025-05-16T00:35:42.453491292Z" level=info msg="StopPodSandbox for \"05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1\"" May 16 00:35:42.453614 env[1214]: time="2025-05-16T00:35:42.453575928Z" level=info msg="Container to stop \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.455289 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1-shm.mount: Deactivated successfully. May 16 00:35:42.461704 systemd[1]: cri-containerd-05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1.scope: Deactivated successfully. May 16 00:35:42.474596 systemd[1]: cri-containerd-cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821.scope: Deactivated successfully. May 16 00:35:42.474911 systemd[1]: cri-containerd-cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821.scope: Consumed 6.533s CPU time. May 16 00:35:42.487414 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1-rootfs.mount: Deactivated successfully. 
May 16 00:35:42.493371 env[1214]: time="2025-05-16T00:35:42.493323468Z" level=info msg="shim disconnected" id=05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1 May 16 00:35:42.493371 env[1214]: time="2025-05-16T00:35:42.493371585Z" level=warning msg="cleaning up after shim disconnected" id=05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1 namespace=k8s.io May 16 00:35:42.493547 env[1214]: time="2025-05-16T00:35:42.493380705Z" level=info msg="cleaning up dead shim" May 16 00:35:42.496565 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821-rootfs.mount: Deactivated successfully. May 16 00:35:42.501441 env[1214]: time="2025-05-16T00:35:42.501394537Z" level=info msg="shim disconnected" id=cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821 May 16 00:35:42.501638 env[1214]: time="2025-05-16T00:35:42.501443895Z" level=warning msg="cleaning up after shim disconnected" id=cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821 namespace=k8s.io May 16 00:35:42.501638 env[1214]: time="2025-05-16T00:35:42.501453094Z" level=info msg="cleaning up dead shim" May 16 00:35:42.503721 env[1214]: time="2025-05-16T00:35:42.503682650Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3629 runtime=io.containerd.runc.v2\n" May 16 00:35:42.504028 env[1214]: time="2025-05-16T00:35:42.504003632Z" level=info msg="TearDown network for sandbox \"05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1\" successfully" May 16 00:35:42.504073 env[1214]: time="2025-05-16T00:35:42.504030510Z" level=info msg="StopPodSandbox for \"05831c42c57604c88dc86481f967a4ad1ee9b24fa8312782878beee0b03648d1\" returns successfully" May 16 00:35:42.515400 env[1214]: time="2025-05-16T00:35:42.515359918Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3640 runtime=io.containerd.runc.v2\n" May 16 00:35:42.517455 env[1214]: time="2025-05-16T00:35:42.517414283Z" level=info msg="StopContainer for \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\" returns successfully" May 16 00:35:42.518225 env[1214]: time="2025-05-16T00:35:42.518180320Z" level=info msg="StopPodSandbox for \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\"" May 16 00:35:42.518283 env[1214]: time="2025-05-16T00:35:42.518260356Z" level=info msg="Container to stop \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.518283 env[1214]: time="2025-05-16T00:35:42.518276515Z" level=info msg="Container to stop \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.518334 env[1214]: time="2025-05-16T00:35:42.518288834Z" level=info msg="Container to stop \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.518334 env[1214]: time="2025-05-16T00:35:42.518301433Z" level=info msg="Container to stop \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.518334 env[1214]: time="2025-05-16T00:35:42.518313153Z" level=info msg="Container to stop 
\"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 16 00:35:42.522955 systemd[1]: cri-containerd-ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441.scope: Deactivated successfully. May 16 00:35:42.547716 env[1214]: time="2025-05-16T00:35:42.547671993Z" level=info msg="shim disconnected" id=ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441 May 16 00:35:42.548593 env[1214]: time="2025-05-16T00:35:42.548566023Z" level=warning msg="cleaning up after shim disconnected" id=ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441 namespace=k8s.io May 16 00:35:42.548706 env[1214]: time="2025-05-16T00:35:42.548690176Z" level=info msg="cleaning up dead shim" May 16 00:35:42.556119 env[1214]: time="2025-05-16T00:35:42.556086243Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3673 runtime=io.containerd.runc.v2\n" May 16 00:35:42.556674 env[1214]: time="2025-05-16T00:35:42.556635413Z" level=info msg="TearDown network for sandbox \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" successfully" May 16 00:35:42.556773 env[1214]: time="2025-05-16T00:35:42.556756366Z" level=info msg="StopPodSandbox for \"ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441\" returns successfully" May 16 00:35:42.567001 kubelet[1903]: I0516 00:35:42.566974 1903 scope.go:117] "RemoveContainer" containerID="cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821" May 16 00:35:42.569302 env[1214]: time="2025-05-16T00:35:42.569256228Z" level=info msg="RemoveContainer for \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\"" May 16 00:35:42.573850 env[1214]: time="2025-05-16T00:35:42.573813174Z" level=info msg="RemoveContainer for \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\" returns successfully" May 16 00:35:42.574246 kubelet[1903]: I0516 00:35:42.574211 1903 scope.go:117] "RemoveContainer" containerID="ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c" May 16 00:35:42.575204 env[1214]: time="2025-05-16T00:35:42.575163218Z" level=info msg="RemoveContainer for \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\"" May 16 00:35:42.577492 env[1214]: time="2025-05-16T00:35:42.577450171Z" level=info msg="RemoveContainer for \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\" returns successfully" May 16 00:35:42.577687 kubelet[1903]: I0516 00:35:42.577659 1903 scope.go:117] "RemoveContainer" containerID="71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f" May 16 00:35:42.578771 env[1214]: time="2025-05-16T00:35:42.578743218Z" level=info msg="RemoveContainer for \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\"" May 16 00:35:42.583135 env[1214]: time="2025-05-16T00:35:42.582908066Z" level=info msg="RemoveContainer for \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\" returns successfully" May 16 00:35:42.583333 kubelet[1903]: I0516 00:35:42.583312 1903 scope.go:117] "RemoveContainer" containerID="4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b" May 16 00:35:42.584450 env[1214]: time="2025-05-16T00:35:42.584423701Z" level=info msg="RemoveContainer for \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\"" May 16 00:35:42.586762 env[1214]: time="2025-05-16T00:35:42.586725213Z" level=info msg="RemoveContainer for 
\"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\" returns successfully" May 16 00:35:42.587037 kubelet[1903]: I0516 00:35:42.586996 1903 scope.go:117] "RemoveContainer" containerID="0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a" May 16 00:35:42.587976 env[1214]: time="2025-05-16T00:35:42.587951864Z" level=info msg="RemoveContainer for \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\"" May 16 00:35:42.590392 env[1214]: time="2025-05-16T00:35:42.590361130Z" level=info msg="RemoveContainer for \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\" returns successfully" May 16 00:35:42.590750 kubelet[1903]: I0516 00:35:42.590725 1903 scope.go:117] "RemoveContainer" containerID="cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821" May 16 00:35:42.590989 env[1214]: time="2025-05-16T00:35:42.590909699Z" level=error msg="ContainerStatus for \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\": not found" May 16 00:35:42.591787 kubelet[1903]: E0516 00:35:42.591741 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\": not found" containerID="cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821" May 16 00:35:42.591946 kubelet[1903]: I0516 00:35:42.591782 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821"} err="failed to get container status \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb470b52a741c59f84fa25fc8210a0b08b290d35fd6bcd2bc7ab6e9753a8a821\": not found" May 16 00:35:42.591946 kubelet[1903]: I0516 00:35:42.591939 1903 scope.go:117] "RemoveContainer" containerID="ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c" May 16 00:35:42.592216 env[1214]: time="2025-05-16T00:35:42.592145550Z" level=error msg="ContainerStatus for \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\": not found" May 16 00:35:42.592334 kubelet[1903]: E0516 00:35:42.592316 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\": not found" containerID="ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c" May 16 00:35:42.592370 kubelet[1903]: I0516 00:35:42.592348 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c"} err="failed to get container status \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"ef6adf1a6624f51eb0ce3a065e7645c62784958c1737bdc145edde2460f29c6c\": not found" May 16 00:35:42.592370 kubelet[1903]: I0516 00:35:42.592361 1903 scope.go:117] "RemoveContainer" 
containerID="71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f" May 16 00:35:42.592578 env[1214]: time="2025-05-16T00:35:42.592519849Z" level=error msg="ContainerStatus for \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\": not found" May 16 00:35:42.592676 kubelet[1903]: E0516 00:35:42.592646 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\": not found" containerID="71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f" May 16 00:35:42.592676 kubelet[1903]: I0516 00:35:42.592664 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f"} err="failed to get container status \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"71a4435549e3cda68d5f535e21222ff72132b5f69c50343ee60153850f2aee3f\": not found" May 16 00:35:42.592734 kubelet[1903]: I0516 00:35:42.592687 1903 scope.go:117] "RemoveContainer" containerID="4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b" May 16 00:35:42.592876 env[1214]: time="2025-05-16T00:35:42.592823192Z" level=error msg="ContainerStatus for \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\": not found" May 16 00:35:42.592949 kubelet[1903]: E0516 00:35:42.592936 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\": not found" containerID="4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b" May 16 00:35:42.592976 kubelet[1903]: I0516 00:35:42.592953 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b"} err="failed to get container status \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\": rpc error: code = NotFound desc = an error occurred when try to find container \"4fabd29a19d44a4746dcb557c0678baec68b0bf31d4f8024d789af1f1f28910b\": not found" May 16 00:35:42.592976 kubelet[1903]: I0516 00:35:42.592967 1903 scope.go:117] "RemoveContainer" containerID="0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a" May 16 00:35:42.593150 env[1214]: time="2025-05-16T00:35:42.593107216Z" level=error msg="ContainerStatus for \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\": not found" May 16 00:35:42.593239 kubelet[1903]: E0516 00:35:42.593225 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\": not found" 
containerID="0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a" May 16 00:35:42.593265 kubelet[1903]: I0516 00:35:42.593243 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a"} err="failed to get container status \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\": rpc error: code = NotFound desc = an error occurred when try to find container \"0ae6f7e5b151ef4fcf6c86a8814ce327f4d969d43503c091f349b84e2518668a\": not found" May 16 00:35:42.593265 kubelet[1903]: I0516 00:35:42.593254 1903 scope.go:117] "RemoveContainer" containerID="3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb" May 16 00:35:42.594187 env[1214]: time="2025-05-16T00:35:42.594143878Z" level=info msg="RemoveContainer for \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\"" May 16 00:35:42.597025 env[1214]: time="2025-05-16T00:35:42.596986360Z" level=info msg="RemoveContainer for \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\" returns successfully" May 16 00:35:42.597274 kubelet[1903]: I0516 00:35:42.597163 1903 scope.go:117] "RemoveContainer" containerID="3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb" May 16 00:35:42.597447 env[1214]: time="2025-05-16T00:35:42.597393337Z" level=error msg="ContainerStatus for \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\": not found" May 16 00:35:42.597565 kubelet[1903]: E0516 00:35:42.597539 1903 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\": not found" containerID="3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb" May 16 00:35:42.597608 kubelet[1903]: I0516 00:35:42.597565 1903 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb"} err="failed to get container status \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cecb128cb2424bc141f408574475767bb42978a7eea9599071326bb3fde64cb\": not found" May 16 00:35:42.671955 kubelet[1903]: I0516 00:35:42.671850 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-lib-modules\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.671955 kubelet[1903]: I0516 00:35:42.671897 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-hubble-tls\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.671955 kubelet[1903]: I0516 00:35:42.671928 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-xtables-lock\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: 
\"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.671955 kubelet[1903]: I0516 00:35:42.671948 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-etc-cni-netd\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.672132 kubelet[1903]: I0516 00:35:42.671964 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cni-path\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.672132 kubelet[1903]: I0516 00:35:42.671982 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d-cilium-config-path\") pod \"b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d\" (UID: \"b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d\") " May 16 00:35:42.672132 kubelet[1903]: I0516 00:35:42.672009 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-run\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.672132 kubelet[1903]: I0516 00:35:42.672024 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-host-proc-sys-net\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.672132 kubelet[1903]: I0516 00:35:42.672040 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrnsw\" (UniqueName: \"kubernetes.io/projected/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-kube-api-access-wrnsw\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.672132 kubelet[1903]: I0516 00:35:42.672056 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x9tnw\" (UniqueName: \"kubernetes.io/projected/b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d-kube-api-access-x9tnw\") pod \"b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d\" (UID: \"b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d\") " May 16 00:35:42.672357 kubelet[1903]: I0516 00:35:42.672080 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-bpf-maps\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.672357 kubelet[1903]: I0516 00:35:42.672098 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-clustermesh-secrets\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.672357 kubelet[1903]: I0516 00:35:42.672113 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-hostproc\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " 
May 16 00:35:42.672357 kubelet[1903]: I0516 00:35:42.672126 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-host-proc-sys-kernel\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.672357 kubelet[1903]: I0516 00:35:42.672149 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-config-path\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.672357 kubelet[1903]: I0516 00:35:42.672164 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-cgroup\") pod \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\" (UID: \"4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0\") " May 16 00:35:42.674609 kubelet[1903]: I0516 00:35:42.674577 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.674938 kubelet[1903]: I0516 00:35:42.674719 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.674938 kubelet[1903]: I0516 00:35:42.674759 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.674938 kubelet[1903]: I0516 00:35:42.674916 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.675070 kubelet[1903]: I0516 00:35:42.674947 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.676206 kubelet[1903]: I0516 00:35:42.675812 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d" (UID: "b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:35:42.676206 kubelet[1903]: I0516 00:35:42.675867 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cni-path" (OuterVolumeSpecName: "cni-path") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.677962 kubelet[1903]: I0516 00:35:42.677931 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-hostproc" (OuterVolumeSpecName: "hostproc") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.678041 kubelet[1903]: I0516 00:35:42.677962 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.678041 kubelet[1903]: I0516 00:35:42.677943 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.678041 kubelet[1903]: I0516 00:35:42.677977 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:42.679588 kubelet[1903]: I0516 00:35:42.679553 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:35:42.679669 kubelet[1903]: I0516 00:35:42.679555 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-kube-api-access-wrnsw" (OuterVolumeSpecName: "kube-api-access-wrnsw") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "kube-api-access-wrnsw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:35:42.679771 kubelet[1903]: I0516 00:35:42.679734 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:35:42.679898 kubelet[1903]: I0516 00:35:42.679871 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" (UID: "4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:35:42.679940 kubelet[1903]: I0516 00:35:42.679914 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d-kube-api-access-x9tnw" (OuterVolumeSpecName: "kube-api-access-x9tnw") pod "b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d" (UID: "b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d"). InnerVolumeSpecName "kube-api-access-x9tnw". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:35:42.772719 kubelet[1903]: I0516 00:35:42.772673 1903 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772719 kubelet[1903]: I0516 00:35:42.772705 1903 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772719 kubelet[1903]: I0516 00:35:42.772717 1903 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772719 kubelet[1903]: I0516 00:35:42.772726 1903 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772934 kubelet[1903]: I0516 00:35:42.772757 1903 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772934 kubelet[1903]: I0516 00:35:42.772768 1903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wrnsw\" (UniqueName: \"kubernetes.io/projected/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-kube-api-access-wrnsw\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772934 kubelet[1903]: I0516 00:35:42.772778 1903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x9tnw\" (UniqueName: \"kubernetes.io/projected/b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d-kube-api-access-x9tnw\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772934 kubelet[1903]: I0516 00:35:42.772786 1903 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-clustermesh-secrets\") on 
node \"localhost\" DevicePath \"\"" May 16 00:35:42.772934 kubelet[1903]: I0516 00:35:42.772794 1903 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772934 kubelet[1903]: I0516 00:35:42.772801 1903 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772934 kubelet[1903]: I0516 00:35:42.772810 1903 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.772934 kubelet[1903]: I0516 00:35:42.772818 1903 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.773104 kubelet[1903]: I0516 00:35:42.772826 1903 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.773104 kubelet[1903]: I0516 00:35:42.772834 1903 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.773104 kubelet[1903]: I0516 00:35:42.772842 1903 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.773104 kubelet[1903]: I0516 00:35:42.772851 1903 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 00:35:42.874217 systemd[1]: Removed slice kubepods-besteffort-podb1fb436f_32ba_46cc_9c38_fb4bb6b58d6d.slice. May 16 00:35:43.403912 systemd[1]: var-lib-kubelet-pods-b1fb436f\x2d32ba\x2d46cc\x2d9c38\x2dfb4bb6b58d6d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx9tnw.mount: Deactivated successfully. May 16 00:35:43.404015 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441-rootfs.mount: Deactivated successfully. May 16 00:35:43.404074 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba0e26da10607b6712166e625cedfdd4f1c6b0a266fb659300978a33c0519441-shm.mount: Deactivated successfully. May 16 00:35:43.404122 systemd[1]: var-lib-kubelet-pods-4c00dc12\x2d91b7\x2d4a8b\x2d88d1\x2d5e0ff66acaa0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwrnsw.mount: Deactivated successfully. May 16 00:35:43.404172 systemd[1]: var-lib-kubelet-pods-4c00dc12\x2d91b7\x2d4a8b\x2d88d1\x2d5e0ff66acaa0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:35:43.404239 systemd[1]: var-lib-kubelet-pods-4c00dc12\x2d91b7\x2d4a8b\x2d88d1\x2d5e0ff66acaa0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 16 00:35:43.577748 systemd[1]: Removed slice kubepods-burstable-pod4c00dc12_91b7_4a8b_88d1_5e0ff66acaa0.slice. May 16 00:35:43.577834 systemd[1]: kubepods-burstable-pod4c00dc12_91b7_4a8b_88d1_5e0ff66acaa0.slice: Consumed 6.801s CPU time. May 16 00:35:44.366281 sshd[3530]: pam_unix(sshd:session): session closed for user core May 16 00:35:44.377013 systemd[1]: Started sshd@22-10.0.0.31:22-10.0.0.1:55134.service. May 16 00:35:44.380305 systemd[1]: sshd@21-10.0.0.31:22-10.0.0.1:48886.service: Deactivated successfully. May 16 00:35:44.382032 systemd[1]: session-22.scope: Deactivated successfully. May 16 00:35:44.382176 systemd[1]: session-22.scope: Consumed 1.203s CPU time. May 16 00:35:44.383871 systemd-logind[1206]: Session 22 logged out. Waiting for processes to exit. May 16 00:35:44.385017 systemd-logind[1206]: Removed session 22. May 16 00:35:44.390288 kubelet[1903]: I0516 00:35:44.390253 1903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" path="/var/lib/kubelet/pods/4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0/volumes" May 16 00:35:44.391066 kubelet[1903]: I0516 00:35:44.391044 1903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d" path="/var/lib/kubelet/pods/b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d/volumes" May 16 00:35:44.428660 sshd[3691]: Accepted publickey for core from 10.0.0.1 port 55134 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:44.429882 sshd[3691]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:44.435147 systemd[1]: Started session-23.scope. May 16 00:35:44.435726 systemd-logind[1206]: New session 23 of user core. May 16 00:35:45.380011 sshd[3691]: pam_unix(sshd:session): session closed for user core May 16 00:35:45.387066 systemd[1]: Started sshd@23-10.0.0.31:22-10.0.0.1:55144.service. May 16 00:35:45.389130 systemd[1]: sshd@22-10.0.0.31:22-10.0.0.1:55134.service: Deactivated successfully. May 16 00:35:45.389787 systemd[1]: session-23.scope: Deactivated successfully. May 16 00:35:45.393381 systemd-logind[1206]: Session 23 logged out. Waiting for processes to exit. May 16 00:35:45.396690 systemd-logind[1206]: Removed session 23. 
May 16 00:35:45.421754 kubelet[1903]: E0516 00:35:45.421697 1903 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" containerName="mount-bpf-fs" May 16 00:35:45.421754 kubelet[1903]: E0516 00:35:45.421738 1903 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" containerName="cilium-agent" May 16 00:35:45.421754 kubelet[1903]: E0516 00:35:45.421746 1903 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" containerName="mount-cgroup" May 16 00:35:45.421754 kubelet[1903]: E0516 00:35:45.421752 1903 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" containerName="apply-sysctl-overwrites" May 16 00:35:45.421754 kubelet[1903]: E0516 00:35:45.421758 1903 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d" containerName="cilium-operator" May 16 00:35:45.421754 kubelet[1903]: E0516 00:35:45.421763 1903 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" containerName="clean-cilium-state" May 16 00:35:45.422166 kubelet[1903]: I0516 00:35:45.421794 1903 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c00dc12-91b7-4a8b-88d1-5e0ff66acaa0" containerName="cilium-agent" May 16 00:35:45.422166 kubelet[1903]: I0516 00:35:45.421802 1903 memory_manager.go:354] "RemoveStaleState removing state" podUID="b1fb436f-32ba-46cc-9c38-fb4bb6b58d6d" containerName="cilium-operator" May 16 00:35:45.427000 systemd[1]: Created slice kubepods-burstable-podb79f224b_1263_437f_a2ae_3a17c2755186.slice. May 16 00:35:45.428960 kubelet[1903]: W0516 00:35:45.428931 1903 reflector.go:561] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 16 00:35:45.429121 kubelet[1903]: E0516 00:35:45.429098 1903 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"hubble-server-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"hubble-server-certs\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 16 00:35:45.429206 kubelet[1903]: W0516 00:35:45.428981 1903 reflector.go:561] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:localhost" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node 'localhost' and this object May 16 00:35:45.429288 kubelet[1903]: E0516 00:35:45.429269 1903 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"cilium-ipsec-keys\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"cilium-ipsec-keys\" is forbidden: User \"system:node:localhost\" cannot list resource \"secrets\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'localhost' and this object" logger="UnhandledError" May 16 00:35:45.434278 sshd[3704]: Accepted publickey for core from 10.0.0.1 port 55144 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:45.435661 
sshd[3704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 16 00:35:45.439555 systemd[1]: Started session-24.scope. May 16 00:35:45.439838 systemd-logind[1206]: New session 24 of user core. May 16 00:35:45.567640 sshd[3704]: pam_unix(sshd:session): session closed for user core May 16 00:35:45.570977 systemd[1]: Started sshd@24-10.0.0.31:22-10.0.0.1:55146.service. May 16 00:35:45.572429 systemd-logind[1206]: Session 24 logged out. Waiting for processes to exit. May 16 00:35:45.572629 systemd[1]: sshd@23-10.0.0.31:22-10.0.0.1:55144.service: Deactivated successfully. May 16 00:35:45.573333 systemd[1]: session-24.scope: Deactivated successfully. May 16 00:35:45.573979 systemd-logind[1206]: Removed session 24. May 16 00:35:45.585085 kubelet[1903]: E0516 00:35:45.584967 1903 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-djkqq lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-5lhgv" podUID="b79f224b-1263-437f-a2ae-3a17c2755186" May 16 00:35:45.591683 kubelet[1903]: I0516 00:35:45.591644 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-lib-modules\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.591683 kubelet[1903]: I0516 00:35:45.591686 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-etc-cni-netd\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.591884 kubelet[1903]: I0516 00:35:45.591704 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-xtables-lock\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.591884 kubelet[1903]: I0516 00:35:45.591719 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-config-path\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.591884 kubelet[1903]: I0516 00:35:45.591745 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-hubble-tls\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.591884 kubelet[1903]: I0516 00:35:45.591762 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djkqq\" (UniqueName: \"kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-kube-api-access-djkqq\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.591884 kubelet[1903]: I0516 
00:35:45.591777 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-run\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.591884 kubelet[1903]: I0516 00:35:45.591792 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-cgroup\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.592023 kubelet[1903]: I0516 00:35:45.591808 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cni-path\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.592023 kubelet[1903]: I0516 00:35:45.591823 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-clustermesh-secrets\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.592023 kubelet[1903]: I0516 00:35:45.591843 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-bpf-maps\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.592023 kubelet[1903]: I0516 00:35:45.591858 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-host-proc-sys-net\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.592023 kubelet[1903]: I0516 00:35:45.591875 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-host-proc-sys-kernel\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.592023 kubelet[1903]: I0516 00:35:45.591892 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-hostproc\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.592167 kubelet[1903]: I0516 00:35:45.591907 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-ipsec-secrets\") pod \"cilium-5lhgv\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " pod="kube-system/cilium-5lhgv" May 16 00:35:45.616102 sshd[3717]: Accepted publickey for core from 10.0.0.1 port 55146 ssh2: RSA SHA256:czXyODm5lEdSCdxgc4UKYE1H3sjGZqNxHBxH/SPqyp4 May 16 00:35:45.617451 sshd[3717]: pam_unix(sshd:session): session opened for user 
core(uid=500) by (uid=0) May 16 00:35:45.620882 systemd-logind[1206]: New session 25 of user core. May 16 00:35:45.621727 systemd[1]: Started session-25.scope. May 16 00:35:46.446224 kubelet[1903]: E0516 00:35:46.446144 1903 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:35:46.694686 kubelet[1903]: E0516 00:35:46.694633 1903 projected.go:263] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition May 16 00:35:46.694686 kubelet[1903]: E0516 00:35:46.694674 1903 projected.go:194] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-5lhgv: failed to sync secret cache: timed out waiting for the condition May 16 00:35:46.695400 kubelet[1903]: E0516 00:35:46.695362 1903 secret.go:189] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition May 16 00:35:46.696189 kubelet[1903]: E0516 00:35:46.696152 1903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-hubble-tls podName:b79f224b-1263-437f-a2ae-3a17c2755186 nodeName:}" failed. No retries permitted until 2025-05-16 00:35:47.196125171 +0000 UTC m=+80.893581275 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-hubble-tls") pod "cilium-5lhgv" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186") : failed to sync secret cache: timed out waiting for the condition May 16 00:35:46.696300 kubelet[1903]: E0516 00:35:46.696244 1903 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-ipsec-secrets podName:b79f224b-1263-437f-a2ae-3a17c2755186 nodeName:}" failed. No retries permitted until 2025-05-16 00:35:47.196222527 +0000 UTC m=+80.893678631 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-ipsec-secrets") pod "cilium-5lhgv" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186") : failed to sync secret cache: timed out waiting for the condition May 16 00:35:46.700364 kubelet[1903]: I0516 00:35:46.700332 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-config-path\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.700414 kubelet[1903]: I0516 00:35:46.700367 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-run\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.700439 kubelet[1903]: I0516 00:35:46.700417 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-xtables-lock\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.700483 kubelet[1903]: I0516 00:35:46.700437 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-bpf-maps\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.700508 kubelet[1903]: I0516 00:35:46.700479 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.700533 kubelet[1903]: I0516 00:35:46.700514 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.700596 kubelet[1903]: I0516 00:35:46.700577 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.700948 kubelet[1903]: I0516 00:35:46.700914 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djkqq\" (UniqueName: \"kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-kube-api-access-djkqq\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.700948 kubelet[1903]: I0516 00:35:46.700947 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cni-path\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.701008 kubelet[1903]: I0516 00:35:46.700966 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-hostproc\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.701008 kubelet[1903]: I0516 00:35:46.700984 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-lib-modules\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.701008 kubelet[1903]: I0516 00:35:46.700997 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cni-path" (OuterVolumeSpecName: "cni-path") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.701069 kubelet[1903]: I0516 00:35:46.701001 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-cgroup\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.701069 kubelet[1903]: I0516 00:35:46.701031 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-clustermesh-secrets\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.701069 kubelet[1903]: I0516 00:35:46.701035 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-hostproc" (OuterVolumeSpecName: "hostproc") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.701069 kubelet[1903]: I0516 00:35:46.701045 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-host-proc-sys-net\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.701069 kubelet[1903]: I0516 00:35:46.701055 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.701200 kubelet[1903]: I0516 00:35:46.701060 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-etc-cni-netd\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.701200 kubelet[1903]: I0516 00:35:46.701070 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.701200 kubelet[1903]: I0516 00:35:46.701075 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-host-proc-sys-kernel\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:46.701200 kubelet[1903]: I0516 00:35:46.701084 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.701200 kubelet[1903]: I0516 00:35:46.701112 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.701310 kubelet[1903]: I0516 00:35:46.701135 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 16 00:35:46.701310 kubelet[1903]: I0516 00:35:46.701164 1903 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.701310 kubelet[1903]: I0516 00:35:46.701196 1903 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.701310 kubelet[1903]: I0516 00:35:46.701208 1903 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.701310 kubelet[1903]: I0516 00:35:46.701218 1903 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.701310 kubelet[1903]: I0516 00:35:46.701226 1903 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-run\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.701310 kubelet[1903]: I0516 00:35:46.701235 1903 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.701310 kubelet[1903]: I0516 00:35:46.701243 1903 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cni-path\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.701483 kubelet[1903]: I0516 00:35:46.701250 1903 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-hostproc\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.701483 kubelet[1903]: I0516 00:35:46.701261 1903 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-lib-modules\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.701483 kubelet[1903]: I0516 00:35:46.701281 1903 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.702729 kubelet[1903]: I0516 00:35:46.702681 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 16 00:35:46.705072 systemd[1]: var-lib-kubelet-pods-b79f224b\x2d1263\x2d437f\x2da2ae\x2d3a17c2755186-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 16 00:35:46.706046 kubelet[1903]: I0516 00:35:46.706010 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:35:46.706234 kubelet[1903]: I0516 00:35:46.706176 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-kube-api-access-djkqq" (OuterVolumeSpecName: "kube-api-access-djkqq") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "kube-api-access-djkqq". PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:35:46.706928 systemd[1]: var-lib-kubelet-pods-b79f224b\x2d1263\x2d437f\x2da2ae\x2d3a17c2755186-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddjkqq.mount: Deactivated successfully. May 16 00:35:46.802465 kubelet[1903]: I0516 00:35:46.802414 1903 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.802465 kubelet[1903]: I0516 00:35:46.802452 1903 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 16 00:35:46.802465 kubelet[1903]: I0516 00:35:46.802462 1903 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djkqq\" (UniqueName: \"kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-kube-api-access-djkqq\") on node \"localhost\" DevicePath \"\"" May 16 00:35:47.306166 kubelet[1903]: I0516 00:35:47.306119 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-hubble-tls\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:47.306166 kubelet[1903]: I0516 00:35:47.306170 1903 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-ipsec-secrets\") pod \"b79f224b-1263-437f-a2ae-3a17c2755186\" (UID: \"b79f224b-1263-437f-a2ae-3a17c2755186\") " May 16 00:35:47.309124 kubelet[1903]: I0516 00:35:47.309062 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 16 00:35:47.309883 kubelet[1903]: I0516 00:35:47.309861 1903 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b79f224b-1263-437f-a2ae-3a17c2755186" (UID: "b79f224b-1263-437f-a2ae-3a17c2755186"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 16 00:35:47.310021 systemd[1]: var-lib-kubelet-pods-b79f224b\x2d1263\x2d437f\x2da2ae\x2d3a17c2755186-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. May 16 00:35:47.310113 systemd[1]: var-lib-kubelet-pods-b79f224b\x2d1263\x2d437f\x2da2ae\x2d3a17c2755186-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 16 00:35:47.386914 kubelet[1903]: E0516 00:35:47.386864 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:47.407109 kubelet[1903]: I0516 00:35:47.407064 1903 reconciler_common.go:293] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b79f224b-1263-437f-a2ae-3a17c2755186-cilium-ipsec-secrets\") on node \"localhost\" DevicePath \"\"" May 16 00:35:47.407109 kubelet[1903]: I0516 00:35:47.407101 1903 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b79f224b-1263-437f-a2ae-3a17c2755186-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 16 00:35:47.593242 systemd[1]: Removed slice kubepods-burstable-podb79f224b_1263_437f_a2ae_3a17c2755186.slice. May 16 00:35:47.636002 systemd[1]: Created slice kubepods-burstable-pod48de72d4_148b_48b4_aef9_bb230e2f17dc.slice. May 16 00:35:47.676829 kubelet[1903]: I0516 00:35:47.676780 1903 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-16T00:35:47Z","lastTransitionTime":"2025-05-16T00:35:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} May 16 00:35:47.808059 kubelet[1903]: I0516 00:35:47.808012 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-etc-cni-netd\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808059 kubelet[1903]: I0516 00:35:47.808066 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-bpf-maps\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808262 kubelet[1903]: I0516 00:35:47.808083 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-cni-path\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808262 kubelet[1903]: I0516 00:35:47.808101 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-xtables-lock\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808262 kubelet[1903]: I0516 00:35:47.808120 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-host-proc-sys-kernel\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808262 kubelet[1903]: I0516 00:35:47.808146 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/48de72d4-148b-48b4-aef9-bb230e2f17dc-hubble-tls\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808262 kubelet[1903]: I0516 00:35:47.808164 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2cj7f\" (UniqueName: \"kubernetes.io/projected/48de72d4-148b-48b4-aef9-bb230e2f17dc-kube-api-access-2cj7f\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808262 kubelet[1903]: I0516 00:35:47.808179 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-cilium-cgroup\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808421 kubelet[1903]: I0516 00:35:47.808214 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/48de72d4-148b-48b4-aef9-bb230e2f17dc-cilium-ipsec-secrets\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808421 kubelet[1903]: I0516 00:35:47.808230 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-host-proc-sys-net\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808421 kubelet[1903]: I0516 00:35:47.808245 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/48de72d4-148b-48b4-aef9-bb230e2f17dc-cilium-config-path\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808421 kubelet[1903]: I0516 00:35:47.808260 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-hostproc\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808421 kubelet[1903]: I0516 00:35:47.808285 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/48de72d4-148b-48b4-aef9-bb230e2f17dc-clustermesh-secrets\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808421 kubelet[1903]: I0516 00:35:47.808301 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-cilium-run\") pod \"cilium-s2nqb\" (UID: 
\"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.808553 kubelet[1903]: I0516 00:35:47.808315 1903 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/48de72d4-148b-48b4-aef9-bb230e2f17dc-lib-modules\") pod \"cilium-s2nqb\" (UID: \"48de72d4-148b-48b4-aef9-bb230e2f17dc\") " pod="kube-system/cilium-s2nqb" May 16 00:35:47.938641 kubelet[1903]: E0516 00:35:47.938585 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:47.939205 env[1214]: time="2025-05-16T00:35:47.939146286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s2nqb,Uid:48de72d4-148b-48b4-aef9-bb230e2f17dc,Namespace:kube-system,Attempt:0,}" May 16 00:35:47.958726 env[1214]: time="2025-05-16T00:35:47.958642817Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 16 00:35:47.958726 env[1214]: time="2025-05-16T00:35:47.958694694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 16 00:35:47.958913 env[1214]: time="2025-05-16T00:35:47.958706454Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 16 00:35:47.958913 env[1214]: time="2025-05-16T00:35:47.958850928Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598 pid=3748 runtime=io.containerd.runc.v2 May 16 00:35:47.970057 systemd[1]: Started cri-containerd-b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598.scope. May 16 00:35:48.020150 env[1214]: time="2025-05-16T00:35:48.020102048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-s2nqb,Uid:48de72d4-148b-48b4-aef9-bb230e2f17dc,Namespace:kube-system,Attempt:0,} returns sandbox id \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\"" May 16 00:35:48.021269 kubelet[1903]: E0516 00:35:48.020940 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:48.023706 env[1214]: time="2025-05-16T00:35:48.023672504Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 16 00:35:48.035261 env[1214]: time="2025-05-16T00:35:48.035206202Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"072c85cb9de550a5d1c51dded11c8e48d851da6f9d75a849f4cda5c68d8b149f\"" May 16 00:35:48.035706 env[1214]: time="2025-05-16T00:35:48.035675143Z" level=info msg="StartContainer for \"072c85cb9de550a5d1c51dded11c8e48d851da6f9d75a849f4cda5c68d8b149f\"" May 16 00:35:48.050383 systemd[1]: Started cri-containerd-072c85cb9de550a5d1c51dded11c8e48d851da6f9d75a849f4cda5c68d8b149f.scope. 
May 16 00:35:48.092314 env[1214]: time="2025-05-16T00:35:48.091920606Z" level=info msg="StartContainer for \"072c85cb9de550a5d1c51dded11c8e48d851da6f9d75a849f4cda5c68d8b149f\" returns successfully" May 16 00:35:48.100547 systemd[1]: cri-containerd-072c85cb9de550a5d1c51dded11c8e48d851da6f9d75a849f4cda5c68d8b149f.scope: Deactivated successfully. May 16 00:35:48.132233 env[1214]: time="2025-05-16T00:35:48.132084234Z" level=info msg="shim disconnected" id=072c85cb9de550a5d1c51dded11c8e48d851da6f9d75a849f4cda5c68d8b149f May 16 00:35:48.132233 env[1214]: time="2025-05-16T00:35:48.132236308Z" level=warning msg="cleaning up after shim disconnected" id=072c85cb9de550a5d1c51dded11c8e48d851da6f9d75a849f4cda5c68d8b149f namespace=k8s.io May 16 00:35:48.132462 env[1214]: time="2025-05-16T00:35:48.132246748Z" level=info msg="cleaning up dead shim" May 16 00:35:48.147985 env[1214]: time="2025-05-16T00:35:48.147932078Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3834 runtime=io.containerd.runc.v2\n" May 16 00:35:48.389148 kubelet[1903]: I0516 00:35:48.389105 1903 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b79f224b-1263-437f-a2ae-3a17c2755186" path="/var/lib/kubelet/pods/b79f224b-1263-437f-a2ae-3a17c2755186/volumes" May 16 00:35:48.592658 kubelet[1903]: E0516 00:35:48.592612 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:48.594860 env[1214]: time="2025-05-16T00:35:48.594817266Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 16 00:35:48.605006 env[1214]: time="2025-05-16T00:35:48.604965979Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e2f6e35bf78b42f2b3be798e1f9dee5f83518142a6efc1e584042b1e8e18b641\"" May 16 00:35:48.605608 env[1214]: time="2025-05-16T00:35:48.605566995Z" level=info msg="StartContainer for \"e2f6e35bf78b42f2b3be798e1f9dee5f83518142a6efc1e584042b1e8e18b641\"" May 16 00:35:48.620541 systemd[1]: Started cri-containerd-e2f6e35bf78b42f2b3be798e1f9dee5f83518142a6efc1e584042b1e8e18b641.scope. May 16 00:35:48.652920 env[1214]: time="2025-05-16T00:35:48.652822499Z" level=info msg="StartContainer for \"e2f6e35bf78b42f2b3be798e1f9dee5f83518142a6efc1e584042b1e8e18b641\" returns successfully" May 16 00:35:48.659535 systemd[1]: cri-containerd-e2f6e35bf78b42f2b3be798e1f9dee5f83518142a6efc1e584042b1e8e18b641.scope: Deactivated successfully. 
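The recurring dns.go "Nameserver limits exceeded" warnings are the kubelet trimming the node's resolver configuration before handing it to pod sandboxes: only the first few nameserver entries are propagated (three are kept here: 1.1.1.1, 1.0.0.1 and 8.8.8.8) and anything beyond that is dropped with this warning. A small sketch of that trimming, assuming /etc/resolv.conf as the source and a cap of three entries:

    // resolvconf_trim.go: sketch of capping a resolv.conf nameserver list the
    // way the kubelet's "Nameserver limits exceeded" warning describes.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    const maxNameservers = 3 // assumed cap; the log shows three entries applied

    func main() {
        f, err := os.Open("/etc/resolv.conf")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        var servers []string
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 2 && fields[0] == "nameserver" {
                servers = append(servers, fields[1])
            }
        }
        if len(servers) > maxNameservers {
            fmt.Printf("dropping %d nameserver(s), keeping: %v\n",
                len(servers)-maxNameservers, servers[:maxNameservers])
            return
        }
        fmt.Println("nameservers within limit:", servers)
    }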
May 16 00:35:48.677262 env[1214]: time="2025-05-16T00:35:48.677214840Z" level=info msg="shim disconnected" id=e2f6e35bf78b42f2b3be798e1f9dee5f83518142a6efc1e584042b1e8e18b641 May 16 00:35:48.677262 env[1214]: time="2025-05-16T00:35:48.677262438Z" level=warning msg="cleaning up after shim disconnected" id=e2f6e35bf78b42f2b3be798e1f9dee5f83518142a6efc1e584042b1e8e18b641 namespace=k8s.io May 16 00:35:48.677522 env[1214]: time="2025-05-16T00:35:48.677271878Z" level=info msg="cleaning up dead shim" May 16 00:35:48.683493 env[1214]: time="2025-05-16T00:35:48.683459510Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3895 runtime=io.containerd.runc.v2\n" May 16 00:35:49.387444 kubelet[1903]: E0516 00:35:49.387091 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:49.595755 kubelet[1903]: E0516 00:35:49.595723 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:49.597515 env[1214]: time="2025-05-16T00:35:49.597476389Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 16 00:35:49.613326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3233533835.mount: Deactivated successfully. May 16 00:35:49.615377 env[1214]: time="2025-05-16T00:35:49.615314315Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"293b2a20a4819092c1e203cf05e081668a2c783b8567306a98e8723fdd73de3b\"" May 16 00:35:49.615934 env[1214]: time="2025-05-16T00:35:49.615909373Z" level=info msg="StartContainer for \"293b2a20a4819092c1e203cf05e081668a2c783b8567306a98e8723fdd73de3b\"" May 16 00:35:49.631801 systemd[1]: Started cri-containerd-293b2a20a4819092c1e203cf05e081668a2c783b8567306a98e8723fdd73de3b.scope. May 16 00:35:49.659642 env[1214]: time="2025-05-16T00:35:49.659541444Z" level=info msg="StartContainer for \"293b2a20a4819092c1e203cf05e081668a2c783b8567306a98e8723fdd73de3b\" returns successfully" May 16 00:35:49.661464 systemd[1]: cri-containerd-293b2a20a4819092c1e203cf05e081668a2c783b8567306a98e8723fdd73de3b.scope: Deactivated successfully. May 16 00:35:49.681750 env[1214]: time="2025-05-16T00:35:49.681707126Z" level=info msg="shim disconnected" id=293b2a20a4819092c1e203cf05e081668a2c783b8567306a98e8723fdd73de3b May 16 00:35:49.681750 env[1214]: time="2025-05-16T00:35:49.681748205Z" level=warning msg="cleaning up after shim disconnected" id=293b2a20a4819092c1e203cf05e081668a2c783b8567306a98e8723fdd73de3b namespace=k8s.io May 16 00:35:49.681930 env[1214]: time="2025-05-16T00:35:49.681758885Z" level=info msg="cleaning up dead shim" May 16 00:35:49.689073 env[1214]: time="2025-05-16T00:35:49.689035570Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3950 runtime=io.containerd.runc.v2\n" May 16 00:35:49.705298 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-293b2a20a4819092c1e203cf05e081668a2c783b8567306a98e8723fdd73de3b-rootfs.mount: Deactivated successfully. 
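mount-bpf-fs, which has just run and exited above, is the Cilium init container that makes sure a BPF filesystem is mounted (conventionally at /sys/fs/bpf) before the agent starts; the immediate scope deactivation and "cleaning up dead shim" warnings are the normal teardown of such a short-lived container. One way to confirm the result from the host is to look for a bpf entry in the mount table, as in this sketch (assuming the standard /proc/mounts format and the default mount point):

    // bpffs_check.go: sketch that reports whether a BPF filesystem is mounted
    // at /sys/fs/bpf by scanning /proc/mounts (device, mountpoint, fstype, ...).
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/mounts")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        found := false
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            fields := strings.Fields(sc.Text())
            if len(fields) >= 3 && fields[1] == "/sys/fs/bpf" && fields[2] == "bpf" {
                found = true
                break
            }
        }
        fmt.Println("bpffs mounted at /sys/fs/bpf:", found)
    }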
May 16 00:35:50.599332 kubelet[1903]: E0516 00:35:50.599294 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:50.601124 env[1214]: time="2025-05-16T00:35:50.601081933Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 16 00:35:50.623491 env[1214]: time="2025-05-16T00:35:50.623445978Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"262c1ec32560209f619963fdf59b9d9b1665e5b0144066dede4044e007333f2a\"" May 16 00:35:50.624111 env[1214]: time="2025-05-16T00:35:50.624078076Z" level=info msg="StartContainer for \"262c1ec32560209f619963fdf59b9d9b1665e5b0144066dede4044e007333f2a\"" May 16 00:35:50.645179 systemd[1]: Started cri-containerd-262c1ec32560209f619963fdf59b9d9b1665e5b0144066dede4044e007333f2a.scope. May 16 00:35:50.681629 env[1214]: time="2025-05-16T00:35:50.681578554Z" level=info msg="StartContainer for \"262c1ec32560209f619963fdf59b9d9b1665e5b0144066dede4044e007333f2a\" returns successfully" May 16 00:35:50.685022 systemd[1]: cri-containerd-262c1ec32560209f619963fdf59b9d9b1665e5b0144066dede4044e007333f2a.scope: Deactivated successfully. May 16 00:35:50.705387 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-262c1ec32560209f619963fdf59b9d9b1665e5b0144066dede4044e007333f2a-rootfs.mount: Deactivated successfully. May 16 00:35:50.711084 env[1214]: time="2025-05-16T00:35:50.711039267Z" level=info msg="shim disconnected" id=262c1ec32560209f619963fdf59b9d9b1665e5b0144066dede4044e007333f2a May 16 00:35:50.711313 env[1214]: time="2025-05-16T00:35:50.711287378Z" level=warning msg="cleaning up after shim disconnected" id=262c1ec32560209f619963fdf59b9d9b1665e5b0144066dede4044e007333f2a namespace=k8s.io May 16 00:35:50.711383 env[1214]: time="2025-05-16T00:35:50.711369655Z" level=info msg="cleaning up dead shim" May 16 00:35:50.717817 env[1214]: time="2025-05-16T00:35:50.717782468Z" level=warning msg="cleanup warnings time=\"2025-05-16T00:35:50Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4005 runtime=io.containerd.runc.v2\n" May 16 00:35:51.446808 kubelet[1903]: E0516 00:35:51.446759 1903 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 16 00:35:51.602959 kubelet[1903]: E0516 00:35:51.602912 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:51.607768 env[1214]: time="2025-05-16T00:35:51.606832616Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 16 00:35:51.618762 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount706995381.mount: Deactivated successfully. 
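The "Container runtime network not ready ... cni plugin not initialized" condition that keeps the node NotReady through this window clears once the Cilium agent writes its CNI configuration: the runtime reports NetworkReady=false while the CNI configuration directory (conventionally /etc/cni/net.d) holds no usable network config. A minimal sketch of that check, with the default directory and the usual file extensions assumed:

    // cni_conf_check.go: sketch of the "is there any CNI network config yet?"
    // check behind NetworkReady=false. Assumes the default /etc/cni/net.d
    // directory and the usual .conf/.conflist/.json extensions.
    package main

    import (
        "fmt"
        "os"
        "path/filepath"
    )

    func main() {
        entries, err := os.ReadDir("/etc/cni/net.d")
        if err != nil {
            fmt.Println("no CNI config dir yet:", err)
            return
        }
        var configs []string
        for _, e := range entries {
            switch filepath.Ext(e.Name()) {
            case ".conf", ".conflist", ".json":
                configs = append(configs, e.Name())
            }
        }
        if len(configs) == 0 {
            fmt.Println("CNI config dir is empty: network would still be NotReady")
            return
        }
        fmt.Println("CNI configs present:", configs)
    }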
May 16 00:35:51.625263 env[1214]: time="2025-05-16T00:35:51.625211004Z" level=info msg="CreateContainer within sandbox \"b82dc588528d655ca858dddad9469e31b0b7f2df761d26349a23a4bef7759598\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3f48990d5b9d82d80e659bfdf65cda5384d512a90a343d988ad8101a0f6409da\"" May 16 00:35:51.626561 env[1214]: time="2025-05-16T00:35:51.626528480Z" level=info msg="StartContainer for \"3f48990d5b9d82d80e659bfdf65cda5384d512a90a343d988ad8101a0f6409da\"" May 16 00:35:51.643267 systemd[1]: Started cri-containerd-3f48990d5b9d82d80e659bfdf65cda5384d512a90a343d988ad8101a0f6409da.scope. May 16 00:35:51.675474 env[1214]: time="2025-05-16T00:35:51.675428011Z" level=info msg="StartContainer for \"3f48990d5b9d82d80e659bfdf65cda5384d512a90a343d988ad8101a0f6409da\" returns successfully" May 16 00:35:51.968211 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106(gcm-aes-ce))) May 16 00:35:52.607382 kubelet[1903]: E0516 00:35:52.607348 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:52.624019 kubelet[1903]: I0516 00:35:52.623961 1903 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-s2nqb" podStartSLOduration=5.623943408 podStartE2EDuration="5.623943408s" podCreationTimestamp="2025-05-16 00:35:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-16 00:35:52.623904169 +0000 UTC m=+86.321360273" watchObservedRunningTime="2025-05-16 00:35:52.623943408 +0000 UTC m=+86.321399472" May 16 00:35:53.885420 systemd[1]: run-containerd-runc-k8s.io-3f48990d5b9d82d80e659bfdf65cda5384d512a90a343d988ad8101a0f6409da-runc.CAkw8u.mount: Deactivated successfully. May 16 00:35:53.940268 kubelet[1903]: E0516 00:35:53.940219 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:54.386932 kubelet[1903]: E0516 00:35:54.386898 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:54.879878 systemd-networkd[1045]: lxc_health: Link UP May 16 00:35:54.886705 systemd-networkd[1045]: lxc_health: Gained carrier May 16 00:35:54.887271 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready May 16 00:35:55.941486 kubelet[1903]: E0516 00:35:55.941453 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:56.033748 systemd[1]: run-containerd-runc-k8s.io-3f48990d5b9d82d80e659bfdf65cda5384d512a90a343d988ad8101a0f6409da-runc.qzibA1.mount: Deactivated successfully. 
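The kernel line "alg: No test for seqiv(rfc4106(gcm(aes)))" is most likely triggered by the freshly started cilium-agent bringing up IPsec (note the cilium-ipsec-secrets volume mounted earlier): the crypto layer instantiates that AEAD template and merely notes that its self-test suite has no vector for the composed algorithm, so the message is informational rather than an error. Whether the underlying transform is registered can be read from /proc/crypto, as in this sketch:

    // proc_crypto_check.go: sketch that scans /proc/crypto for an algorithm
    // name, here the rfc4106(gcm(aes)) AEAD mentioned in the kernel message.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        f, err := os.Open("/proc/crypto")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        defer f.Close()

        target := "rfc4106(gcm(aes))"
        found := false
        sc := bufio.NewScanner(f)
        for sc.Scan() {
            line := sc.Text()
            if strings.HasPrefix(line, "name") && strings.Contains(line, target) {
                found = true
                break
            }
        }
        fmt.Printf("%s registered: %v\n", target, found)
    }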
May 16 00:35:56.543377 systemd-networkd[1045]: lxc_health: Gained IPv6LL May 16 00:35:56.614879 kubelet[1903]: E0516 00:35:56.614841 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:35:57.616432 kubelet[1903]: E0516 00:35:57.616397 1903 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 16 00:36:00.339587 sshd[3717]: pam_unix(sshd:session): session closed for user core May 16 00:36:00.342703 systemd[1]: sshd@24-10.0.0.31:22-10.0.0.1:55146.service: Deactivated successfully. May 16 00:36:00.343435 systemd[1]: session-25.scope: Deactivated successfully. May 16 00:36:00.343945 systemd-logind[1206]: Session 25 logged out. Waiting for processes to exit. May 16 00:36:00.344610 systemd-logind[1206]: Removed session 25.
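lxc_health is the veth interface Cilium creates for its own connectivity health checks; systemd-networkd's "Gained IPv6LL" records that the interface picked up its fe80:: link-local address. A small stdlib sketch that lists link-local IPv6 addresses per interface, one way to see the address behind that message:

    // linklocal_list.go: sketch listing IPv6 link-local addresses per interface,
    // e.g. to see the address behind "lxc_health: Gained IPv6LL".
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        ifaces, err := net.Interfaces()
        if err != nil {
            fmt.Println(err)
            return
        }
        for _, ifc := range ifaces {
            addrs, err := ifc.Addrs()
            if err != nil {
                continue
            }
            for _, a := range addrs {
                ipnet, ok := a.(*net.IPNet)
                if !ok || ipnet.IP.To4() != nil {
                    continue // only consider IPv6 addresses
                }
                if ipnet.IP.IsLinkLocalUnicast() {
                    fmt.Printf("%s: %s\n", ifc.Name, ipnet.IP)
                }
            }
        }
    }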