May 15 23:40:25.934078 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 15 23:40:25.934100 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT Thu May 15 22:19:24 -00 2025
May 15 23:40:25.934131 kernel: KASLR enabled
May 15 23:40:25.934137 kernel: efi: EFI v2.7 by EDK II
May 15 23:40:25.934142 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdbbbf018 ACPI 2.0=0xd9b43018 RNG=0xd9b43a18 MEMRESERVE=0xd9b40d98
May 15 23:40:25.934148 kernel: random: crng init done
May 15 23:40:25.934155 kernel: secureboot: Secure boot disabled
May 15 23:40:25.934161 kernel: ACPI: Early table checksum verification disabled
May 15 23:40:25.934167 kernel: ACPI: RSDP 0x00000000D9B43018 000024 (v02 BOCHS )
May 15 23:40:25.934174 kernel: ACPI: XSDT 0x00000000D9B43F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 15 23:40:25.934180 kernel: ACPI: FACP 0x00000000D9B43B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:40:25.934186 kernel: ACPI: DSDT 0x00000000D9B41018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:40:25.934192 kernel: ACPI: APIC 0x00000000D9B43C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:40:25.934198 kernel: ACPI: PPTT 0x00000000D9B43098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:40:25.934205 kernel: ACPI: GTDT 0x00000000D9B43818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:40:25.934213 kernel: ACPI: MCFG 0x00000000D9B43A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:40:25.934219 kernel: ACPI: SPCR 0x00000000D9B43918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:40:25.934225 kernel: ACPI: DBG2 0x00000000D9B43998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:40:25.934231 kernel: ACPI: IORT 0x00000000D9B43198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 15 23:40:25.934238 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 15 23:40:25.934244 kernel: NUMA: Failed to initialise from firmware
May 15 23:40:25.934251 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 15 23:40:25.934257 kernel: NUMA: NODE_DATA [mem 0xdc958800-0xdc95dfff]
May 15 23:40:25.934263 kernel: Zone ranges:
May 15 23:40:25.934270 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 15 23:40:25.934278 kernel: DMA32 empty
May 15 23:40:25.934284 kernel: Normal empty
May 15 23:40:25.934290 kernel: Movable zone start for each node
May 15 23:40:25.934296 kernel: Early memory node ranges
May 15 23:40:25.934302 kernel: node 0: [mem 0x0000000040000000-0x00000000d976ffff]
May 15 23:40:25.934309 kernel: node 0: [mem 0x00000000d9770000-0x00000000d9b3ffff]
May 15 23:40:25.934315 kernel: node 0: [mem 0x00000000d9b40000-0x00000000dce1ffff]
May 15 23:40:25.934321 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 15 23:40:25.934328 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 15 23:40:25.934334 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 15 23:40:25.934340 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 15 23:40:25.934347 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 15 23:40:25.934354 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 15 23:40:25.934361 kernel: psci: probing for conduit method from ACPI.
May 15 23:40:25.934368 kernel: psci: PSCIv1.1 detected in firmware.
May 15 23:40:25.934377 kernel: psci: Using standard PSCI v0.2 function IDs
May 15 23:40:25.934383 kernel: psci: Trusted OS migration not required
May 15 23:40:25.934390 kernel: psci: SMC Calling Convention v1.1
May 15 23:40:25.934398 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 15 23:40:25.934405 kernel: percpu: Embedded 31 pages/cpu s86632 r8192 d32152 u126976
May 15 23:40:25.934412 kernel: pcpu-alloc: s86632 r8192 d32152 u126976 alloc=31*4096
May 15 23:40:25.934419 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 15 23:40:25.934425 kernel: Detected PIPT I-cache on CPU0
May 15 23:40:25.934432 kernel: CPU features: detected: GIC system register CPU interface
May 15 23:40:25.934438 kernel: CPU features: detected: Hardware dirty bit management
May 15 23:40:25.934445 kernel: CPU features: detected: Spectre-v4
May 15 23:40:25.934451 kernel: CPU features: detected: Spectre-BHB
May 15 23:40:25.934458 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 15 23:40:25.934466 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 15 23:40:25.934473 kernel: CPU features: detected: ARM erratum 1418040
May 15 23:40:25.934479 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 15 23:40:25.934486 kernel: alternatives: applying boot alternatives
May 15 23:40:25.934493 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a39d79b1d2ff9998339b60958cf17b8dfae5bd16f05fb844c0e06a5d7107915a
May 15 23:40:25.934500 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 15 23:40:25.934507 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 15 23:40:25.934514 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 15 23:40:25.934520 kernel: Fallback order for Node 0: 0
May 15 23:40:25.934527 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
May 15 23:40:25.934534 kernel: Policy zone: DMA
May 15 23:40:25.934542 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 15 23:40:25.934548 kernel: software IO TLB: area num 4.
May 15 23:40:25.934555 kernel: software IO TLB: mapped [mem 0x00000000d2e00000-0x00000000d6e00000] (64MB)
May 15 23:40:25.934562 kernel: Memory: 2386260K/2572288K available (10240K kernel code, 2186K rwdata, 8108K rodata, 39744K init, 897K bss, 186028K reserved, 0K cma-reserved)
May 15 23:40:25.934569 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 15 23:40:25.934575 kernel: rcu: Preemptible hierarchical RCU implementation.
May 15 23:40:25.934582 kernel: rcu: RCU event tracing is enabled.
May 15 23:40:25.934589 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 15 23:40:25.934596 kernel: Trampoline variant of Tasks RCU enabled.
May 15 23:40:25.934602 kernel: Tracing variant of Tasks RCU enabled.
May 15 23:40:25.934609 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 15 23:40:25.934616 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 15 23:40:25.934624 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 15 23:40:25.934631 kernel: GICv3: 256 SPIs implemented
May 15 23:40:25.934638 kernel: GICv3: 0 Extended SPIs implemented
May 15 23:40:25.934644 kernel: Root IRQ handler: gic_handle_irq
May 15 23:40:25.934651 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 15 23:40:25.934658 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 15 23:40:25.934664 kernel: ITS [mem 0x08080000-0x0809ffff]
May 15 23:40:25.934671 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400c0000 (indirect, esz 8, psz 64K, shr 1)
May 15 23:40:25.934678 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400d0000 (flat, esz 8, psz 64K, shr 1)
May 15 23:40:25.934684 kernel: GICv3: using LPI property table @0x00000000400f0000
May 15 23:40:25.934691 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040100000
May 15 23:40:25.934699 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 15 23:40:25.934706 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:40:25.934712 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 15 23:40:25.934719 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 15 23:40:25.934726 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 15 23:40:25.934733 kernel: arm-pv: using stolen time PV
May 15 23:40:25.934740 kernel: Console: colour dummy device 80x25
May 15 23:40:25.934746 kernel: ACPI: Core revision 20230628
May 15 23:40:25.934753 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 15 23:40:25.934760 kernel: pid_max: default: 32768 minimum: 301
May 15 23:40:25.934769 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 15 23:40:25.934776 kernel: landlock: Up and running.
May 15 23:40:25.934782 kernel: SELinux: Initializing.
May 15 23:40:25.934790 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:40:25.934796 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 15 23:40:25.934804 kernel: ACPI PPTT: PPTT table found, but unable to locate core 3 (3)
May 15 23:40:25.934811 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:40:25.934817 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 15 23:40:25.934824 kernel: rcu: Hierarchical SRCU implementation.
May 15 23:40:25.934833 kernel: rcu: Max phase no-delay instances is 400.
May 15 23:40:25.934840 kernel: Platform MSI: ITS@0x8080000 domain created
May 15 23:40:25.934846 kernel: PCI/MSI: ITS@0x8080000 domain created
May 15 23:40:25.934853 kernel: Remapping and enabling EFI services.
May 15 23:40:25.934860 kernel: smp: Bringing up secondary CPUs ...
May 15 23:40:25.934867 kernel: Detected PIPT I-cache on CPU1
May 15 23:40:25.934873 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 15 23:40:25.934881 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040110000
May 15 23:40:25.934887 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:40:25.934894 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 15 23:40:25.934903 kernel: Detected PIPT I-cache on CPU2
May 15 23:40:25.934910 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 15 23:40:25.934922 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040120000
May 15 23:40:25.934930 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:40:25.934937 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 15 23:40:25.934944 kernel: Detected PIPT I-cache on CPU3
May 15 23:40:25.934952 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 15 23:40:25.934959 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040130000
May 15 23:40:25.934966 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 15 23:40:25.934973 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 15 23:40:25.934982 kernel: smp: Brought up 1 node, 4 CPUs
May 15 23:40:25.934989 kernel: SMP: Total of 4 processors activated.
May 15 23:40:25.934996 kernel: CPU features: detected: 32-bit EL0 Support
May 15 23:40:25.935004 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 15 23:40:25.935011 kernel: CPU features: detected: Common not Private translations
May 15 23:40:25.935018 kernel: CPU features: detected: CRC32 instructions
May 15 23:40:25.935025 kernel: CPU features: detected: Enhanced Virtualization Traps
May 15 23:40:25.935034 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 15 23:40:25.935041 kernel: CPU features: detected: LSE atomic instructions
May 15 23:40:25.935048 kernel: CPU features: detected: Privileged Access Never
May 15 23:40:25.935055 kernel: CPU features: detected: RAS Extension Support
May 15 23:40:25.935063 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 15 23:40:25.935070 kernel: CPU: All CPU(s) started at EL1
May 15 23:40:25.935077 kernel: alternatives: applying system-wide alternatives
May 15 23:40:25.935084 kernel: devtmpfs: initialized
May 15 23:40:25.935092 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 15 23:40:25.935100 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 15 23:40:25.935212 kernel: pinctrl core: initialized pinctrl subsystem
May 15 23:40:25.935223 kernel: SMBIOS 3.0.0 present.
May 15 23:40:25.935230 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 15 23:40:25.935237 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 15 23:40:25.935245 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 15 23:40:25.935253 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 15 23:40:25.935260 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 15 23:40:25.935267 kernel: audit: initializing netlink subsys (disabled)
May 15 23:40:25.935278 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1
May 15 23:40:25.935285 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 15 23:40:25.935292 kernel: cpuidle: using governor menu
May 15 23:40:25.935299 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 15 23:40:25.935307 kernel: ASID allocator initialised with 32768 entries
May 15 23:40:25.935314 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 15 23:40:25.935321 kernel: Serial: AMBA PL011 UART driver
May 15 23:40:25.935328 kernel: Modules: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 15 23:40:25.935336 kernel: Modules: 0 pages in range for non-PLT usage
May 15 23:40:25.935344 kernel: Modules: 508944 pages in range for PLT usage
May 15 23:40:25.935351 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 15 23:40:25.935359 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 15 23:40:25.935366 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 15 23:40:25.935373 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 15 23:40:25.935380 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 15 23:40:25.935387 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 15 23:40:25.935395 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 15 23:40:25.935402 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 15 23:40:25.935411 kernel: ACPI: Added _OSI(Module Device)
May 15 23:40:25.935418 kernel: ACPI: Added _OSI(Processor Device)
May 15 23:40:25.935425 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 15 23:40:25.935433 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 15 23:40:25.935440 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 15 23:40:25.935448 kernel: ACPI: Interpreter enabled
May 15 23:40:25.935455 kernel: ACPI: Using GIC for interrupt routing
May 15 23:40:25.935462 kernel: ACPI: MCFG table detected, 1 entries
May 15 23:40:25.935484 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 15 23:40:25.935493 kernel: printk: console [ttyAMA0] enabled
May 15 23:40:25.935500 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 15 23:40:25.935656 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 15 23:40:25.935733 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 15 23:40:25.935802 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 15 23:40:25.935866 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 15 23:40:25.935930 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 15 23:40:25.935942 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 15 23:40:25.935949 kernel: PCI host bridge to bus 0000:00
May 15 23:40:25.936022 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 15 23:40:25.936082 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 15 23:40:25.936173 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 15 23:40:25.936235 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 15 23:40:25.936322 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
May 15 23:40:25.936405 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
May 15 23:40:25.936473 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
May 15 23:40:25.936545 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
May 15 23:40:25.936613 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 23:40:25.936680 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
May 15 23:40:25.936748 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
May 15 23:40:25.936816 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
May 15 23:40:25.936879 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 15 23:40:25.936939 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 15 23:40:25.936999 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 15 23:40:25.937009 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 15 23:40:25.937016 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 15 23:40:25.937023 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 15 23:40:25.937031 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 15 23:40:25.937038 kernel: iommu: Default domain type: Translated
May 15 23:40:25.937047 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 15 23:40:25.937055 kernel: efivars: Registered efivars operations
May 15 23:40:25.937062 kernel: vgaarb: loaded
May 15 23:40:25.937069 kernel: clocksource: Switched to clocksource arch_sys_counter
May 15 23:40:25.937076 kernel: VFS: Disk quotas dquot_6.6.0
May 15 23:40:25.937083 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 15 23:40:25.937091 kernel: pnp: PnP ACPI init
May 15 23:40:25.937188 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 15 23:40:25.937202 kernel: pnp: PnP ACPI: found 1 devices
May 15 23:40:25.937209 kernel: NET: Registered PF_INET protocol family
May 15 23:40:25.937216 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 15 23:40:25.937224 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 15 23:40:25.937231 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 15 23:40:25.937239 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 15 23:40:25.937246 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 15 23:40:25.937254 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 15 23:40:25.937261 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:40:25.937270 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 15 23:40:25.937277 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 15 23:40:25.937285 kernel: PCI: CLS 0 bytes, default 64
May 15 23:40:25.937292 kernel: kvm [1]: HYP mode not available
May 15 23:40:25.937299 kernel: Initialise system trusted keyrings
May 15 23:40:25.937307 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 15 23:40:25.937314 kernel: Key type asymmetric registered
May 15 23:40:25.937321 kernel: Asymmetric key parser 'x509' registered
May 15 23:40:25.937328 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
May 15 23:40:25.937337 kernel: io scheduler mq-deadline registered
May 15 23:40:25.937344 kernel: io scheduler kyber registered
May 15 23:40:25.937352 kernel: io scheduler bfq registered
May 15 23:40:25.937359 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 15 23:40:25.937367 kernel: ACPI: button: Power Button [PWRB]
May 15 23:40:25.937374 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 15 23:40:25.937447 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 15 23:40:25.937457 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 15 23:40:25.937465 kernel: thunder_xcv, ver 1.0
May 15 23:40:25.937474 kernel: thunder_bgx, ver 1.0
May 15 23:40:25.937482 kernel: nicpf, ver 1.0
May 15 23:40:25.937489 kernel: nicvf, ver 1.0
May 15 23:40:25.937564 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 15 23:40:25.937628 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-15T23:40:25 UTC (1747352425)
May 15 23:40:25.937637 kernel: hid: raw HID events driver (C) Jiri Kosina
May 15 23:40:25.937645 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
May 15 23:40:25.937652 kernel: watchdog: Delayed init of the lockup detector failed: -19
May 15 23:40:25.937661 kernel: watchdog: Hard watchdog permanently disabled
May 15 23:40:25.937668 kernel: NET: Registered PF_INET6 protocol family
May 15 23:40:25.937675 kernel: Segment Routing with IPv6
May 15 23:40:25.937683 kernel: In-situ OAM (IOAM) with IPv6
May 15 23:40:25.937690 kernel: NET: Registered PF_PACKET protocol family
May 15 23:40:25.937697 kernel: Key type dns_resolver registered
May 15 23:40:25.937704 kernel: registered taskstats version 1
May 15 23:40:25.937711 kernel: Loading compiled-in X.509 certificates
May 15 23:40:25.937718 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: c5ee9c587519d4ef57ff0de9630e786a4c7faded'
May 15 23:40:25.937727 kernel: Key type .fscrypt registered
May 15 23:40:25.937734 kernel: Key type fscrypt-provisioning registered
May 15 23:40:25.937741 kernel: ima: No TPM chip found, activating TPM-bypass!
May 15 23:40:25.937749 kernel: ima: Allocated hash algorithm: sha1
May 15 23:40:25.937756 kernel: ima: No architecture policies found
May 15 23:40:25.937763 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 15 23:40:25.937771 kernel: clk: Disabling unused clocks
May 15 23:40:25.937778 kernel: Freeing unused kernel memory: 39744K
May 15 23:40:25.937785 kernel: Run /init as init process
May 15 23:40:25.937794 kernel: with arguments:
May 15 23:40:25.937801 kernel: /init
May 15 23:40:25.937808 kernel: with environment:
May 15 23:40:25.937815 kernel: HOME=/
May 15 23:40:25.937822 kernel: TERM=linux
May 15 23:40:25.937829 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 15 23:40:25.937838 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 15 23:40:25.937848 systemd[1]: Detected virtualization kvm.
May 15 23:40:25.937857 systemd[1]: Detected architecture arm64.
May 15 23:40:25.937865 systemd[1]: Running in initrd.
May 15 23:40:25.937873 systemd[1]: No hostname configured, using default hostname.
May 15 23:40:25.937880 systemd[1]: Hostname set to .
May 15 23:40:25.937888 systemd[1]: Initializing machine ID from VM UUID.
May 15 23:40:25.937896 systemd[1]: Queued start job for default target initrd.target.
May 15 23:40:25.937904 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 15 23:40:25.937912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 15 23:40:25.937922 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 15 23:40:25.937930 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 15 23:40:25.937938 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 15 23:40:25.937946 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 15 23:40:25.937955 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 15 23:40:25.937963 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 15 23:40:25.937973 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 15 23:40:25.937981 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 15 23:40:25.937988 systemd[1]: Reached target paths.target - Path Units.
May 15 23:40:25.937996 systemd[1]: Reached target slices.target - Slice Units.
May 15 23:40:25.938004 systemd[1]: Reached target swap.target - Swaps.
May 15 23:40:25.938012 systemd[1]: Reached target timers.target - Timer Units.
May 15 23:40:25.938019 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 15 23:40:25.938027 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 15 23:40:25.938035 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 15 23:40:25.938045 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 15 23:40:25.938053 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 15 23:40:25.938060 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 15 23:40:25.938068 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 15 23:40:25.938076 systemd[1]: Reached target sockets.target - Socket Units.
May 15 23:40:25.938084 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 15 23:40:25.938092 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 15 23:40:25.938100 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 15 23:40:25.938133 systemd[1]: Starting systemd-fsck-usr.service...
May 15 23:40:25.938144 systemd[1]: Starting systemd-journald.service - Journal Service...
May 15 23:40:25.938152 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 15 23:40:25.938160 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 15 23:40:25.938168 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 15 23:40:25.938176 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 15 23:40:25.938184 systemd[1]: Finished systemd-fsck-usr.service.
May 15 23:40:25.938193 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 15 23:40:25.938222 systemd-journald[239]: Collecting audit messages is disabled.
May 15 23:40:25.938244 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 15 23:40:25.938252 systemd-journald[239]: Journal started
May 15 23:40:25.938271 systemd-journald[239]: Runtime Journal (/run/log/journal/5709a6d37a1545a997d3f18d139e2f96) is 5.9M, max 47.3M, 41.4M free.
May 15 23:40:25.927870 systemd-modules-load[240]: Inserted module 'overlay'
May 15 23:40:25.940257 systemd[1]: Started systemd-journald.service - Journal Service.
May 15 23:40:25.942232 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 15 23:40:25.945907 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 15 23:40:25.945928 kernel: Bridge firewalling registered
May 15 23:40:25.946516 systemd-modules-load[240]: Inserted module 'br_netfilter'
May 15 23:40:25.954335 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 15 23:40:25.956305 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 15 23:40:25.958701 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 15 23:40:25.960527 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 15 23:40:25.967327 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 23:40:25.970418 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 15 23:40:25.974706 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 15 23:40:25.978266 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 15 23:40:25.990321 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 15 23:40:25.991568 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 23:40:25.994986 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 15 23:40:26.000092 dracut-cmdline[277]: dracut-dracut-053
May 15 23:40:26.002624 dracut-cmdline[277]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=a39d79b1d2ff9998339b60958cf17b8dfae5bd16f05fb844c0e06a5d7107915a
May 15 23:40:26.035372 systemd-resolved[283]: Positive Trust Anchors:
May 15 23:40:26.035450 systemd-resolved[283]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 15 23:40:26.035481 systemd-resolved[283]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 15 23:40:26.040628 systemd-resolved[283]: Defaulting to hostname 'linux'.
May 15 23:40:26.043703 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 15 23:40:26.045427 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 15 23:40:26.077152 kernel: SCSI subsystem initialized
May 15 23:40:26.082137 kernel: Loading iSCSI transport class v2.0-870.
May 15 23:40:26.090147 kernel: iscsi: registered transport (tcp)
May 15 23:40:26.103172 kernel: iscsi: registered transport (qla4xxx)
May 15 23:40:26.103232 kernel: QLogic iSCSI HBA Driver
May 15 23:40:26.149050 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 15 23:40:26.164332 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 15 23:40:26.182463 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 15 23:40:26.182542 kernel: device-mapper: uevent: version 1.0.3
May 15 23:40:26.182554 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 15 23:40:26.232156 kernel: raid6: neonx8 gen() 15773 MB/s
May 15 23:40:26.249150 kernel: raid6: neonx4 gen() 15632 MB/s
May 15 23:40:26.266141 kernel: raid6: neonx2 gen() 13208 MB/s
May 15 23:40:26.283142 kernel: raid6: neonx1 gen() 10438 MB/s
May 15 23:40:26.300147 kernel: raid6: int64x8 gen() 6946 MB/s
May 15 23:40:26.317143 kernel: raid6: int64x4 gen() 7335 MB/s
May 15 23:40:26.334144 kernel: raid6: int64x2 gen() 6118 MB/s
May 15 23:40:26.351324 kernel: raid6: int64x1 gen() 5049 MB/s
May 15 23:40:26.351366 kernel: raid6: using algorithm neonx8 gen() 15773 MB/s
May 15 23:40:26.369284 kernel: raid6: .... xor() 11927 MB/s, rmw enabled
May 15 23:40:26.369317 kernel: raid6: using neon recovery algorithm
May 15 23:40:26.374139 kernel: xor: measuring software checksum speed
May 15 23:40:26.375538 kernel: 8regs : 16950 MB/sec
May 15 23:40:26.375553 kernel: 32regs : 19641 MB/sec
May 15 23:40:26.376174 kernel: arm64_neon : 26778 MB/sec
May 15 23:40:26.376187 kernel: xor: using function: arm64_neon (26778 MB/sec)
May 15 23:40:26.429150 kernel: Btrfs loaded, zoned=no, fsverity=no
May 15 23:40:26.441184 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 15 23:40:26.454291 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 15 23:40:26.466519 systemd-udevd[462]: Using default interface naming scheme 'v255'.
May 15 23:40:26.469704 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 15 23:40:26.472913 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 15 23:40:26.488350 dracut-pre-trigger[470]: rd.md=0: removing MD RAID activation
May 15 23:40:26.514963 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 15 23:40:26.524311 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 15 23:40:26.565007 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:40:26.573366 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... May 15 23:40:26.587640 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 15 23:40:26.589345 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:40:26.593236 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:40:26.595447 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:40:26.602348 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 15 23:40:26.614838 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 15 23:40:26.626160 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:40:26.626296 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:40:26.630160 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:40:26.634313 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues May 15 23:40:26.634498 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) May 15 23:40:26.631984 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:40:26.632155 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:40:26.635410 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:40:26.643690 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. May 15 23:40:26.643713 kernel: GPT:9289727 != 19775487 May 15 23:40:26.643723 kernel: GPT:Alternate GPT header not at the end of the disk. 
May 15 23:40:26.643732 kernel: GPT:9289727 != 19775487 May 15 23:40:26.645883 kernel: GPT: Use GNU Parted to correct GPT errors. May 15 23:40:26.645953 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:40:26.649401 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:40:26.660521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:40:26.668353 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 15 23:40:26.674761 kernel: BTRFS: device fsid 462ff9f1-7a02-4839-b355-edf30dab0598 devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (513) May 15 23:40:26.674816 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (519) May 15 23:40:26.684169 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. May 15 23:40:26.688939 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. May 15 23:40:26.692900 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. May 15 23:40:26.696206 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. May 15 23:40:26.697693 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:40:26.703518 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:40:26.716323 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 15 23:40:26.726132 disk-uuid[558]: Primary Header is updated. May 15 23:40:26.726132 disk-uuid[558]: Secondary Entries is updated. May 15 23:40:26.726132 disk-uuid[558]: Secondary Header is updated. 
May 15 23:40:26.730142 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:40:27.741148 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 May 15 23:40:27.743347 disk-uuid[559]: The operation has completed successfully. May 15 23:40:27.768697 systemd[1]: disk-uuid.service: Deactivated successfully. May 15 23:40:27.768803 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 15 23:40:27.789373 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 15 23:40:27.793614 sh[573]: Success May 15 23:40:27.810172 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" May 15 23:40:27.839508 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 15 23:40:27.856623 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 15 23:40:27.860162 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 15 23:40:27.869349 kernel: BTRFS info (device dm-0): first mount of filesystem 462ff9f1-7a02-4839-b355-edf30dab0598 May 15 23:40:27.869405 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm May 15 23:40:27.869416 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 15 23:40:27.871192 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 15 23:40:27.871208 kernel: BTRFS info (device dm-0): using free space tree May 15 23:40:27.875744 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 15 23:40:27.877176 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 15 23:40:27.887316 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 15 23:40:27.889006 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
May 15 23:40:27.899921 kernel: BTRFS info (device vda6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 15 23:40:27.899979 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:40:27.899991 kernel: BTRFS info (device vda6): using free space tree May 15 23:40:27.904256 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:40:27.911650 systemd[1]: mnt-oem.mount: Deactivated successfully. May 15 23:40:27.914145 kernel: BTRFS info (device vda6): last unmount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 15 23:40:27.921491 systemd[1]: Finished ignition-setup.service - Ignition (setup). May 15 23:40:27.927340 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 15 23:40:27.993632 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 15 23:40:28.004300 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 23:40:28.035991 ignition[670]: Ignition 2.20.0 May 15 23:40:28.036002 ignition[670]: Stage: fetch-offline May 15 23:40:28.036375 systemd-networkd[758]: lo: Link UP May 15 23:40:28.036035 ignition[670]: no configs at "/usr/lib/ignition/base.d" May 15 23:40:28.036379 systemd-networkd[758]: lo: Gained carrier May 15 23:40:28.036043 ignition[670]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:40:28.037179 systemd-networkd[758]: Enumeration completed May 15 23:40:28.036218 ignition[670]: parsed url from cmdline: "" May 15 23:40:28.037285 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:40:28.036222 ignition[670]: no config URL provided May 15 23:40:28.039259 systemd[1]: Reached target network.target - Network. 
May 15 23:40:28.036227 ignition[670]: reading system config file "/usr/lib/ignition/user.ign" May 15 23:40:28.040798 systemd-networkd[758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:40:28.036234 ignition[670]: no config at "/usr/lib/ignition/user.ign" May 15 23:40:28.040801 systemd-networkd[758]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:40:28.036262 ignition[670]: op(1): [started] loading QEMU firmware config module May 15 23:40:28.041768 systemd-networkd[758]: eth0: Link UP May 15 23:40:28.036267 ignition[670]: op(1): executing: "modprobe" "qemu_fw_cfg" May 15 23:40:28.041771 systemd-networkd[758]: eth0: Gained carrier May 15 23:40:28.046583 ignition[670]: op(1): [finished] loading QEMU firmware config module May 15 23:40:28.041779 systemd-networkd[758]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:40:28.063202 systemd-networkd[758]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:40:28.094239 ignition[670]: parsing config with SHA512: 79ca8bda5086a6b2150201995cd95f2d1aacca5b4ea1d8842496200458b1e625fb018dd418c5e00513a709c65a3084ed8ed1b399c01a4a913f00e97973f4a3f8 May 15 23:40:28.101842 unknown[670]: fetched base config from "system" May 15 23:40:28.101856 unknown[670]: fetched user config from "qemu" May 15 23:40:28.102416 ignition[670]: fetch-offline: fetch-offline passed May 15 23:40:28.102503 ignition[670]: Ignition finished successfully May 15 23:40:28.104718 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:40:28.107219 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). May 15 23:40:28.119452 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
May 15 23:40:28.130208 ignition[768]: Ignition 2.20.0 May 15 23:40:28.130218 ignition[768]: Stage: kargs May 15 23:40:28.130386 ignition[768]: no configs at "/usr/lib/ignition/base.d" May 15 23:40:28.130395 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:40:28.131352 ignition[768]: kargs: kargs passed May 15 23:40:28.134181 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 15 23:40:28.131401 ignition[768]: Ignition finished successfully May 15 23:40:28.144342 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 15 23:40:28.155560 ignition[777]: Ignition 2.20.0 May 15 23:40:28.155572 ignition[777]: Stage: disks May 15 23:40:28.155749 ignition[777]: no configs at "/usr/lib/ignition/base.d" May 15 23:40:28.158543 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 15 23:40:28.155759 ignition[777]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:40:28.159787 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 15 23:40:28.156758 ignition[777]: disks: disks passed May 15 23:40:28.161433 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 23:40:28.156806 ignition[777]: Ignition finished successfully May 15 23:40:28.163896 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:40:28.165865 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:40:28.167630 systemd[1]: Reached target basic.target - Basic System. May 15 23:40:28.180341 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... May 15 23:40:28.191034 systemd-fsck[787]: ROOT: clean, 14/553520 files, 52654/553472 blocks May 15 23:40:28.195655 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 15 23:40:28.198954 systemd[1]: Mounting sysroot.mount - /sysroot... 
May 15 23:40:28.246142 kernel: EXT4-fs (vda9): mounted filesystem 759e3456-2e58-4307-81e1-19f20d3141c2 r/w with ordered data mode. Quota mode: none. May 15 23:40:28.246880 systemd[1]: Mounted sysroot.mount - /sysroot. May 15 23:40:28.248420 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 15 23:40:28.259975 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 15 23:40:28.264850 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 15 23:40:28.266061 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. May 15 23:40:28.274329 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (795) May 15 23:40:28.274420 kernel: BTRFS info (device vda6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 15 23:40:28.274436 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:40:28.274446 kernel: BTRFS info (device vda6): using free space tree May 15 23:40:28.266132 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 15 23:40:28.278174 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:40:28.266165 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:40:28.270817 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 15 23:40:28.277223 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 15 23:40:28.283558 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
May 15 23:40:28.324441 initrd-setup-root[821]: cut: /sysroot/etc/passwd: No such file or directory May 15 23:40:28.328211 initrd-setup-root[828]: cut: /sysroot/etc/group: No such file or directory May 15 23:40:28.331635 initrd-setup-root[835]: cut: /sysroot/etc/shadow: No such file or directory May 15 23:40:28.335960 initrd-setup-root[842]: cut: /sysroot/etc/gshadow: No such file or directory May 15 23:40:28.425390 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 15 23:40:28.435286 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 15 23:40:28.438671 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 15 23:40:28.447137 kernel: BTRFS info (device vda6): last unmount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 15 23:40:28.465442 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 15 23:40:28.471576 ignition[911]: INFO : Ignition 2.20.0 May 15 23:40:28.471576 ignition[911]: INFO : Stage: mount May 15 23:40:28.474006 ignition[911]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:40:28.474006 ignition[911]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:40:28.474006 ignition[911]: INFO : mount: mount passed May 15 23:40:28.474006 ignition[911]: INFO : Ignition finished successfully May 15 23:40:28.474707 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 15 23:40:28.486306 systemd[1]: Starting ignition-files.service - Ignition (files)... May 15 23:40:28.868151 systemd[1]: sysroot-oem.mount: Deactivated successfully. May 15 23:40:28.890388 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
May 15 23:40:28.896174 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (925) May 15 23:40:28.896211 kernel: BTRFS info (device vda6): first mount of filesystem bb522e90-8598-4687-8a48-65ed6b798a46 May 15 23:40:28.898178 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm May 15 23:40:28.898196 kernel: BTRFS info (device vda6): using free space tree May 15 23:40:28.901135 kernel: BTRFS info (device vda6): auto enabling async discard May 15 23:40:28.902312 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 15 23:40:28.926971 ignition[942]: INFO : Ignition 2.20.0 May 15 23:40:28.926971 ignition[942]: INFO : Stage: files May 15 23:40:28.928781 ignition[942]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:40:28.928781 ignition[942]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:40:28.928781 ignition[942]: DEBUG : files: compiled without relabeling support, skipping May 15 23:40:28.932744 ignition[942]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 15 23:40:28.932744 ignition[942]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 15 23:40:28.943267 ignition[942]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 15 23:40:28.944684 ignition[942]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 15 23:40:28.944684 ignition[942]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 15 23:40:28.943918 unknown[942]: wrote ssh authorized keys file for user: core May 15 23:40:28.949724 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" May 15 23:40:28.951476 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" May 15 23:40:28.951476 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 23:40:28.951476 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 May 15 23:40:29.010074 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK May 15 23:40:29.210930 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" May 15 23:40:29.210930 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 23:40:29.215013 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 May 15 23:40:29.367532 systemd-networkd[758]: eth0: Gained IPv6LL May 15 23:40:29.508026 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK May 15 23:40:29.564870 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" May 15 23:40:29.564870 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nginx.yaml" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 15 23:40:29.568833 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 May 15 23:40:30.423514 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): GET result: OK May 15 23:40:30.753432 ignition[942]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" May 15 23:40:30.753432 ignition[942]: INFO : files: op(d): [started] processing unit "containerd.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(d): op(e): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 15 23:40:30.756939 ignition[942]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" May 15 23:40:30.756939 ignition[942]: INFO : files: op(d): [finished] processing unit "containerd.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(f): [started] processing unit "prepare-helm.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(f): op(10): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(f): op(10): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(f): [finished] processing unit "prepare-helm.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(11): [started] processing unit "coreos-metadata.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(11): op(12): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(11): op(12): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(11): [finished] processing unit "coreos-metadata.service" May 15 23:40:30.756939 ignition[942]: INFO : files: op(13): [started] setting preset to disabled for "coreos-metadata.service" May 15 23:40:30.783722 ignition[942]: INFO : files: op(13): op(14): [started] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:40:30.787565 ignition[942]: INFO : files: op(13): op(14): [finished] removing enablement symlink(s) for "coreos-metadata.service" May 15 23:40:30.789061 ignition[942]: INFO : files: op(13): [finished] setting preset to disabled for "coreos-metadata.service" May 15 23:40:30.789061 ignition[942]: INFO : files: op(15): [started] setting preset to enabled for "prepare-helm.service" May 15 23:40:30.789061 ignition[942]: INFO : files: op(15): [finished] setting preset to enabled for "prepare-helm.service" May 15 23:40:30.789061 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [started] writing file "/sysroot/etc/.ignition-result.json" May 15 23:40:30.789061 ignition[942]: INFO : files: createResultFile: createFiles: op(16): [finished] writing file "/sysroot/etc/.ignition-result.json" May 15 23:40:30.789061 ignition[942]: INFO : files: files passed May 15 23:40:30.789061 ignition[942]: INFO : Ignition finished successfully May 15 23:40:30.789685 systemd[1]: Finished ignition-files.service - Ignition (files). May 15 23:40:30.802266 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 15 23:40:30.804959 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 15 23:40:30.806839 systemd[1]: ignition-quench.service: Deactivated successfully. May 15 23:40:30.806918 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 15 23:40:30.818732 initrd-setup-root-after-ignition[971]: grep: /sysroot/oem/oem-release: No such file or directory May 15 23:40:30.821959 initrd-setup-root-after-ignition[973]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:40:30.821959 initrd-setup-root-after-ignition[973]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 15 23:40:30.825249 initrd-setup-root-after-ignition[977]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 15 23:40:30.825865 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 15 23:40:30.827935 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 15 23:40:30.846322 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 15 23:40:30.867057 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 15 23:40:30.867179 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 15 23:40:30.868605 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 15 23:40:30.871282 systemd[1]: Reached target initrd.target - Initrd Default Target. May 15 23:40:30.873402 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 15 23:40:30.874219 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 15 23:40:30.891966 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:40:30.907300 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 15 23:40:30.915933 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 15 23:40:30.917340 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 15 23:40:30.919424 systemd[1]: Stopped target timers.target - Timer Units. May 15 23:40:30.921273 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 15 23:40:30.921406 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 15 23:40:30.924057 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 15 23:40:30.926381 systemd[1]: Stopped target basic.target - Basic System. May 15 23:40:30.928250 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 15 23:40:30.930211 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 15 23:40:30.932412 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
May 15 23:40:30.934699 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 15 23:40:30.936772 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 15 23:40:30.938963 systemd[1]: Stopped target sysinit.target - System Initialization. May 15 23:40:30.941212 systemd[1]: Stopped target local-fs.target - Local File Systems. May 15 23:40:30.943236 systemd[1]: Stopped target swap.target - Swaps. May 15 23:40:30.944941 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 15 23:40:30.945079 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 15 23:40:30.947813 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 15 23:40:30.949954 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:40:30.952231 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 15 23:40:30.952315 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:40:30.954531 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 15 23:40:30.954658 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 15 23:40:30.957773 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 15 23:40:30.957900 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 15 23:40:30.960062 systemd[1]: Stopped target paths.target - Path Units. May 15 23:40:30.961703 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 15 23:40:30.965153 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:40:30.966791 systemd[1]: Stopped target slices.target - Slice Units. May 15 23:40:30.968905 systemd[1]: Stopped target sockets.target - Socket Units. May 15 23:40:30.970553 systemd[1]: iscsid.socket: Deactivated successfully. 
May 15 23:40:30.970647 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 15 23:40:30.972218 systemd[1]: iscsiuio.socket: Deactivated successfully. May 15 23:40:30.972306 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 15 23:40:30.973885 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 15 23:40:30.973997 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 15 23:40:30.975795 systemd[1]: ignition-files.service: Deactivated successfully. May 15 23:40:30.975905 systemd[1]: Stopped ignition-files.service - Ignition (files). May 15 23:40:30.999619 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 15 23:40:31.005312 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 15 23:40:31.006267 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 15 23:40:31.006401 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:40:31.008906 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 15 23:40:31.009012 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 15 23:40:31.017166 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 15 23:40:31.017327 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 15 23:40:31.021619 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 15 23:40:31.023046 ignition[997]: INFO : Ignition 2.20.0 May 15 23:40:31.023046 ignition[997]: INFO : Stage: umount May 15 23:40:31.023046 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" May 15 23:40:31.023046 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" May 15 23:40:31.023046 ignition[997]: INFO : umount: umount passed May 15 23:40:31.023046 ignition[997]: INFO : Ignition finished successfully May 15 23:40:31.022139 systemd[1]: ignition-mount.service: Deactivated successfully. 
May 15 23:40:31.022241 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 15 23:40:31.024896 systemd[1]: Stopped target network.target - Network. May 15 23:40:31.026773 systemd[1]: ignition-disks.service: Deactivated successfully. May 15 23:40:31.026859 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 15 23:40:31.030438 systemd[1]: ignition-kargs.service: Deactivated successfully. May 15 23:40:31.030496 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 15 23:40:31.032368 systemd[1]: ignition-setup.service: Deactivated successfully. May 15 23:40:31.032416 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 15 23:40:31.034637 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 15 23:40:31.034685 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 15 23:40:31.037604 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 15 23:40:31.039470 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 15 23:40:31.046192 systemd-networkd[758]: eth0: DHCPv6 lease lost May 15 23:40:31.047273 systemd[1]: systemd-resolved.service: Deactivated successfully. May 15 23:40:31.047431 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 15 23:40:31.050811 systemd[1]: systemd-networkd.service: Deactivated successfully. May 15 23:40:31.050958 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 15 23:40:31.053709 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 15 23:40:31.053767 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 15 23:40:31.065337 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 15 23:40:31.066344 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 15 23:40:31.066434 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
May 15 23:40:31.068594 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 15 23:40:31.068643 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 15 23:40:31.070634 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 15 23:40:31.070685 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 15 23:40:31.073264 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 15 23:40:31.073313 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:40:31.075571 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:40:31.085430 systemd[1]: network-cleanup.service: Deactivated successfully. May 15 23:40:31.085545 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 15 23:40:31.096850 systemd[1]: sysroot-boot.service: Deactivated successfully. May 15 23:40:31.096971 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 15 23:40:31.099068 systemd[1]: systemd-udevd.service: Deactivated successfully. May 15 23:40:31.099226 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:40:31.101531 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 15 23:40:31.101591 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 15 23:40:31.102880 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 15 23:40:31.102912 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:40:31.105281 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 15 23:40:31.105330 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 15 23:40:31.108082 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 15 23:40:31.108147 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. 
May 15 23:40:31.110014 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 15 23:40:31.110058 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 15 23:40:31.112420 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 15 23:40:31.112469 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 15 23:40:31.126297 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 15 23:40:31.127464 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 15 23:40:31.127531 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:40:31.129841 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 15 23:40:31.129888 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:40:31.132102 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 15 23:40:31.132157 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:40:31.134493 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 15 23:40:31.134542 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:40:31.136931 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 15 23:40:31.138152 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 15 23:40:31.140730 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 15 23:40:31.155291 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 15 23:40:31.160895 systemd[1]: Switching root. May 15 23:40:31.190060 systemd-journald[239]: Journal stopped May 15 23:40:31.945444 systemd-journald[239]: Received SIGTERM from PID 1 (systemd). 
May 15 23:40:31.945503 kernel: SELinux: policy capability network_peer_controls=1 May 15 23:40:31.945516 kernel: SELinux: policy capability open_perms=1 May 15 23:40:31.945531 kernel: SELinux: policy capability extended_socket_class=1 May 15 23:40:31.945542 kernel: SELinux: policy capability always_check_network=0 May 15 23:40:31.945552 kernel: SELinux: policy capability cgroup_seclabel=1 May 15 23:40:31.945563 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 15 23:40:31.945574 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 15 23:40:31.945584 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 15 23:40:31.945595 systemd[1]: Successfully loaded SELinux policy in 32.019ms. May 15 23:40:31.945612 kernel: audit: type=1403 audit(1747352431.386:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 15 23:40:31.945624 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.301ms. May 15 23:40:31.945637 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) May 15 23:40:31.945658 systemd[1]: Detected virtualization kvm. May 15 23:40:31.945669 systemd[1]: Detected architecture arm64. May 15 23:40:31.945680 systemd[1]: Detected first boot. May 15 23:40:31.945691 systemd[1]: Initializing machine ID from VM UUID. May 15 23:40:31.945702 zram_generator::config[1062]: No configuration found. May 15 23:40:31.945715 systemd[1]: Populated /etc with preset unit settings. May 15 23:40:31.945726 systemd[1]: Queued start job for default target multi-user.target. May 15 23:40:31.945739 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
May 15 23:40:31.945752 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 15 23:40:31.945764 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 15 23:40:31.945776 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 15 23:40:31.945787 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 15 23:40:31.945798 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 15 23:40:31.945810 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 15 23:40:31.945821 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 15 23:40:31.945832 systemd[1]: Created slice user.slice - User and Session Slice. May 15 23:40:31.945844 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 15 23:40:31.945855 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 15 23:40:31.945867 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 15 23:40:31.945878 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 15 23:40:31.945889 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 15 23:40:31.945902 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 15 23:40:31.945913 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... May 15 23:40:31.945924 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 15 23:40:31.945935 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 15 23:40:31.945948 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
May 15 23:40:31.945971 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 15 23:40:31.945984 systemd[1]: Reached target slices.target - Slice Units. May 15 23:40:31.945997 systemd[1]: Reached target swap.target - Swaps. May 15 23:40:31.946008 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 15 23:40:31.946019 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 15 23:40:31.946031 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 15 23:40:31.946043 systemd[1]: Listening on systemd-journald.socket - Journal Socket. May 15 23:40:31.946056 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 15 23:40:31.946067 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 15 23:40:31.946078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 15 23:40:31.946097 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 15 23:40:31.946129 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 15 23:40:31.946141 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 15 23:40:31.946153 systemd[1]: Mounting media.mount - External Media Directory... May 15 23:40:31.946164 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 15 23:40:31.946175 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 15 23:40:31.946191 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 15 23:40:31.946218 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 15 23:40:31.946230 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:40:31.946241 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
May 15 23:40:31.946252 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 15 23:40:31.946264 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:40:31.946275 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:40:31.946287 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:40:31.946298 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 15 23:40:31.946313 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:40:31.946328 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 15 23:40:31.946341 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. May 15 23:40:31.946353 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) May 15 23:40:31.946364 kernel: loop: module loaded May 15 23:40:31.946374 systemd[1]: Starting systemd-journald.service - Journal Service... May 15 23:40:31.946385 kernel: fuse: init (API version 7.39) May 15 23:40:31.946395 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 15 23:40:31.946407 kernel: ACPI: bus type drm_connector registered May 15 23:40:31.946418 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 15 23:40:31.946429 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 15 23:40:31.946457 systemd-journald[1141]: Collecting audit messages is disabled. May 15 23:40:31.946480 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... 
May 15 23:40:31.946492 systemd-journald[1141]: Journal started May 15 23:40:31.946516 systemd-journald[1141]: Runtime Journal (/run/log/journal/5709a6d37a1545a997d3f18d139e2f96) is 5.9M, max 47.3M, 41.4M free. May 15 23:40:31.951739 systemd[1]: Started systemd-journald.service - Journal Service. May 15 23:40:31.951243 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 15 23:40:31.952436 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 15 23:40:31.953722 systemd[1]: Mounted media.mount - External Media Directory. May 15 23:40:31.954914 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 15 23:40:31.956173 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 15 23:40:31.957379 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 15 23:40:31.960507 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 15 23:40:31.962014 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 15 23:40:31.963549 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 15 23:40:31.963719 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 15 23:40:31.965187 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:40:31.965334 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:40:31.967031 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 23:40:31.967212 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:40:31.968522 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:40:31.968679 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:40:31.970164 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 15 23:40:31.970311 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. 
May 15 23:40:31.971688 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:40:31.971895 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:40:31.973382 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 15 23:40:31.974847 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 15 23:40:31.976425 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 15 23:40:31.988236 systemd[1]: Reached target network-pre.target - Preparation for Network. May 15 23:40:32.000216 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 15 23:40:32.002533 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 15 23:40:32.003675 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 15 23:40:32.008432 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 15 23:40:32.010763 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 15 23:40:32.011962 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:40:32.013406 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 15 23:40:32.014503 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:40:32.018277 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 15 23:40:32.020291 systemd-journald[1141]: Time spent on flushing to /var/log/journal/5709a6d37a1545a997d3f18d139e2f96 is 17.680ms for 850 entries. 
May 15 23:40:32.020291 systemd-journald[1141]: System Journal (/var/log/journal/5709a6d37a1545a997d3f18d139e2f96) is 8.0M, max 195.6M, 187.6M free. May 15 23:40:32.053954 systemd-journald[1141]: Received client request to flush runtime journal. May 15 23:40:32.021352 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 15 23:40:32.026808 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 15 23:40:32.033298 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 15 23:40:32.034636 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 15 23:40:32.036367 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 15 23:40:32.039474 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 15 23:40:32.049323 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 15 23:40:32.051077 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 15 23:40:32.057488 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 15 23:40:32.060814 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 15 23:40:32.061518 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. May 15 23:40:32.061538 systemd-tmpfiles[1194]: ACLs are not supported, ignoring. May 15 23:40:32.065880 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 15 23:40:32.079374 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 15 23:40:32.097816 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 15 23:40:32.110322 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
May 15 23:40:32.123488 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. May 15 23:40:32.123504 systemd-tmpfiles[1215]: ACLs are not supported, ignoring. May 15 23:40:32.127511 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 15 23:40:32.452509 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 15 23:40:32.470327 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 15 23:40:32.493001 systemd-udevd[1221]: Using default interface naming scheme 'v255'. May 15 23:40:32.516478 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 15 23:40:32.530334 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 15 23:40:32.555208 systemd[1]: Found device dev-ttyAMA0.device - /dev/ttyAMA0. May 15 23:40:32.582150 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1227) May 15 23:40:32.585394 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 15 23:40:32.621160 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 15 23:40:32.634865 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 15 23:40:32.678386 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 15 23:40:32.690327 systemd-networkd[1230]: lo: Link UP May 15 23:40:32.690339 systemd-networkd[1230]: lo: Gained carrier May 15 23:40:32.690608 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 15 23:40:32.691055 systemd-networkd[1230]: Enumeration completed May 15 23:40:32.692050 systemd[1]: Started systemd-networkd.service - Network Configuration. May 15 23:40:32.695479 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
May 15 23:40:32.695489 systemd-networkd[1230]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 15 23:40:32.696165 systemd-networkd[1230]: eth0: Link UP May 15 23:40:32.696168 systemd-networkd[1230]: eth0: Gained carrier May 15 23:40:32.696181 systemd-networkd[1230]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 15 23:40:32.701362 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 15 23:40:32.704433 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 15 23:40:32.715202 systemd-networkd[1230]: eth0: DHCPv4 address 10.0.0.43/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 15 23:40:32.715628 lvm[1259]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:40:32.723039 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 15 23:40:32.750760 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. May 15 23:40:32.752440 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 15 23:40:32.760289 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 15 23:40:32.765243 lvm[1267]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 15 23:40:32.794736 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 15 23:40:32.796492 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 15 23:40:32.797832 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 15 23:40:32.797863 systemd[1]: Reached target local-fs.target - Local File Systems. May 15 23:40:32.798955 systemd[1]: Reached target machines.target - Containers. 
May 15 23:40:32.801214 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). May 15 23:40:32.815302 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 15 23:40:32.817946 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 15 23:40:32.819175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:40:32.820374 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 15 23:40:32.822911 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... May 15 23:40:32.826440 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 15 23:40:32.828409 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 15 23:40:32.838109 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 15 23:40:32.842201 kernel: loop0: detected capacity change from 0 to 113536 May 15 23:40:32.847640 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 15 23:40:32.850311 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. May 15 23:40:32.854524 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 15 23:40:32.889276 kernel: loop1: detected capacity change from 0 to 203944 May 15 23:40:32.923947 kernel: loop2: detected capacity change from 0 to 116808 May 15 23:40:32.959151 kernel: loop3: detected capacity change from 0 to 113536 May 15 23:40:32.964130 kernel: loop4: detected capacity change from 0 to 203944 May 15 23:40:32.969144 kernel: loop5: detected capacity change from 0 to 116808 May 15 23:40:32.971484 (sd-merge)[1287]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. 
May 15 23:40:32.971851 (sd-merge)[1287]: Merged extensions into '/usr'. May 15 23:40:32.975897 systemd[1]: Reloading requested from client PID 1275 ('systemd-sysext') (unit systemd-sysext.service)... May 15 23:40:32.975912 systemd[1]: Reloading... May 15 23:40:33.015244 zram_generator::config[1315]: No configuration found. May 15 23:40:33.069283 ldconfig[1272]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 15 23:40:33.113049 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:40:33.155995 systemd[1]: Reloading finished in 179 ms. May 15 23:40:33.175968 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 15 23:40:33.177528 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 15 23:40:33.193289 systemd[1]: Starting ensure-sysext.service... May 15 23:40:33.195404 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 15 23:40:33.200499 systemd[1]: Reloading requested from client PID 1356 ('systemctl') (unit ensure-sysext.service)... May 15 23:40:33.200513 systemd[1]: Reloading... May 15 23:40:33.213348 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 15 23:40:33.213615 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 15 23:40:33.214294 systemd-tmpfiles[1357]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 15 23:40:33.214510 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. May 15 23:40:33.214562 systemd-tmpfiles[1357]: ACLs are not supported, ignoring. 
May 15 23:40:33.216972 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:40:33.216985 systemd-tmpfiles[1357]: Skipping /boot May 15 23:40:33.223908 systemd-tmpfiles[1357]: Detected autofs mount point /boot during canonicalization of boot. May 15 23:40:33.223924 systemd-tmpfiles[1357]: Skipping /boot May 15 23:40:33.251213 zram_generator::config[1389]: No configuration found. May 15 23:40:33.337658 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:40:33.380738 systemd[1]: Reloading finished in 179 ms. May 15 23:40:33.394186 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 15 23:40:33.415328 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:40:33.417932 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 15 23:40:33.420567 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 15 23:40:33.426280 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 15 23:40:33.429584 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 15 23:40:33.443761 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:40:33.445610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:40:33.456582 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:40:33.459003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:40:33.461994 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 15 23:40:33.463507 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 15 23:40:33.465484 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:40:33.465663 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:40:33.473924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:40:33.474242 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:40:33.476320 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:40:33.478481 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:40:33.485597 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 15 23:40:33.491548 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:40:33.502488 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:40:33.505252 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:40:33.507834 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:40:33.508989 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:40:33.512908 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 15 23:40:33.514183 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 23:40:33.517275 augenrules[1472]: No rules May 15 23:40:33.517024 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 15 23:40:33.519024 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 15 23:40:33.519207 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:40:33.521000 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:40:33.521541 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:40:33.523646 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:40:33.523910 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:40:33.525409 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:40:33.525636 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:40:33.528675 systemd-resolved[1431]: Positive Trust Anchors: May 15 23:40:33.528832 systemd-resolved[1431]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 15 23:40:33.528865 systemd-resolved[1431]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 15 23:40:33.531458 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 15 23:40:33.535804 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:40:33.535937 systemd-resolved[1431]: Defaulting to hostname 'linux'. May 15 23:40:33.535989 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 15 23:40:33.555402 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 15 23:40:33.556524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 15 23:40:33.558293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 15 23:40:33.560835 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 15 23:40:33.565486 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 15 23:40:33.569565 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 15 23:40:33.572034 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 15 23:40:33.572252 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 15 23:40:33.572932 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 15 23:40:33.575010 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 15 23:40:33.575214 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 15 23:40:33.577231 systemd[1]: modprobe@drm.service: Deactivated successfully. May 15 23:40:33.577394 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 15 23:40:33.578940 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 15 23:40:33.579123 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 15 23:40:33.580886 systemd[1]: modprobe@loop.service: Deactivated successfully. May 15 23:40:33.581140 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 15 23:40:33.582309 augenrules[1489]: /sbin/augenrules: No change May 15 23:40:33.584020 systemd[1]: Finished ensure-sysext.service. 
May 15 23:40:33.588346 augenrules[1520]: No rules May 15 23:40:33.589505 systemd[1]: audit-rules.service: Deactivated successfully. May 15 23:40:33.589726 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 15 23:40:33.591472 systemd[1]: Reached target network.target - Network. May 15 23:40:33.592390 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 15 23:40:33.593590 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 15 23:40:33.593645 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 15 23:40:33.605331 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 15 23:40:33.647300 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 15 23:40:33.648849 systemd-timesyncd[1528]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 15 23:40:33.648900 systemd-timesyncd[1528]: Initial clock synchronization to Thu 2025-05-15 23:40:33.744621 UTC. May 15 23:40:33.649013 systemd[1]: Reached target sysinit.target - System Initialization. May 15 23:40:33.650240 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 15 23:40:33.651511 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 15 23:40:33.652772 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 15 23:40:33.654031 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 15 23:40:33.654068 systemd[1]: Reached target paths.target - Path Units. May 15 23:40:33.655021 systemd[1]: Reached target time-set.target - System Time Set. 
May 15 23:40:33.656293 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 15 23:40:33.657497 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 15 23:40:33.658738 systemd[1]: Reached target timers.target - Timer Units. May 15 23:40:33.660407 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 15 23:40:33.663207 systemd[1]: Starting docker.socket - Docker Socket for the API... May 15 23:40:33.665268 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 15 23:40:33.671170 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 15 23:40:33.672245 systemd[1]: Reached target sockets.target - Socket Units. May 15 23:40:33.673240 systemd[1]: Reached target basic.target - Basic System. May 15 23:40:33.674434 systemd[1]: System is tainted: cgroupsv1 May 15 23:40:33.674484 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 15 23:40:33.674503 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 15 23:40:33.675885 systemd[1]: Starting containerd.service - containerd container runtime... May 15 23:40:33.678197 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 15 23:40:33.680303 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 15 23:40:33.685277 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 15 23:40:33.686285 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 15 23:40:33.687468 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 15 23:40:33.693208 jq[1534]: false May 15 23:40:33.691393 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... 
May 15 23:40:33.696346 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 15 23:40:33.699806 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 15 23:40:33.709301 systemd[1]: Starting systemd-logind.service - User Login Management... May 15 23:40:33.713861 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 15 23:40:33.714936 extend-filesystems[1536]: Found loop3 May 15 23:40:33.714936 extend-filesystems[1536]: Found loop4 May 15 23:40:33.714936 extend-filesystems[1536]: Found loop5 May 15 23:40:33.714936 extend-filesystems[1536]: Found vda May 15 23:40:33.714936 extend-filesystems[1536]: Found vda1 May 15 23:40:33.714936 extend-filesystems[1536]: Found vda2 May 15 23:40:33.714936 extend-filesystems[1536]: Found vda3 May 15 23:40:33.714936 extend-filesystems[1536]: Found usr May 15 23:40:33.714936 extend-filesystems[1536]: Found vda4 May 15 23:40:33.714936 extend-filesystems[1536]: Found vda6 May 15 23:40:33.714936 extend-filesystems[1536]: Found vda7 May 15 23:40:33.714936 extend-filesystems[1536]: Found vda9 May 15 23:40:33.714936 extend-filesystems[1536]: Checking size of /dev/vda9 May 15 23:40:33.718937 dbus-daemon[1533]: [system] SELinux support is enabled May 15 23:40:33.722412 systemd[1]: Starting update-engine.service - Update Engine... May 15 23:40:33.727996 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 15 23:40:33.730337 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 15 23:40:33.734528 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 15 23:40:33.734779 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 15 23:40:33.735034 systemd[1]: motdgen.service: Deactivated successfully. 
May 15 23:40:33.735260 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 15 23:40:33.740843 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 15 23:40:33.741065 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. May 15 23:40:33.750394 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 15 23:40:33.752441 extend-filesystems[1536]: Resized partition /dev/vda9 May 15 23:40:33.750429 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 15 23:40:33.753543 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 15 23:40:33.753568 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 15 23:40:33.757196 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1225) May 15 23:40:33.757244 jq[1558]: true May 15 23:40:33.762693 (ntainerd)[1567]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 15 23:40:33.764247 extend-filesystems[1571]: resize2fs 1.47.1 (20-May-2024) May 15 23:40:33.783552 jq[1575]: true May 15 23:40:33.791136 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 15 23:40:33.797455 tar[1562]: linux-arm64/helm May 15 23:40:33.814978 systemd-logind[1546]: Watching system buttons on /dev/input/event0 (Power Button) May 15 23:40:33.815225 systemd-logind[1546]: New seat seat0. 
May 15 23:40:33.815468 update_engine[1554]: I20250515 23:40:33.814805 1554 main.cc:92] Flatcar Update Engine starting May 15 23:40:33.815790 systemd[1]: Started systemd-logind.service - User Login Management. May 15 23:40:33.825601 systemd[1]: Started update-engine.service - Update Engine. May 15 23:40:33.825913 update_engine[1554]: I20250515 23:40:33.825865 1554 update_check_scheduler.cc:74] Next update check in 6m32s May 15 23:40:33.830206 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 15 23:40:33.837140 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 15 23:40:33.846414 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 15 23:40:33.864666 extend-filesystems[1571]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 15 23:40:33.864666 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 May 15 23:40:33.864666 extend-filesystems[1571]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 15 23:40:33.873266 extend-filesystems[1536]: Resized filesystem in /dev/vda9 May 15 23:40:33.875238 bash[1593]: Updated "/home/core/.ssh/authorized_keys" May 15 23:40:33.866158 systemd[1]: extend-filesystems.service: Deactivated successfully. May 15 23:40:33.866408 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 15 23:40:33.875476 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 15 23:40:33.880024 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 15 23:40:33.925205 locksmithd[1594]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 15 23:40:33.999974 containerd[1567]: time="2025-05-15T23:40:33.999782920Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 May 15 23:40:34.033108 containerd[1567]: time="2025-05-15T23:40:34.032857962Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 May 15 23:40:34.034641 containerd[1567]: time="2025-05-15T23:40:34.034606951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 May 15 23:40:34.035114 containerd[1567]: time="2025-05-15T23:40:34.034783359Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 May 15 23:40:34.035114 containerd[1567]: time="2025-05-15T23:40:34.034811290Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 May 15 23:40:34.035114 containerd[1567]: time="2025-05-15T23:40:34.034978550Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 May 15 23:40:34.035114 containerd[1567]: time="2025-05-15T23:40:34.034998062Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 May 15 23:40:34.035114 containerd[1567]: time="2025-05-15T23:40:34.035053680Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:40:34.035114 containerd[1567]: time="2025-05-15T23:40:34.035065702Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 May 15 23:40:34.035349 containerd[1567]: time="2025-05-15T23:40:34.035306716Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:40:34.035349 containerd[1567]: time="2025-05-15T23:40:34.035330113Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 May 15 23:40:34.035349 containerd[1567]: time="2025-05-15T23:40:34.035344362Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:40:34.035418 containerd[1567]: time="2025-05-15T23:40:34.035353915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 May 15 23:40:34.035451 containerd[1567]: time="2025-05-15T23:40:34.035434914Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 May 15 23:40:34.035667 containerd[1567]: time="2025-05-15T23:40:34.035641075Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 May 15 23:40:34.035800 containerd[1567]: time="2025-05-15T23:40:34.035776883Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 May 15 23:40:34.035829 containerd[1567]: time="2025-05-15T23:40:34.035805826Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 May 15 23:40:34.035904 containerd[1567]: time="2025-05-15T23:40:34.035889901Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 May 15 23:40:34.035950 containerd[1567]: time="2025-05-15T23:40:34.035938153Z" level=info msg="metadata content store policy set" policy=shared May 15 23:40:34.042406 containerd[1567]: time="2025-05-15T23:40:34.042364887Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 May 15 23:40:34.042494 containerd[1567]: time="2025-05-15T23:40:34.042420465Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 May 15 23:40:34.042948 containerd[1567]: time="2025-05-15T23:40:34.042587159Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 May 15 23:40:34.042948 containerd[1567]: time="2025-05-15T23:40:34.042620069Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 May 15 23:40:34.042948 containerd[1567]: time="2025-05-15T23:40:34.042640875Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 May 15 23:40:34.042948 containerd[1567]: time="2025-05-15T23:40:34.042794858Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 May 15 23:40:34.043122 containerd[1567]: time="2025-05-15T23:40:34.043092260Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044002032Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044077323Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044099749Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044114928Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044154598Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044168483Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044183298Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044198559Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044261625Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044277696Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044290244Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044310767Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044324206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 May 15 23:40:34.045849 containerd[1567]: time="2025-05-15T23:40:34.044337484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044350113Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044362378Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044375534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044387071Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044447142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044465641Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044482197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044500413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044513204Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044526967Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044543240Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044569268Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044645693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046195 containerd[1567]: time="2025-05-15T23:40:34.044664232Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 15 23:40:34.046445 containerd[1567]: time="2025-05-15T23:40:34.044998389Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 15 23:40:34.046445 containerd[1567]: time="2025-05-15T23:40:34.045067447Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 15 23:40:34.046445 containerd[1567]: time="2025-05-15T23:40:34.045080319Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 15 23:40:34.046445 containerd[1567]: time="2025-05-15T23:40:34.045092382Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 15 23:40:34.046445 containerd[1567]: time="2025-05-15T23:40:34.045101449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 May 15 23:40:34.046445 containerd[1567]: time="2025-05-15T23:40:34.045115779Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 15 23:40:34.046445 containerd[1567]: time="2025-05-15T23:40:34.045138488Z" level=info msg="NRI interface is disabled by configuration." May 15 23:40:34.046445 containerd[1567]: time="2025-05-15T23:40:34.045150915Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 15 23:40:34.046590 containerd[1567]: time="2025-05-15T23:40:34.045633145Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 15 23:40:34.046590 containerd[1567]: time="2025-05-15T23:40:34.045689492Z" level=info msg="Connect containerd service" May 15 23:40:34.046590 containerd[1567]: time="2025-05-15T23:40:34.045727461Z" level=info msg="using legacy CRI server" May 15 23:40:34.046590 containerd[1567]: time="2025-05-15T23:40:34.045735719Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 15 23:40:34.046590 containerd[1567]: time="2025-05-15T23:40:34.046093030Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 15 23:40:34.048261 containerd[1567]: time="2025-05-15T23:40:34.048228636Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" May 15 23:40:34.048517 containerd[1567]: time="2025-05-15T23:40:34.048474710Z" level=info msg="Start subscribing containerd event" May 15 23:40:34.048549 containerd[1567]: time="2025-05-15T23:40:34.048536765Z" level=info msg="Start recovering state" May 15 23:40:34.048628 containerd[1567]: time="2025-05-15T23:40:34.048611611Z" level=info msg="Start event monitor" May 15 23:40:34.048674 containerd[1567]: time="2025-05-15T23:40:34.048636991Z" level=info msg="Start snapshots syncer" May 15 23:40:34.048674 containerd[1567]: time="2025-05-15T23:40:34.048658000Z" level=info msg="Start cni network conf syncer for default" May 15 23:40:34.048674 containerd[1567]: time="2025-05-15T23:40:34.048666703Z" level=info msg="Start streaming server" May 15 23:40:34.049096 containerd[1567]: time="2025-05-15T23:40:34.049072548Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 15 23:40:34.049197 containerd[1567]: time="2025-05-15T23:40:34.049181276Z" level=info msg=serving... address=/run/containerd/containerd.sock May 15 23:40:34.050910 containerd[1567]: time="2025-05-15T23:40:34.049359182Z" level=info msg="containerd successfully booted in 0.050549s" May 15 23:40:34.049465 systemd[1]: Started containerd.service - containerd container runtime. May 15 23:40:34.192315 tar[1562]: linux-arm64/LICENSE May 15 23:40:34.192315 tar[1562]: linux-arm64/README.md May 15 23:40:34.206900 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 15 23:40:34.242427 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 15 23:40:34.262544 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 15 23:40:34.271431 systemd[1]: Starting issuegen.service - Generate /run/issue... May 15 23:40:34.278513 systemd[1]: issuegen.service: Deactivated successfully. May 15 23:40:34.278775 systemd[1]: Finished issuegen.service - Generate /run/issue. 
May 15 23:40:34.281868 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 15 23:40:34.294910 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 15 23:40:34.298473 systemd[1]: Started getty@tty1.service - Getty on tty1. May 15 23:40:34.301307 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 15 23:40:34.303021 systemd[1]: Reached target getty.target - Login Prompts. May 15 23:40:34.553177 systemd-networkd[1230]: eth0: Gained IPv6LL May 15 23:40:34.555717 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 15 23:40:34.557534 systemd[1]: Reached target network-online.target - Network is Online. May 15 23:40:34.574423 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 15 23:40:34.577381 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:40:34.579919 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 15 23:40:34.598437 systemd[1]: coreos-metadata.service: Deactivated successfully. May 15 23:40:34.598718 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 15 23:40:34.600549 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 15 23:40:34.605429 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 15 23:40:35.135923 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:40:35.137716 systemd[1]: Reached target multi-user.target - Multi-User System. May 15 23:40:35.138992 systemd[1]: Startup finished in 6.305s (kernel) + 3.785s (userspace) = 10.090s. 
May 15 23:40:35.140262 (kubelet)[1671]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 23:40:35.624306 kubelet[1671]: E0515 23:40:35.624186 1671 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 23:40:35.626448 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 23:40:35.626645 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 23:40:38.495464 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 15 23:40:38.507355 systemd[1]: Started sshd@0-10.0.0.43:22-10.0.0.1:40702.service - OpenSSH per-connection server daemon (10.0.0.1:40702).
May 15 23:40:38.563572 sshd[1684]: Accepted publickey for core from 10.0.0.1 port 40702 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:40:38.565238 sshd-session[1684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:40:38.572357 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 15 23:40:38.588422 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 15 23:40:38.592436 systemd-logind[1546]: New session 1 of user core.
May 15 23:40:38.598211 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 15 23:40:38.600430 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 15 23:40:38.607531 (systemd)[1690]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 15 23:40:38.680519 systemd[1690]: Queued start job for default target default.target.
May 15 23:40:38.680914 systemd[1690]: Created slice app.slice - User Application Slice.
May 15 23:40:38.680940 systemd[1690]: Reached target paths.target - Paths.
May 15 23:40:38.680951 systemd[1690]: Reached target timers.target - Timers.
May 15 23:40:38.699229 systemd[1690]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 15 23:40:38.705172 systemd[1690]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 15 23:40:38.705239 systemd[1690]: Reached target sockets.target - Sockets.
May 15 23:40:38.705251 systemd[1690]: Reached target basic.target - Basic System.
May 15 23:40:38.705288 systemd[1690]: Reached target default.target - Main User Target.
May 15 23:40:38.705313 systemd[1690]: Startup finished in 92ms.
May 15 23:40:38.705597 systemd[1]: Started user@500.service - User Manager for UID 500.
May 15 23:40:38.706959 systemd[1]: Started session-1.scope - Session 1 of User core.
May 15 23:40:38.766355 systemd[1]: Started sshd@1-10.0.0.43:22-10.0.0.1:40714.service - OpenSSH per-connection server daemon (10.0.0.1:40714).
May 15 23:40:38.817052 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 40714 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:40:38.818212 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:40:38.821860 systemd-logind[1546]: New session 2 of user core.
May 15 23:40:38.843483 systemd[1]: Started session-2.scope - Session 2 of User core.
May 15 23:40:38.894466 sshd[1705]: Connection closed by 10.0.0.1 port 40714
May 15 23:40:38.894862 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
May 15 23:40:38.904373 systemd[1]: Started sshd@2-10.0.0.43:22-10.0.0.1:40718.service - OpenSSH per-connection server daemon (10.0.0.1:40718).
May 15 23:40:38.905091 systemd[1]: sshd@1-10.0.0.43:22-10.0.0.1:40714.service: Deactivated successfully.
May 15 23:40:38.907205 systemd[1]: session-2.scope: Deactivated successfully.
May 15 23:40:38.908039 systemd-logind[1546]: Session 2 logged out. Waiting for processes to exit.
May 15 23:40:38.909214 systemd-logind[1546]: Removed session 2.
May 15 23:40:38.944976 sshd[1707]: Accepted publickey for core from 10.0.0.1 port 40718 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:40:38.946187 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:40:38.950431 systemd-logind[1546]: New session 3 of user core.
May 15 23:40:38.957384 systemd[1]: Started session-3.scope - Session 3 of User core.
May 15 23:40:39.006739 sshd[1713]: Connection closed by 10.0.0.1 port 40718
May 15 23:40:39.007536 sshd-session[1707]: pam_unix(sshd:session): session closed for user core
May 15 23:40:39.024376 systemd[1]: Started sshd@3-10.0.0.43:22-10.0.0.1:40720.service - OpenSSH per-connection server daemon (10.0.0.1:40720).
May 15 23:40:39.024779 systemd[1]: sshd@2-10.0.0.43:22-10.0.0.1:40718.service: Deactivated successfully.
May 15 23:40:39.027057 systemd-logind[1546]: Session 3 logged out. Waiting for processes to exit.
May 15 23:40:39.027711 systemd[1]: session-3.scope: Deactivated successfully.
May 15 23:40:39.029015 systemd-logind[1546]: Removed session 3.
May 15 23:40:39.064595 sshd[1715]: Accepted publickey for core from 10.0.0.1 port 40720 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:40:39.065853 sshd-session[1715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:40:39.070374 systemd-logind[1546]: New session 4 of user core.
May 15 23:40:39.080401 systemd[1]: Started session-4.scope - Session 4 of User core.
May 15 23:40:39.132825 sshd[1721]: Connection closed by 10.0.0.1 port 40720
May 15 23:40:39.133156 sshd-session[1715]: pam_unix(sshd:session): session closed for user core
May 15 23:40:39.147377 systemd[1]: Started sshd@4-10.0.0.43:22-10.0.0.1:40728.service - OpenSSH per-connection server daemon (10.0.0.1:40728).
May 15 23:40:39.147770 systemd[1]: sshd@3-10.0.0.43:22-10.0.0.1:40720.service: Deactivated successfully.
May 15 23:40:39.149467 systemd-logind[1546]: Session 4 logged out. Waiting for processes to exit.
May 15 23:40:39.150171 systemd[1]: session-4.scope: Deactivated successfully.
May 15 23:40:39.151441 systemd-logind[1546]: Removed session 4.
May 15 23:40:39.188420 sshd[1723]: Accepted publickey for core from 10.0.0.1 port 40728 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:40:39.189745 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:40:39.193703 systemd-logind[1546]: New session 5 of user core.
May 15 23:40:39.208419 systemd[1]: Started session-5.scope - Session 5 of User core.
May 15 23:40:39.273602 sudo[1730]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 15 23:40:39.273925 sudo[1730]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:40:39.286978 sudo[1730]: pam_unix(sudo:session): session closed for user root
May 15 23:40:39.289023 sshd[1729]: Connection closed by 10.0.0.1 port 40728
May 15 23:40:39.289844 sshd-session[1723]: pam_unix(sshd:session): session closed for user core
May 15 23:40:39.302403 systemd[1]: Started sshd@5-10.0.0.43:22-10.0.0.1:40734.service - OpenSSH per-connection server daemon (10.0.0.1:40734).
May 15 23:40:39.302792 systemd[1]: sshd@4-10.0.0.43:22-10.0.0.1:40728.service: Deactivated successfully.
May 15 23:40:39.305420 systemd[1]: session-5.scope: Deactivated successfully.
May 15 23:40:39.305697 systemd-logind[1546]: Session 5 logged out. Waiting for processes to exit.
May 15 23:40:39.306743 systemd-logind[1546]: Removed session 5.
May 15 23:40:39.343474 sshd[1732]: Accepted publickey for core from 10.0.0.1 port 40734 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:40:39.344861 sshd-session[1732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:40:39.348437 systemd-logind[1546]: New session 6 of user core.
May 15 23:40:39.370410 systemd[1]: Started session-6.scope - Session 6 of User core.
May 15 23:40:39.421975 sudo[1740]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 15 23:40:39.422293 sudo[1740]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:40:39.425477 sudo[1740]: pam_unix(sudo:session): session closed for user root
May 15 23:40:39.430822 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 15 23:40:39.431115 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:40:39.456452 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 15 23:40:39.480108 augenrules[1762]: No rules
May 15 23:40:39.481425 systemd[1]: audit-rules.service: Deactivated successfully.
May 15 23:40:39.481699 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 15 23:40:39.483111 sudo[1739]: pam_unix(sudo:session): session closed for user root
May 15 23:40:39.484756 sshd[1738]: Connection closed by 10.0.0.1 port 40734
May 15 23:40:39.485425 sshd-session[1732]: pam_unix(sshd:session): session closed for user core
May 15 23:40:39.495511 systemd[1]: Started sshd@6-10.0.0.43:22-10.0.0.1:40744.service - OpenSSH per-connection server daemon (10.0.0.1:40744).
May 15 23:40:39.495900 systemd[1]: sshd@5-10.0.0.43:22-10.0.0.1:40734.service: Deactivated successfully.
May 15 23:40:39.498633 systemd[1]: session-6.scope: Deactivated successfully.
May 15 23:40:39.498993 systemd-logind[1546]: Session 6 logged out. Waiting for processes to exit.
May 15 23:40:39.500443 systemd-logind[1546]: Removed session 6.
May 15 23:40:39.535657 sshd[1768]: Accepted publickey for core from 10.0.0.1 port 40744 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:40:39.536833 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:40:39.540886 systemd-logind[1546]: New session 7 of user core.
May 15 23:40:39.552375 systemd[1]: Started session-7.scope - Session 7 of User core.
May 15 23:40:39.603560 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 15 23:40:39.603831 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 15 23:40:39.929582 (dockerd)[1795]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 15 23:40:39.930241 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 15 23:40:40.217855 dockerd[1795]: time="2025-05-15T23:40:40.217721403Z" level=info msg="Starting up"
May 15 23:40:40.472024 dockerd[1795]: time="2025-05-15T23:40:40.471893239Z" level=info msg="Loading containers: start."
May 15 23:40:40.616166 kernel: Initializing XFRM netlink socket
May 15 23:40:40.683659 systemd-networkd[1230]: docker0: Link UP
May 15 23:40:40.723607 dockerd[1795]: time="2025-05-15T23:40:40.723488493Z" level=info msg="Loading containers: done."
May 15 23:40:40.738078 dockerd[1795]: time="2025-05-15T23:40:40.738027902Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 15 23:40:40.738230 dockerd[1795]: time="2025-05-15T23:40:40.738148467Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1
May 15 23:40:40.738297 dockerd[1795]: time="2025-05-15T23:40:40.738262114Z" level=info msg="Daemon has completed initialization"
May 15 23:40:40.772067 dockerd[1795]: time="2025-05-15T23:40:40.771999328Z" level=info msg="API listen on /run/docker.sock"
May 15 23:40:40.772290 systemd[1]: Started docker.service - Docker Application Container Engine.
May 15 23:40:41.413899 containerd[1567]: time="2025-05-15T23:40:41.413861339Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 15 23:40:42.016106 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1484923187.mount: Deactivated successfully.
May 15 23:40:42.938375 containerd[1567]: time="2025-05-15T23:40:42.938328811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:42.939300 containerd[1567]: time="2025-05-15T23:40:42.939013940Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651976"
May 15 23:40:42.940058 containerd[1567]: time="2025-05-15T23:40:42.940022194Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:42.943256 containerd[1567]: time="2025-05-15T23:40:42.943226614Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:42.944242 containerd[1567]: time="2025-05-15T23:40:42.944216994Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.530316236s"
May 15 23:40:42.944293 containerd[1567]: time="2025-05-15T23:40:42.944250170Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\""
May 15 23:40:42.947250 containerd[1567]: time="2025-05-15T23:40:42.947217779Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 15 23:40:44.044981 containerd[1567]: time="2025-05-15T23:40:44.044930608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:44.046755 containerd[1567]: time="2025-05-15T23:40:44.046706107Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459530"
May 15 23:40:44.047773 containerd[1567]: time="2025-05-15T23:40:44.047736585Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:44.051718 containerd[1567]: time="2025-05-15T23:40:44.051668260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:44.053922 containerd[1567]: time="2025-05-15T23:40:44.053880652Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 1.106626171s"
May 15 23:40:44.053922 containerd[1567]: time="2025-05-15T23:40:44.053917688Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\""
May 15 23:40:44.054589 containerd[1567]: time="2025-05-15T23:40:44.054561431Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 15 23:40:45.276870 containerd[1567]: time="2025-05-15T23:40:45.275987763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:45.276870 containerd[1567]: time="2025-05-15T23:40:45.276840228Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125281"
May 15 23:40:45.277638 containerd[1567]: time="2025-05-15T23:40:45.277599636Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:45.282161 containerd[1567]: time="2025-05-15T23:40:45.281284171Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:45.282510 containerd[1567]: time="2025-05-15T23:40:45.282379343Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 1.227786537s"
May 15 23:40:45.282510 containerd[1567]: time="2025-05-15T23:40:45.282416887Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\""
May 15 23:40:45.283051 containerd[1567]: time="2025-05-15T23:40:45.283023836Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 15 23:40:45.877201 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 15 23:40:45.892348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:40:46.017390 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:40:46.022165 (kubelet)[2067]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 15 23:40:46.083299 kubelet[2067]: E0515 23:40:46.083202 2067 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 15 23:40:46.086509 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 15 23:40:46.086915 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 15 23:40:46.416871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584526202.mount: Deactivated successfully.
May 15 23:40:46.759629 containerd[1567]: time="2025-05-15T23:40:46.759099584Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:46.760348 containerd[1567]: time="2025-05-15T23:40:46.759784432Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871377"
May 15 23:40:46.761101 containerd[1567]: time="2025-05-15T23:40:46.761046311Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:46.764448 containerd[1567]: time="2025-05-15T23:40:46.763746892Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:46.764448 containerd[1567]: time="2025-05-15T23:40:46.764225966Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.481028535s"
May 15 23:40:46.764448 containerd[1567]: time="2025-05-15T23:40:46.764251187Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\""
May 15 23:40:46.764808 containerd[1567]: time="2025-05-15T23:40:46.764763981Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 15 23:40:47.358774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2124358338.mount: Deactivated successfully.
May 15 23:40:48.147101 containerd[1567]: time="2025-05-15T23:40:48.145863447Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:48.147693 containerd[1567]: time="2025-05-15T23:40:48.147652627Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 15 23:40:48.148711 containerd[1567]: time="2025-05-15T23:40:48.148686694Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:48.152153 containerd[1567]: time="2025-05-15T23:40:48.152095381Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:48.153332 containerd[1567]: time="2025-05-15T23:40:48.153296557Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.388490118s"
May 15 23:40:48.153400 containerd[1567]: time="2025-05-15T23:40:48.153337232Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 15 23:40:48.153807 containerd[1567]: time="2025-05-15T23:40:48.153736248Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 15 23:40:48.645155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4151945249.mount: Deactivated successfully.
May 15 23:40:48.650255 containerd[1567]: time="2025-05-15T23:40:48.649735615Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:48.650982 containerd[1567]: time="2025-05-15T23:40:48.650932583Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 15 23:40:48.652106 containerd[1567]: time="2025-05-15T23:40:48.651885461Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:48.654064 containerd[1567]: time="2025-05-15T23:40:48.654029254Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:48.655682 containerd[1567]: time="2025-05-15T23:40:48.655282927Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 501.522073ms"
May 15 23:40:48.655682 containerd[1567]: time="2025-05-15T23:40:48.655314866Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 15 23:40:48.656193 containerd[1567]: time="2025-05-15T23:40:48.656165715Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 15 23:40:49.161203 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount128626272.mount: Deactivated successfully.
May 15 23:40:50.996273 containerd[1567]: time="2025-05-15T23:40:50.996223784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:50.997543 containerd[1567]: time="2025-05-15T23:40:50.997447393Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 15 23:40:50.998662 containerd[1567]: time="2025-05-15T23:40:50.998284936Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:51.001862 containerd[1567]: time="2025-05-15T23:40:51.001826335Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:40:51.003335 containerd[1567]: time="2025-05-15T23:40:51.003309008Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.347111079s"
May 15 23:40:51.003415 containerd[1567]: time="2025-05-15T23:40:51.003334119Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 15 23:40:55.906250 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:40:55.917412 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:40:55.939510 systemd[1]: Reloading requested from client PID 2225 ('systemctl') (unit session-7.scope)...
May 15 23:40:55.939527 systemd[1]: Reloading...
May 15 23:40:56.008371 zram_generator::config[2264]: No configuration found.
May 15 23:40:56.174012 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 15 23:40:56.226927 systemd[1]: Reloading finished in 287 ms.
May 15 23:40:56.264497 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 15 23:40:56.264568 systemd[1]: kubelet.service: Failed with result 'signal'.
May 15 23:40:56.264823 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:40:56.267316 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 15 23:40:56.379060 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 15 23:40:56.383705 (kubelet)[2322]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 15 23:40:56.426070 kubelet[2322]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:40:56.426070 kubelet[2322]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 15 23:40:56.426070 kubelet[2322]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 15 23:40:56.426070 kubelet[2322]: I0515 23:40:56.425425 2322 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 15 23:40:56.923672 kubelet[2322]: I0515 23:40:56.923639 2322 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 15 23:40:56.925143 kubelet[2322]: I0515 23:40:56.923779 2322 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 15 23:40:56.925143 kubelet[2322]: I0515 23:40:56.924098 2322 server.go:934] "Client rotation is on, will bootstrap in background"
May 15 23:40:56.967241 kubelet[2322]: I0515 23:40:56.967209 2322 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 23:40:56.970621 kubelet[2322]: E0515 23:40:56.970588 2322 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.43:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
May 15 23:40:56.974658 kubelet[2322]: E0515 23:40:56.974621 2322 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 23:40:56.974658 kubelet[2322]: I0515 23:40:56.974655 2322 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 23:40:56.978286 kubelet[2322]: I0515 23:40:56.978255 2322 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 23:40:56.979397 kubelet[2322]: I0515 23:40:56.979355 2322 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 15 23:40:56.979567 kubelet[2322]: I0515 23:40:56.979529 2322 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 23:40:56.979740 kubelet[2322]: I0515 23:40:56.979561 2322 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
May 15 23:40:56.979822 kubelet[2322]: I0515 23:40:56.979801 2322 topology_manager.go:138] "Creating topology manager with none policy"
May 15 23:40:56.979822 kubelet[2322]: I0515 23:40:56.979810 2322 container_manager_linux.go:300] "Creating device plugin manager"
May 15 23:40:56.980079 kubelet[2322]: I0515 23:40:56.980055 2322 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:40:56.984577 kubelet[2322]: I0515 23:40:56.984473 2322 kubelet.go:408] "Attempting to sync node with API server"
May 15 23:40:56.984577 kubelet[2322]: I0515 23:40:56.984511 2322 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 23:40:56.984577 kubelet[2322]: I0515 23:40:56.984541 2322 kubelet.go:314] "Adding apiserver pod source"
May 15 23:40:56.984676 kubelet[2322]: I0515 23:40:56.984625 2322 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 23:40:56.986111 kubelet[2322]: W0515 23:40:56.985856 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
May 15 23:40:56.986111 kubelet[2322]: E0515 23:40:56.985935 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
May 15 23:40:56.986111 kubelet[2322]: W0515 23:40:56.986037 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
May 15 23:40:56.986111 kubelet[2322]: E0515 23:40:56.986083 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
May 15 23:40:56.990959 kubelet[2322]: I0515 23:40:56.989318 2322 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 23:40:56.990959 kubelet[2322]: I0515 23:40:56.990069 2322 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 23:40:56.990959 kubelet[2322]: W0515 23:40:56.990198 2322 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 15 23:40:56.991816 kubelet[2322]: I0515 23:40:56.991255 2322 server.go:1274] "Started kubelet"
May 15 23:40:56.991816 kubelet[2322]: I0515 23:40:56.991432 2322 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 23:40:56.994651 kubelet[2322]: I0515 23:40:56.994593 2322 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 23:40:56.995615 kubelet[2322]: I0515 23:40:56.995580 2322 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 23:40:56.995834 kubelet[2322]: I0515 23:40:56.995814 2322 server.go:449] "Adding debug handlers to kubelet server"
May 15 23:40:56.997314 kubelet[2322]: I0515 23:40:56.997290 2322 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 23:40:56.998299 kubelet[2322]: I0515 23:40:56.998265 2322 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 23:40:56.998570 kubelet[2322]: I0515 23:40:56.998548 2322 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 15 23:40:56.998999 kubelet[2322]: I0515 23:40:56.998971 2322 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 15 23:40:56.999061 kubelet[2322]: I0515 23:40:56.999041 2322 reconciler.go:26] "Reconciler: start to sync state"
May 15 23:40:56.999637 kubelet[2322]: E0515 23:40:56.999613 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:40:57.001447 kubelet[2322]: W0515 23:40:57.000435 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused
May 15 23:40:57.001513 kubelet[2322]: E0515 23:40:57.001465 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError"
May 15 23:40:57.001551 kubelet[2322]: E0515 23:40:57.001514 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="200ms"
May 15 23:40:57.001713 kubelet[2322]: I0515 23:40:57.001680 2322 factory.go:221] Registration of the systemd container factory successfully
May 15 23:40:57.001786 kubelet[2322]: I0515 23:40:57.001765 2322 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 23:40:57.003237 kubelet[2322]: E0515 23:40:57.003203 2322 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 23:40:57.003375 kubelet[2322]: I0515 23:40:57.003359 2322 factory.go:221] Registration of the containerd container factory successfully
May 15 23:40:57.003808 kubelet[2322]: E0515 23:40:57.000217 2322 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.43:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.43:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183fd7cd405e34a6 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-15 23:40:56.991224998 +0000 UTC m=+0.604434282,LastTimestamp:2025-05-15 23:40:56.991224998 +0000 UTC m=+0.604434282,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 15 23:40:57.016167 kubelet[2322]: I0515 23:40:57.014534 2322 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 23:40:57.016167 kubelet[2322]: I0515 23:40:57.015533 2322 kubelet_network_linux.go:50] "Initialized iptables rules."
protocol="IPv6" May 15 23:40:57.016167 kubelet[2322]: I0515 23:40:57.015548 2322 status_manager.go:217] "Starting to sync pod status with apiserver" May 15 23:40:57.016167 kubelet[2322]: I0515 23:40:57.015566 2322 kubelet.go:2321] "Starting kubelet main sync loop" May 15 23:40:57.016167 kubelet[2322]: E0515 23:40:57.015606 2322 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 15 23:40:57.019273 kubelet[2322]: W0515 23:40:57.019217 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 23:40:57.019351 kubelet[2322]: E0515 23:40:57.019275 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.43:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 15 23:40:57.023707 kubelet[2322]: I0515 23:40:57.023684 2322 cpu_manager.go:214] "Starting CPU manager" policy="none" May 15 23:40:57.023707 kubelet[2322]: I0515 23:40:57.023702 2322 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 15 23:40:57.023812 kubelet[2322]: I0515 23:40:57.023720 2322 state_mem.go:36] "Initialized new in-memory state store" May 15 23:40:57.097995 kubelet[2322]: I0515 23:40:57.097951 2322 policy_none.go:49] "None policy: Start" May 15 23:40:57.098786 kubelet[2322]: I0515 23:40:57.098763 2322 memory_manager.go:170] "Starting memorymanager" policy="None" May 15 23:40:57.098862 kubelet[2322]: I0515 23:40:57.098795 2322 state_mem.go:35] "Initializing new in-memory state store" May 15 23:40:57.099956 kubelet[2322]: E0515 23:40:57.099935 2322 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:40:57.103965 kubelet[2322]: I0515 23:40:57.103939 2322 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 15 23:40:57.104181 kubelet[2322]: I0515 23:40:57.104166 2322 eviction_manager.go:189] "Eviction manager: starting control loop" May 15 23:40:57.104227 kubelet[2322]: I0515 23:40:57.104182 2322 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 15 23:40:57.104871 kubelet[2322]: I0515 23:40:57.104808 2322 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 15 23:40:57.106017 kubelet[2322]: E0515 23:40:57.105993 2322 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 15 23:40:57.200829 kubelet[2322]: I0515 23:40:57.200714 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:40:57.200829 kubelet[2322]: I0515 23:40:57.200757 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:40:57.200829 kubelet[2322]: I0515 23:40:57.200778 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod 
\"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 15 23:40:57.200829 kubelet[2322]: I0515 23:40:57.200794 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0bee3d1328190f9893a12251980d221-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e0bee3d1328190f9893a12251980d221\") " pod="kube-system/kube-apiserver-localhost" May 15 23:40:57.200829 kubelet[2322]: I0515 23:40:57.200812 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0bee3d1328190f9893a12251980d221-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e0bee3d1328190f9893a12251980d221\") " pod="kube-system/kube-apiserver-localhost" May 15 23:40:57.201024 kubelet[2322]: I0515 23:40:57.200830 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0bee3d1328190f9893a12251980d221-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e0bee3d1328190f9893a12251980d221\") " pod="kube-system/kube-apiserver-localhost" May 15 23:40:57.201024 kubelet[2322]: I0515 23:40:57.200844 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:40:57.201024 kubelet[2322]: I0515 23:40:57.200859 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: 
\"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:40:57.201024 kubelet[2322]: I0515 23:40:57.200874 2322 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 15 23:40:57.202257 kubelet[2322]: E0515 23:40:57.202071 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="400ms" May 15 23:40:57.206084 kubelet[2322]: I0515 23:40:57.206046 2322 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:40:57.206622 kubelet[2322]: E0515 23:40:57.206590 2322 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" May 15 23:40:57.408436 kubelet[2322]: I0515 23:40:57.408407 2322 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:40:57.408859 kubelet[2322]: E0515 23:40:57.408826 2322 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" May 15 23:40:57.423026 kubelet[2322]: E0515 23:40:57.422997 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:40:57.423435 kubelet[2322]: E0515 23:40:57.423413 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits 
were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:40:57.423767 containerd[1567]: time="2025-05-15T23:40:57.423727165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 15 23:40:57.424278 containerd[1567]: time="2025-05-15T23:40:57.423854596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e0bee3d1328190f9893a12251980d221,Namespace:kube-system,Attempt:0,}" May 15 23:40:57.424928 kubelet[2322]: E0515 23:40:57.424862 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:40:57.425744 containerd[1567]: time="2025-05-15T23:40:57.425625019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 15 23:40:57.603299 kubelet[2322]: E0515 23:40:57.603171 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="800ms" May 15 23:40:57.810109 kubelet[2322]: I0515 23:40:57.810074 2322 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:40:57.810511 kubelet[2322]: E0515 23:40:57.810488 2322 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.43:6443/api/v1/nodes\": dial tcp 10.0.0.43:6443: connect: connection refused" node="localhost" May 15 23:40:57.965437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4108982901.mount: Deactivated successfully. 
May 15 23:40:57.971628 containerd[1567]: time="2025-05-15T23:40:57.971584278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:40:57.973143 containerd[1567]: time="2025-05-15T23:40:57.973030281Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269175" May 15 23:40:57.976026 containerd[1567]: time="2025-05-15T23:40:57.975989123Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:40:57.978090 containerd[1567]: time="2025-05-15T23:40:57.978047466Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:40:57.978806 containerd[1567]: time="2025-05-15T23:40:57.978763664Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 23:40:57.979497 containerd[1567]: time="2025-05-15T23:40:57.979453127Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:40:57.980149 containerd[1567]: time="2025-05-15T23:40:57.980037211Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 15 23:40:57.983315 containerd[1567]: time="2025-05-15T23:40:57.983276049Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 15 23:40:57.984753 
containerd[1567]: time="2025-05-15T23:40:57.984617474Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.806983ms" May 15 23:40:57.985375 containerd[1567]: time="2025-05-15T23:40:57.985345639Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 560.958227ms" May 15 23:40:57.989488 containerd[1567]: time="2025-05-15T23:40:57.989321566Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 563.637434ms" May 15 23:40:58.109042 kubelet[2322]: W0515 23:40:58.108929 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 23:40:58.109042 kubelet[2322]: E0515 23:40:58.108996 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.43:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 15 23:40:58.117614 kubelet[2322]: W0515 23:40:58.117563 2322 
reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 23:40:58.117802 kubelet[2322]: E0515 23:40:58.117766 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.43:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 15 23:40:58.130160 containerd[1567]: time="2025-05-15T23:40:58.130049880Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:40:58.130160 containerd[1567]: time="2025-05-15T23:40:58.130130359Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:40:58.130160 containerd[1567]: time="2025-05-15T23:40:58.130141685Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:40:58.130387 containerd[1567]: time="2025-05-15T23:40:58.130223644Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:40:58.130942 containerd[1567]: time="2025-05-15T23:40:58.130634484Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:40:58.130942 containerd[1567]: time="2025-05-15T23:40:58.130686029Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:40:58.130942 containerd[1567]: time="2025-05-15T23:40:58.130702637Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:40:58.130942 containerd[1567]: time="2025-05-15T23:40:58.130791560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:40:58.133288 containerd[1567]: time="2025-05-15T23:40:58.133201852Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:40:58.133356 containerd[1567]: time="2025-05-15T23:40:58.133315827Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:40:58.133356 containerd[1567]: time="2025-05-15T23:40:58.133344641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:40:58.133497 containerd[1567]: time="2025-05-15T23:40:58.133466860Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:40:58.189458 containerd[1567]: time="2025-05-15T23:40:58.189406440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"22b7c0157bca0dd91e456a42620fe3c2da50b445c4b02b5a70f2329ac7edecae\"" May 15 23:40:58.190431 kubelet[2322]: E0515 23:40:58.190404 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:40:58.190852 containerd[1567]: time="2025-05-15T23:40:58.190691984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e0bee3d1328190f9893a12251980d221,Namespace:kube-system,Attempt:0,} returns sandbox id \"6ebaf0732fa1aaabfc3668c3e82ca9db2b3b2be7e85408014b73a6848adc4b43\"" May 15 23:40:58.191440 kubelet[2322]: E0515 23:40:58.191315 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:40:58.193729 containerd[1567]: time="2025-05-15T23:40:58.193694563Z" level=info msg="CreateContainer within sandbox \"22b7c0157bca0dd91e456a42620fe3c2da50b445c4b02b5a70f2329ac7edecae\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 15 23:40:58.195076 containerd[1567]: time="2025-05-15T23:40:58.194834757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"015353278c916705e5b04aa0a2b444042e6bf8e6a9cd764f221ec0ccb8a920db\"" May 15 23:40:58.195076 containerd[1567]: time="2025-05-15T23:40:58.194886942Z" level=info msg="CreateContainer within sandbox \"6ebaf0732fa1aaabfc3668c3e82ca9db2b3b2be7e85408014b73a6848adc4b43\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 15 23:40:58.195992 kubelet[2322]: E0515 23:40:58.195917 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:40:58.199324 containerd[1567]: time="2025-05-15T23:40:58.199142570Z" level=info msg="CreateContainer within sandbox \"015353278c916705e5b04aa0a2b444042e6bf8e6a9cd764f221ec0ccb8a920db\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 15 23:40:58.208628 containerd[1567]: time="2025-05-15T23:40:58.208397827Z" level=info msg="CreateContainer within sandbox \"22b7c0157bca0dd91e456a42620fe3c2da50b445c4b02b5a70f2329ac7edecae\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"edeee62a77861b9db0acfca798c95aebc2ab39f28ffd5b636c2caed0c601da74\"" May 15 23:40:58.209429 containerd[1567]: time="2025-05-15T23:40:58.209393271Z" level=info msg="StartContainer for \"edeee62a77861b9db0acfca798c95aebc2ab39f28ffd5b636c2caed0c601da74\"" May 15 23:40:58.212828 containerd[1567]: time="2025-05-15T23:40:58.212777235Z" level=info msg="CreateContainer within sandbox \"6ebaf0732fa1aaabfc3668c3e82ca9db2b3b2be7e85408014b73a6848adc4b43\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"e2f446abe76e5a5e723ec7b9c0458a79d328467801a6a41ed626535cea8d1300\"" May 15 23:40:58.213655 containerd[1567]: time="2025-05-15T23:40:58.213634211Z" level=info msg="StartContainer for \"e2f446abe76e5a5e723ec7b9c0458a79d328467801a6a41ed626535cea8d1300\"" May 15 23:40:58.216084 containerd[1567]: time="2025-05-15T23:40:58.215776492Z" level=info msg="CreateContainer within sandbox \"015353278c916705e5b04aa0a2b444042e6bf8e6a9cd764f221ec0ccb8a920db\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"dc7eda80b2bbc13987b398d864f122a0ebee1063049819c3ba5412a576105c56\"" May 15 23:40:58.217374 containerd[1567]: 
time="2025-05-15T23:40:58.217331688Z" level=info msg="StartContainer for \"dc7eda80b2bbc13987b398d864f122a0ebee1063049819c3ba5412a576105c56\"" May 15 23:40:58.256177 kubelet[2322]: W0515 23:40:58.255484 2322 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.43:6443: connect: connection refused May 15 23:40:58.256177 kubelet[2322]: E0515 23:40:58.255692 2322 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.43:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.43:6443: connect: connection refused" logger="UnhandledError" May 15 23:40:58.276019 containerd[1567]: time="2025-05-15T23:40:58.275977182Z" level=info msg="StartContainer for \"edeee62a77861b9db0acfca798c95aebc2ab39f28ffd5b636c2caed0c601da74\" returns successfully" May 15 23:40:58.285084 containerd[1567]: time="2025-05-15T23:40:58.285034863Z" level=info msg="StartContainer for \"dc7eda80b2bbc13987b398d864f122a0ebee1063049819c3ba5412a576105c56\" returns successfully" May 15 23:40:58.291324 containerd[1567]: time="2025-05-15T23:40:58.291276975Z" level=info msg="StartContainer for \"e2f446abe76e5a5e723ec7b9c0458a79d328467801a6a41ed626535cea8d1300\" returns successfully" May 15 23:40:58.403904 kubelet[2322]: E0515 23:40:58.403853 2322 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.43:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.43:6443: connect: connection refused" interval="1.6s" May 15 23:40:58.612775 kubelet[2322]: I0515 23:40:58.612650 2322 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 15 23:40:59.026630 kubelet[2322]: E0515 23:40:59.026566 2322 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:40:59.028405 kubelet[2322]: E0515 23:40:59.028355 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:40:59.030142 kubelet[2322]: E0515 23:40:59.029379 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:40:59.870043 kubelet[2322]: I0515 23:40:59.869832 2322 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 15 23:40:59.870043 kubelet[2322]: E0515 23:40:59.869871 2322 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" May 15 23:40:59.879008 kubelet[2322]: E0515 23:40:59.878949 2322 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 15 23:40:59.987990 kubelet[2322]: I0515 23:40:59.987740 2322 apiserver.go:52] "Watching apiserver" May 15 23:40:59.999812 kubelet[2322]: I0515 23:40:59.999767 2322 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 15 23:41:00.036359 kubelet[2322]: E0515 23:41:00.036323 2322 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 15 23:41:00.036579 kubelet[2322]: E0515 23:41:00.036537 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:41:01.056533 kubelet[2322]: E0515 23:41:01.056472 2322 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:41:01.816420 kubelet[2322]: E0515 23:41:01.816034 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:41:02.034285 kubelet[2322]: E0515 23:41:02.034244 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:41:02.034858 kubelet[2322]: E0515 23:41:02.034781 2322 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:41:02.362759 systemd[1]: Reloading requested from client PID 2599 ('systemctl') (unit session-7.scope)... May 15 23:41:02.362775 systemd[1]: Reloading... May 15 23:41:02.429147 zram_generator::config[2641]: No configuration found. May 15 23:41:02.516632 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 15 23:41:02.575635 systemd[1]: Reloading finished in 212 ms. May 15 23:41:02.603214 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:41:02.620074 systemd[1]: kubelet.service: Deactivated successfully. May 15 23:41:02.620590 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 15 23:41:02.632594 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 15 23:41:02.727976 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 15 23:41:02.731836 (kubelet)[2690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 15 23:41:02.766839 kubelet[2690]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:41:02.766839 kubelet[2690]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 15 23:41:02.766839 kubelet[2690]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 15 23:41:02.767236 kubelet[2690]: I0515 23:41:02.766889 2690 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 15 23:41:02.774233 kubelet[2690]: I0515 23:41:02.774196 2690 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 15 23:41:02.774233 kubelet[2690]: I0515 23:41:02.774228 2690 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 15 23:41:02.774763 kubelet[2690]: I0515 23:41:02.774722 2690 server.go:934] "Client rotation is on, will bootstrap in background" May 15 23:41:02.778209 kubelet[2690]: I0515 23:41:02.778072 2690 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 15 23:41:02.780645 kubelet[2690]: I0515 23:41:02.780240 2690 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 15 23:41:02.784235 kubelet[2690]: E0515 23:41:02.784207 2690 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
May 15 23:41:02.784235 kubelet[2690]: I0515 23:41:02.784234 2690 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
May 15 23:41:02.787135 kubelet[2690]: I0515 23:41:02.786845 2690 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 15 23:41:02.787376 kubelet[2690]: I0515 23:41:02.787352 2690 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 15 23:41:02.787484 kubelet[2690]: I0515 23:41:02.787457 2690 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 15 23:41:02.787684 kubelet[2690]: I0515 23:41:02.787492 2690 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1}
May 15 23:41:02.787750 kubelet[2690]: I0515 23:41:02.787693 2690 topology_manager.go:138] "Creating topology manager with none policy"
May 15 23:41:02.787750 kubelet[2690]: I0515 23:41:02.787702 2690 container_manager_linux.go:300] "Creating device plugin manager"
May 15 23:41:02.787750 kubelet[2690]: I0515 23:41:02.787744 2690 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:41:02.787847 kubelet[2690]: I0515 23:41:02.787837 2690 kubelet.go:408] "Attempting to sync node with API server"
May 15 23:41:02.787881 kubelet[2690]: I0515 23:41:02.787856 2690 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 15 23:41:02.787881 kubelet[2690]: I0515 23:41:02.787876 2690 kubelet.go:314] "Adding apiserver pod source"
May 15 23:41:02.787938 kubelet[2690]: I0515 23:41:02.787889 2690 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 15 23:41:02.788887 kubelet[2690]: I0515 23:41:02.788487 2690 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
May 15 23:41:02.789124 kubelet[2690]: I0515 23:41:02.789095 2690 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 15 23:41:02.789576 kubelet[2690]: I0515 23:41:02.789558 2690 server.go:1274] "Started kubelet"
May 15 23:41:02.790366 kubelet[2690]: I0515 23:41:02.790319 2690 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 15 23:41:02.790611 kubelet[2690]: I0515 23:41:02.790595 2690 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
May 15 23:41:02.790675 kubelet[2690]: I0515 23:41:02.790655 2690 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
May 15 23:41:02.798142 kubelet[2690]: I0515 23:41:02.795957 2690 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 15 23:41:02.798142 kubelet[2690]: I0515 23:41:02.796351 2690 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
May 15 23:41:02.798142 kubelet[2690]: I0515 23:41:02.797760 2690 volume_manager.go:289] "Starting Kubelet Volume Manager"
May 15 23:41:02.798142 kubelet[2690]: E0515 23:41:02.797868 2690 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found"
May 15 23:41:02.798598 kubelet[2690]: I0515 23:41:02.798571 2690 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
May 15 23:41:02.798712 kubelet[2690]: I0515 23:41:02.798698 2690 reconciler.go:26] "Reconciler: start to sync state"
May 15 23:41:02.808394 kubelet[2690]: I0515 23:41:02.808361 2690 factory.go:221] Registration of the systemd container factory successfully
May 15 23:41:02.808559 kubelet[2690]: I0515 23:41:02.808534 2690 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
May 15 23:41:02.817510 kubelet[2690]: I0515 23:41:02.817482 2690 server.go:449] "Adding debug handlers to kubelet server"
May 15 23:41:02.817861 kubelet[2690]: I0515 23:41:02.817831 2690 factory.go:221] Registration of the containerd container factory successfully
May 15 23:41:02.819386 kubelet[2690]: E0515 23:41:02.819366 2690 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
May 15 23:41:02.827898 kubelet[2690]: I0515 23:41:02.827853 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
May 15 23:41:02.828953 kubelet[2690]: I0515 23:41:02.828920 2690 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
May 15 23:41:02.829041 kubelet[2690]: I0515 23:41:02.828965 2690 status_manager.go:217] "Starting to sync pod status with apiserver"
May 15 23:41:02.829041 kubelet[2690]: I0515 23:41:02.828986 2690 kubelet.go:2321] "Starting kubelet main sync loop"
May 15 23:41:02.829087 kubelet[2690]: E0515 23:41:02.829042 2690 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
May 15 23:41:02.856201 kubelet[2690]: I0515 23:41:02.856172 2690 cpu_manager.go:214] "Starting CPU manager" policy="none"
May 15 23:41:02.856201 kubelet[2690]: I0515 23:41:02.856189 2690 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
May 15 23:41:02.856201 kubelet[2690]: I0515 23:41:02.856211 2690 state_mem.go:36] "Initialized new in-memory state store"
May 15 23:41:02.856376 kubelet[2690]: I0515 23:41:02.856368 2690 state_mem.go:88] "Updated default CPUSet" cpuSet=""
May 15 23:41:02.856399 kubelet[2690]: I0515 23:41:02.856379 2690 state_mem.go:96] "Updated CPUSet assignments" assignments={}
May 15 23:41:02.856399 kubelet[2690]: I0515 23:41:02.856398 2690 policy_none.go:49] "None policy: Start"
May 15 23:41:02.856923 kubelet[2690]: I0515 23:41:02.856856 2690 memory_manager.go:170] "Starting memorymanager" policy="None"
May 15 23:41:02.856923 kubelet[2690]: I0515 23:41:02.856882 2690 state_mem.go:35] "Initializing new in-memory state store"
May 15 23:41:02.857046 kubelet[2690]: I0515 23:41:02.857018 2690 state_mem.go:75] "Updated machine memory state"
May 15 23:41:02.858818 kubelet[2690]: I0515 23:41:02.858203 2690 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
May 15 23:41:02.858818 kubelet[2690]: I0515 23:41:02.858379 2690 eviction_manager.go:189] "Eviction manager: starting control loop"
May 15 23:41:02.858818 kubelet[2690]: I0515 23:41:02.858391 2690 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
May 15 23:41:02.858818 kubelet[2690]: I0515 23:41:02.858596 2690 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
May 15 23:41:02.938201 kubelet[2690]: E0515 23:41:02.937980 2690 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost"
May 15 23:41:02.938347 kubelet[2690]: E0515 23:41:02.938258 2690 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost"
May 15 23:41:02.963798 kubelet[2690]: I0515 23:41:02.963767 2690 kubelet_node_status.go:72] "Attempting to register node" node="localhost"
May 15 23:41:02.976268 kubelet[2690]: I0515 23:41:02.976237 2690 kubelet_node_status.go:111] "Node was previously registered" node="localhost"
May 15 23:41:02.976359 kubelet[2690]: I0515 23:41:02.976327 2690 kubelet_node_status.go:75] "Successfully registered node" node="localhost"
May 15 23:41:02.999941 kubelet[2690]: I0515 23:41:02.999900 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost"
May 15 23:41:03.000239 kubelet[2690]: I0515 23:41:03.000081 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e0bee3d1328190f9893a12251980d221-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e0bee3d1328190f9893a12251980d221\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:41:03.000239 kubelet[2690]: I0515 23:41:03.000109 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:41:03.000239 kubelet[2690]: I0515 23:41:03.000159 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:41:03.000239 kubelet[2690]: I0515 23:41:03.000176 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:41:03.000239 kubelet[2690]: I0515 23:41:03.000193 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:41:03.000407 kubelet[2690]: I0515 23:41:03.000209 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e0bee3d1328190f9893a12251980d221-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e0bee3d1328190f9893a12251980d221\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:41:03.000407 kubelet[2690]: I0515 23:41:03.000224 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e0bee3d1328190f9893a12251980d221-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e0bee3d1328190f9893a12251980d221\") " pod="kube-system/kube-apiserver-localhost"
May 15 23:41:03.000576 kubelet[2690]: I0515 23:41:03.000526 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost"
May 15 23:41:03.237713 kubelet[2690]: E0515 23:41:03.237555 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:03.238472 kubelet[2690]: E0515 23:41:03.238404 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:03.238785 kubelet[2690]: E0515 23:41:03.238765 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:03.417338 sudo[2725]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin
May 15 23:41:03.417643 sudo[2725]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0)
May 15 23:41:03.788728 kubelet[2690]: I0515 23:41:03.788584 2690 apiserver.go:52] "Watching apiserver"
May 15 23:41:03.801429 kubelet[2690]: I0515 23:41:03.801393 2690 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
May 15 23:41:03.837947 kubelet[2690]: E0515 23:41:03.837754 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:03.837947 kubelet[2690]: E0515 23:41:03.837935 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:03.851393 kubelet[2690]: E0515 23:41:03.851226 2690 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost"
May 15 23:41:03.851924 kubelet[2690]: E0515 23:41:03.851823 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:03.871819 sudo[2725]: pam_unix(sudo:session): session closed for user root
May 15 23:41:03.876378 kubelet[2690]: I0515 23:41:03.876319 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.876301796 podStartE2EDuration="2.876301796s" podCreationTimestamp="2025-05-15 23:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:41:03.874328744 +0000 UTC m=+1.139014821" watchObservedRunningTime="2025-05-15 23:41:03.876301796 +0000 UTC m=+1.140987873"
May 15 23:41:03.896750 kubelet[2690]: I0515 23:41:03.896629 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.896613619 podStartE2EDuration="2.896613619s" podCreationTimestamp="2025-05-15 23:41:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:41:03.888198161 +0000 UTC m=+1.152884238" watchObservedRunningTime="2025-05-15 23:41:03.896613619 +0000 UTC m=+1.161299696"
May 15 23:41:03.905353 kubelet[2690]: I0515 23:41:03.905300 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.9052843 podStartE2EDuration="1.9052843s" podCreationTimestamp="2025-05-15 23:41:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:41:03.89641973 +0000 UTC m=+1.161105807" watchObservedRunningTime="2025-05-15 23:41:03.9052843 +0000 UTC m=+1.169970337"
May 15 23:41:04.838581 kubelet[2690]: E0515 23:41:04.838546 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:04.839005 kubelet[2690]: E0515 23:41:04.838617 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:05.841318 kubelet[2690]: E0515 23:41:05.841090 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:05.964476 sudo[1775]: pam_unix(sudo:session): session closed for user root
May 15 23:41:05.965662 sshd[1774]: Connection closed by 10.0.0.1 port 40744
May 15 23:41:05.966027 sshd-session[1768]: pam_unix(sshd:session): session closed for user core
May 15 23:41:05.969263 systemd[1]: sshd@6-10.0.0.43:22-10.0.0.1:40744.service: Deactivated successfully.
May 15 23:41:05.971470 systemd-logind[1546]: Session 7 logged out. Waiting for processes to exit.
May 15 23:41:05.971551 systemd[1]: session-7.scope: Deactivated successfully.
May 15 23:41:05.972795 systemd-logind[1546]: Removed session 7.
May 15 23:41:07.307932 kubelet[2690]: E0515 23:41:07.307577 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:07.700161 kubelet[2690]: I0515 23:41:07.696746 2690 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
May 15 23:41:07.700161 kubelet[2690]: I0515 23:41:07.697481 2690 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
May 15 23:41:07.700326 containerd[1567]: time="2025-05-15T23:41:07.697034151Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
May 15 23:41:08.640166 kubelet[2690]: I0515 23:41:08.640107 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-bpf-maps\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640166 kubelet[2690]: I0515 23:41:08.640161 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shqfq\" (UniqueName: \"kubernetes.io/projected/d6a9ae21-94bf-496a-aaa6-88a6d844d251-kube-api-access-shqfq\") pod \"kube-proxy-jgvdt\" (UID: \"d6a9ae21-94bf-496a-aaa6-88a6d844d251\") " pod="kube-system/kube-proxy-jgvdt"
May 15 23:41:08.640635 kubelet[2690]: I0515 23:41:08.640184 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29e030a9-1896-472f-8031-6cce6378509b-hubble-tls\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640635 kubelet[2690]: I0515 23:41:08.640202 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6a9ae21-94bf-496a-aaa6-88a6d844d251-kube-proxy\") pod \"kube-proxy-jgvdt\" (UID: \"d6a9ae21-94bf-496a-aaa6-88a6d844d251\") " pod="kube-system/kube-proxy-jgvdt"
May 15 23:41:08.640635 kubelet[2690]: I0515 23:41:08.640221 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-host-proc-sys-kernel\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640635 kubelet[2690]: I0515 23:41:08.640288 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-hostproc\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640635 kubelet[2690]: I0515 23:41:08.640306 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cilium-cgroup\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640635 kubelet[2690]: I0515 23:41:08.640379 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cni-path\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640779 kubelet[2690]: I0515 23:41:08.640418 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29e030a9-1896-472f-8031-6cce6378509b-cilium-config-path\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640779 kubelet[2690]: I0515 23:41:08.640442 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-xtables-lock\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640779 kubelet[2690]: I0515 23:41:08.640459 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29e030a9-1896-472f-8031-6cce6378509b-clustermesh-secrets\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640779 kubelet[2690]: I0515 23:41:08.640478 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h78mv\" (UniqueName: \"kubernetes.io/projected/29e030a9-1896-472f-8031-6cce6378509b-kube-api-access-h78mv\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640779 kubelet[2690]: I0515 23:41:08.640501 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6a9ae21-94bf-496a-aaa6-88a6d844d251-lib-modules\") pod \"kube-proxy-jgvdt\" (UID: \"d6a9ae21-94bf-496a-aaa6-88a6d844d251\") " pod="kube-system/kube-proxy-jgvdt"
May 15 23:41:08.640891 kubelet[2690]: I0515 23:41:08.640546 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6a9ae21-94bf-496a-aaa6-88a6d844d251-xtables-lock\") pod \"kube-proxy-jgvdt\" (UID: \"d6a9ae21-94bf-496a-aaa6-88a6d844d251\") " pod="kube-system/kube-proxy-jgvdt"
May 15 23:41:08.640891 kubelet[2690]: I0515 23:41:08.640583 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cilium-run\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640891 kubelet[2690]: I0515 23:41:08.640601 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-etc-cni-netd\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640891 kubelet[2690]: I0515 23:41:08.640617 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-host-proc-sys-net\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.640891 kubelet[2690]: I0515 23:41:08.640639 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-lib-modules\") pod \"cilium-wv8rs\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " pod="kube-system/cilium-wv8rs"
May 15 23:41:08.840288 kubelet[2690]: E0515 23:41:08.840247 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:08.841352 containerd[1567]: time="2025-05-15T23:41:08.841313427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jgvdt,Uid:d6a9ae21-94bf-496a-aaa6-88a6d844d251,Namespace:kube-system,Attempt:0,}"
May 15 23:41:08.843155 kubelet[2690]: I0515 23:41:08.843000 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f8nr\" (UniqueName: \"kubernetes.io/projected/54afbe1f-8b8b-4054-a909-15bc5da113a6-kube-api-access-4f8nr\") pod \"cilium-operator-5d85765b45-rx9lp\" (UID: \"54afbe1f-8b8b-4054-a909-15bc5da113a6\") " pod="kube-system/cilium-operator-5d85765b45-rx9lp"
May 15 23:41:08.843155 kubelet[2690]: I0515 23:41:08.843039 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54afbe1f-8b8b-4054-a909-15bc5da113a6-cilium-config-path\") pod \"cilium-operator-5d85765b45-rx9lp\" (UID: \"54afbe1f-8b8b-4054-a909-15bc5da113a6\") " pod="kube-system/cilium-operator-5d85765b45-rx9lp"
May 15 23:41:08.843155 kubelet[2690]: E0515 23:41:08.843075 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:08.843448 containerd[1567]: time="2025-05-15T23:41:08.843419670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wv8rs,Uid:29e030a9-1896-472f-8031-6cce6378509b,Namespace:kube-system,Attempt:0,}"
May 15 23:41:08.897614 containerd[1567]: time="2025-05-15T23:41:08.897420972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:41:08.897614 containerd[1567]: time="2025-05-15T23:41:08.897482058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:41:08.897614 containerd[1567]: time="2025-05-15T23:41:08.897497219Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:08.898824 containerd[1567]: time="2025-05-15T23:41:08.898701495Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:08.906233 containerd[1567]: time="2025-05-15T23:41:08.905907712Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:41:08.906233 containerd[1567]: time="2025-05-15T23:41:08.905953357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:41:08.906233 containerd[1567]: time="2025-05-15T23:41:08.905964518Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:08.906233 containerd[1567]: time="2025-05-15T23:41:08.906038045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:08.942606 containerd[1567]: time="2025-05-15T23:41:08.942041566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wv8rs,Uid:29e030a9-1896-472f-8031-6cce6378509b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\""
May 15 23:41:08.946143 kubelet[2690]: E0515 23:41:08.943302 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:08.946247 containerd[1567]: time="2025-05-15T23:41:08.945417012Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 15 23:41:08.958096 containerd[1567]: time="2025-05-15T23:41:08.958062035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jgvdt,Uid:d6a9ae21-94bf-496a-aaa6-88a6d844d251,Namespace:kube-system,Attempt:0,} returns sandbox id \"ece97e144547a2b72b790e8b3d7bbe1ec29814d4671a8f529fb8d297ea8a8524\""
May 15 23:41:08.958745 kubelet[2690]: E0515 23:41:08.958688 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:08.961903 containerd[1567]: time="2025-05-15T23:41:08.961869203Z" level=info msg="CreateContainer within sandbox \"ece97e144547a2b72b790e8b3d7bbe1ec29814d4671a8f529fb8d297ea8a8524\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 15 23:41:08.978888 containerd[1567]: time="2025-05-15T23:41:08.978824402Z" level=info msg="CreateContainer within sandbox \"ece97e144547a2b72b790e8b3d7bbe1ec29814d4671a8f529fb8d297ea8a8524\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"26de72ef7ddc04b2a5ef00ab7728a608b6d768f3d89168961c5e32075ab96c8f\""
May 15 23:41:08.979508 containerd[1567]: time="2025-05-15T23:41:08.979475305Z" level=info msg="StartContainer for \"26de72ef7ddc04b2a5ef00ab7728a608b6d768f3d89168961c5e32075ab96c8f\""
May 15 23:41:09.043963 containerd[1567]: time="2025-05-15T23:41:09.043909949Z" level=info msg="StartContainer for \"26de72ef7ddc04b2a5ef00ab7728a608b6d768f3d89168961c5e32075ab96c8f\" returns successfully"
May 15 23:41:09.110550 kubelet[2690]: E0515 23:41:09.110423 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:09.112492 containerd[1567]: time="2025-05-15T23:41:09.112428652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rx9lp,Uid:54afbe1f-8b8b-4054-a909-15bc5da113a6,Namespace:kube-system,Attempt:0,}"
May 15 23:41:09.164392 containerd[1567]: time="2025-05-15T23:41:09.164057010Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:41:09.164392 containerd[1567]: time="2025-05-15T23:41:09.164175381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:41:09.164392 containerd[1567]: time="2025-05-15T23:41:09.164188182Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:09.164392 containerd[1567]: time="2025-05-15T23:41:09.164288951Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:09.224837 containerd[1567]: time="2025-05-15T23:41:09.224746957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-rx9lp,Uid:54afbe1f-8b8b-4054-a909-15bc5da113a6,Namespace:kube-system,Attempt:0,} returns sandbox id \"f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab\""
May 15 23:41:09.226353 kubelet[2690]: E0515 23:41:09.225625 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:09.850180 kubelet[2690]: E0515 23:41:09.849994 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:12.304000 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1925077542.mount: Deactivated successfully.
May 15 23:41:13.570729 kubelet[2690]: E0515 23:41:13.570519 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:13.580161 containerd[1567]: time="2025-05-15T23:41:13.580050674Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:41:13.585958 containerd[1567]: time="2025-05-15T23:41:13.585893263Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 15 23:41:13.588673 containerd[1567]: time="2025-05-15T23:41:13.588612542Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 4.643157926s"
May 15 23:41:13.588673 containerd[1567]: time="2025-05-15T23:41:13.588655065Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 15 23:41:13.591012 containerd[1567]: time="2025-05-15T23:41:13.590231661Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:41:13.592352 containerd[1567]: time="2025-05-15T23:41:13.592321214Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 15 23:41:13.593494 containerd[1567]: time="2025-05-15T23:41:13.593462458Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 23:41:13.597697 kubelet[2690]: I0515 23:41:13.596786 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-jgvdt" podStartSLOduration=5.59676542 podStartE2EDuration="5.59676542s" podCreationTimestamp="2025-05-15 23:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:41:09.861206328 +0000 UTC m=+7.125892405" watchObservedRunningTime="2025-05-15 23:41:13.59676542 +0000 UTC m=+10.861451577"
May 15 23:41:13.633365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1258219118.mount: Deactivated successfully.
May 15 23:41:13.634990 containerd[1567]: time="2025-05-15T23:41:13.634869015Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\""
May 15 23:41:13.639350 containerd[1567]: time="2025-05-15T23:41:13.639303500Z" level=info msg="StartContainer for \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\""
May 15 23:41:13.687257 containerd[1567]: time="2025-05-15T23:41:13.687195292Z" level=info msg="StartContainer for \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\" returns successfully"
May 15 23:41:13.870438 kubelet[2690]: E0515 23:41:13.870280 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:13.903140 containerd[1567]: time="2025-05-15T23:41:13.900566702Z" level=info msg="shim disconnected" id=6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a namespace=k8s.io
May 15 23:41:13.903140 containerd[1567]: time="2025-05-15T23:41:13.903062005Z" level=warning msg="cleaning up after shim disconnected" id=6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a namespace=k8s.io
May 15 23:41:13.903140 containerd[1567]: time="2025-05-15T23:41:13.903075966Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:41:14.630478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a-rootfs.mount: Deactivated successfully.
May 15 23:41:14.766289 kubelet[2690]: E0515 23:41:14.765983 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:14.875457 kubelet[2690]: E0515 23:41:14.875411 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:14.875604 kubelet[2690]: E0515 23:41:14.875578 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:14.884417 containerd[1567]: time="2025-05-15T23:41:14.884309269Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 23:41:14.910491 containerd[1567]: time="2025-05-15T23:41:14.910444206Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\""
May 15 23:41:14.911264 containerd[1567]: time="2025-05-15T23:41:14.911164096Z" level=info msg="StartContainer for \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\""
May 15 23:41:14.962878 containerd[1567]: time="2025-05-15T23:41:14.962771483Z" level=info msg="StartContainer for \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\" returns successfully"
May 15 23:41:14.978848 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 15 23:41:14.979129 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 15 23:41:14.979201 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 15 23:41:14.992905 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 15 23:41:15.029998 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 15 23:41:15.046084 containerd[1567]: time="2025-05-15T23:41:15.046016629Z" level=info msg="shim disconnected" id=749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838 namespace=k8s.io
May 15 23:41:15.046084 containerd[1567]: time="2025-05-15T23:41:15.046071072Z" level=warning msg="cleaning up after shim disconnected" id=749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838 namespace=k8s.io
May 15 23:41:15.046084 containerd[1567]: time="2025-05-15T23:41:15.046082193Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:41:15.278907 containerd[1567]: time="2025-05-15T23:41:15.278857778Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:41:15.279714 containerd[1567]: time="2025-05-15T23:41:15.279513141Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 15 23:41:15.281846 containerd[1567]: time="2025-05-15T23:41:15.280426801Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 15 23:41:15.281928 containerd[1567]: time="2025-05-15T23:41:15.281860575Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.689409112s"
May 15 23:41:15.281928 containerd[1567]: time="2025-05-15T23:41:15.281896698Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 15 23:41:15.284565 containerd[1567]: time="2025-05-15T23:41:15.283555727Z" level=info msg="CreateContainer within sandbox \"f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 15 23:41:15.293010 containerd[1567]: time="2025-05-15T23:41:15.292970588Z" level=info msg="CreateContainer within sandbox \"f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\""
May 15 23:41:15.293433 containerd[1567]: time="2025-05-15T23:41:15.293416577Z" level=info msg="StartContainer for \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\""
May 15 23:41:15.334966 containerd[1567]: time="2025-05-15T23:41:15.334926234Z" level=info msg="StartContainer for \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\" returns successfully"
May 15 23:41:15.879527 kubelet[2690]: E0515 23:41:15.879497 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:15.889003 kubelet[2690]: E0515 23:41:15.888953 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:15.896419 kubelet[2690]: I0515 23:41:15.895064 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-rx9lp" podStartSLOduration=1.839027575 podStartE2EDuration="7.895049637s" podCreationTimestamp="2025-05-15 23:41:08 +0000 UTC" firstStartedPulling="2025-05-15 23:41:09.226482396 +0000 UTC m=+6.491168433" lastFinishedPulling="2025-05-15 23:41:15.282504458 +0000 UTC m=+12.547190495" observedRunningTime="2025-05-15 23:41:15.89417814 +0000 UTC m=+13.158864217" watchObservedRunningTime="2025-05-15 23:41:15.895049637 +0000 UTC m=+13.159735714"
May 15 23:41:15.902346 containerd[1567]: time="2025-05-15T23:41:15.902199308Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 23:41:15.935300 containerd[1567]: time="2025-05-15T23:41:15.933674823Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\""
May 15 23:41:15.935818 containerd[1567]: time="2025-05-15T23:41:15.935778442Z" level=info msg="StartContainer for \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\""
May 15 23:41:16.015348 containerd[1567]: time="2025-05-15T23:41:16.013089538Z" level=info msg="StartContainer for \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\" returns successfully"
May 15 23:41:16.118357 containerd[1567]: time="2025-05-15T23:41:16.118298279Z" level=info msg="shim disconnected" id=aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8 namespace=k8s.io
May 15 23:41:16.118357 containerd[1567]: time="2025-05-15T23:41:16.118353162Z" level=warning msg="cleaning up after shim disconnected" id=aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8 namespace=k8s.io
May 15 23:41:16.118357 containerd[1567]: time="2025-05-15T23:41:16.118361963Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:41:16.630773 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8-rootfs.mount: Deactivated successfully.
May 15 23:41:16.892612 kubelet[2690]: E0515 23:41:16.892197 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:16.892612 kubelet[2690]: E0515 23:41:16.892242 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:16.894196 containerd[1567]: time="2025-05-15T23:41:16.894140730Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 23:41:16.933111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2331363961.mount: Deactivated successfully.
May 15 23:41:16.934798 containerd[1567]: time="2025-05-15T23:41:16.934755951Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\""
May 15 23:41:16.943215 containerd[1567]: time="2025-05-15T23:41:16.943162557Z" level=info msg="StartContainer for \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\""
May 15 23:41:16.989850 containerd[1567]: time="2025-05-15T23:41:16.989734950Z" level=info msg="StartContainer for \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\" returns successfully"
May 15 23:41:17.010723 containerd[1567]: time="2025-05-15T23:41:17.010648508Z" level=info msg="shim disconnected" id=c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a namespace=k8s.io
May 15 23:41:17.010723 containerd[1567]: time="2025-05-15T23:41:17.010716792Z" level=warning msg="cleaning up after shim disconnected" id=c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a namespace=k8s.io
May 15 23:41:17.010723 containerd[1567]: time="2025-05-15T23:41:17.010725912Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:41:17.358054 kubelet[2690]: E0515 23:41:17.357975 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:17.630809 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a-rootfs.mount: Deactivated successfully.
May 15 23:41:17.895294 kubelet[2690]: E0515 23:41:17.895191 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:17.897743 containerd[1567]: time="2025-05-15T23:41:17.897704635Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 23:41:17.911890 containerd[1567]: time="2025-05-15T23:41:17.911849956Z" level=info msg="CreateContainer within sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\""
May 15 23:41:17.912333 containerd[1567]: time="2025-05-15T23:41:17.912303863Z" level=info msg="StartContainer for \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\""
May 15 23:41:17.979141 containerd[1567]: time="2025-05-15T23:41:17.979081469Z" level=info msg="StartContainer for \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\" returns successfully"
May 15 23:41:18.096235 kubelet[2690]: I0515 23:41:18.096030 2690 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 15 23:41:18.206463 kubelet[2690]: I0515 23:41:18.205331 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/df4db3b2-d01d-4421-9a0a-fcbe00729a9e-config-volume\") pod \"coredns-7c65d6cfc9-2dn5q\" (UID: \"df4db3b2-d01d-4421-9a0a-fcbe00729a9e\") " pod="kube-system/coredns-7c65d6cfc9-2dn5q"
May 15 23:41:18.206463 kubelet[2690]: I0515 23:41:18.205379 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rz95\" (UniqueName: \"kubernetes.io/projected/df4db3b2-d01d-4421-9a0a-fcbe00729a9e-kube-api-access-5rz95\") pod \"coredns-7c65d6cfc9-2dn5q\" (UID: \"df4db3b2-d01d-4421-9a0a-fcbe00729a9e\") " pod="kube-system/coredns-7c65d6cfc9-2dn5q"
May 15 23:41:18.206463 kubelet[2690]: I0515 23:41:18.205404 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/31ac3922-5d88-4f89-b3cf-e268954d4108-config-volume\") pod \"coredns-7c65d6cfc9-w46k8\" (UID: \"31ac3922-5d88-4f89-b3cf-e268954d4108\") " pod="kube-system/coredns-7c65d6cfc9-w46k8"
May 15 23:41:18.206463 kubelet[2690]: I0515 23:41:18.205427 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g5jcv\" (UniqueName: \"kubernetes.io/projected/31ac3922-5d88-4f89-b3cf-e268954d4108-kube-api-access-g5jcv\") pod \"coredns-7c65d6cfc9-w46k8\" (UID: \"31ac3922-5d88-4f89-b3cf-e268954d4108\") " pod="kube-system/coredns-7c65d6cfc9-w46k8"
May 15 23:41:18.430091 kubelet[2690]: E0515 23:41:18.430034 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:18.430999 kubelet[2690]: E0515 23:41:18.430963 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:18.432335 containerd[1567]: time="2025-05-15T23:41:18.432293909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w46k8,Uid:31ac3922-5d88-4f89-b3cf-e268954d4108,Namespace:kube-system,Attempt:0,}"
May 15 23:41:18.432414 containerd[1567]: time="2025-05-15T23:41:18.432352592Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2dn5q,Uid:df4db3b2-d01d-4421-9a0a-fcbe00729a9e,Namespace:kube-system,Attempt:0,}"
May 15 23:41:18.900123 kubelet[2690]: E0515 23:41:18.900079 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:19.055624 update_engine[1554]: I20250515 23:41:19.055553 1554 update_attempter.cc:509] Updating boot flags...
May 15 23:41:19.075152 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3538)
May 15 23:41:19.116258 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (3476)
May 15 23:41:19.906212 kubelet[2690]: E0515 23:41:19.904430 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:20.212071 systemd-networkd[1230]: cilium_host: Link UP
May 15 23:41:20.212234 systemd-networkd[1230]: cilium_net: Link UP
May 15 23:41:20.212238 systemd-networkd[1230]: cilium_net: Gained carrier
May 15 23:41:20.212385 systemd-networkd[1230]: cilium_host: Gained carrier
May 15 23:41:20.297229 systemd-networkd[1230]: cilium_vxlan: Link UP
May 15 23:41:20.297243 systemd-networkd[1230]: cilium_vxlan: Gained carrier
May 15 23:41:20.645160 kernel: NET: Registered PF_ALG protocol family
May 15 23:41:20.656388 systemd-networkd[1230]: cilium_host: Gained IPv6LL
May 15 23:41:20.902715 kubelet[2690]: E0515 23:41:20.902686 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:21.015312 systemd-networkd[1230]: cilium_net: Gained IPv6LL
May 15 23:41:21.237775 systemd-networkd[1230]: lxc_health: Link UP
May 15 23:41:21.242038 systemd-networkd[1230]: lxc_health: Gained carrier
May 15 23:41:21.463279 systemd-networkd[1230]: cilium_vxlan: Gained IPv6LL
May 15 23:41:21.560134 systemd-networkd[1230]: lxce9e5f369f42c: Link UP
May 15 23:41:21.571143 kernel: eth0: renamed from tmpe9d7d
May 15 23:41:21.590162 kernel: eth0: renamed from tmp8d31a
May 15 23:41:21.593542 systemd-networkd[1230]: lxcbb87cf314540: Link UP
May 15 23:41:21.595558 systemd-networkd[1230]: lxce9e5f369f42c: Gained carrier
May 15 23:41:21.598051 systemd-networkd[1230]: lxcbb87cf314540: Gained carrier
May 15 23:41:22.551289 systemd-networkd[1230]: lxc_health: Gained IPv6LL
May 15 23:41:22.861635 kubelet[2690]: E0515 23:41:22.861370 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:22.883310 kubelet[2690]: I0515 23:41:22.883244 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wv8rs" podStartSLOduration=10.235189291 podStartE2EDuration="14.883218366s" podCreationTimestamp="2025-05-15 23:41:08 +0000 UTC" firstStartedPulling="2025-05-15 23:41:08.944111126 +0000 UTC m=+6.208797163" lastFinishedPulling="2025-05-15 23:41:13.592140161 +0000 UTC m=+10.856826238" observedRunningTime="2025-05-15 23:41:18.917510093 +0000 UTC m=+16.182196170" watchObservedRunningTime="2025-05-15 23:41:22.883218366 +0000 UTC m=+20.147904443"
May 15 23:41:23.063247 systemd-networkd[1230]: lxcbb87cf314540: Gained IPv6LL
May 15 23:41:23.231599 kubelet[2690]: I0515 23:41:23.231565 2690 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 15 23:41:23.232055 kubelet[2690]: E0515 23:41:23.232018 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:23.575280 systemd-networkd[1230]: lxce9e5f369f42c: Gained IPv6LL
May 15 23:41:23.913024 kubelet[2690]: E0515 23:41:23.912988 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:25.187456 containerd[1567]: time="2025-05-15T23:41:25.187352708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:41:25.193333 containerd[1567]: time="2025-05-15T23:41:25.193077859Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:41:25.193333 containerd[1567]: time="2025-05-15T23:41:25.193153102Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:25.211104 containerd[1567]: time="2025-05-15T23:41:25.194014257Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:25.231015 containerd[1567]: time="2025-05-15T23:41:25.230922304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 15 23:41:25.231015 containerd[1567]: time="2025-05-15T23:41:25.230976826Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 15 23:41:25.231015 containerd[1567]: time="2025-05-15T23:41:25.230987307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:25.231248 containerd[1567]: time="2025-05-15T23:41:25.231062670Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:41:25.245582 systemd-resolved[1431]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 23:41:25.251861 systemd-resolved[1431]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 15 23:41:25.276085 containerd[1567]: time="2025-05-15T23:41:25.276049563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2dn5q,Uid:df4db3b2-d01d-4421-9a0a-fcbe00729a9e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9d7d7ee42fba3fcca39fc66bb1b6d282338dfb227a5d6dc7654e3ee5639a05c\""
May 15 23:41:25.276634 kubelet[2690]: E0515 23:41:25.276611 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:25.279442 containerd[1567]: time="2025-05-15T23:41:25.279413178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-w46k8,Uid:31ac3922-5d88-4f89-b3cf-e268954d4108,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d31af403aaa694de018a4bbe30fb09e0539f484af72a87c74f5ba50a3e08c74\""
May 15 23:41:25.279750 containerd[1567]: time="2025-05-15T23:41:25.279628147Z" level=info msg="CreateContainer within sandbox \"e9d7d7ee42fba3fcca39fc66bb1b6d282338dfb227a5d6dc7654e3ee5639a05c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 23:41:25.280359 kubelet[2690]: E0515 23:41:25.280336 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:25.282717 containerd[1567]: time="2025-05-15T23:41:25.282614307Z" level=info msg="CreateContainer within sandbox \"8d31af403aaa694de018a4bbe30fb09e0539f484af72a87c74f5ba50a3e08c74\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 15 23:41:25.291775 containerd[1567]: time="2025-05-15T23:41:25.291719674Z" level=info msg="CreateContainer within sandbox \"e9d7d7ee42fba3fcca39fc66bb1b6d282338dfb227a5d6dc7654e3ee5639a05c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"08d28479cb1c28b1edd9ae653269bee1fd068d7a7945194e321b34534eb66e5e\""
May 15 23:41:25.292318 containerd[1567]: time="2025-05-15T23:41:25.292284977Z" level=info msg="StartContainer for \"08d28479cb1c28b1edd9ae653269bee1fd068d7a7945194e321b34534eb66e5e\""
May 15 23:41:25.298010 containerd[1567]: time="2025-05-15T23:41:25.297973286Z" level=info msg="CreateContainer within sandbox \"8d31af403aaa694de018a4bbe30fb09e0539f484af72a87c74f5ba50a3e08c74\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"532f9a13ea9370dc4234cdc0de90dd33524d855d0f19eda2aca821e95bcd8237\""
May 15 23:41:25.298566 containerd[1567]: time="2025-05-15T23:41:25.298542109Z" level=info msg="StartContainer for \"532f9a13ea9370dc4234cdc0de90dd33524d855d0f19eda2aca821e95bcd8237\""
May 15 23:41:25.346232 containerd[1567]: time="2025-05-15T23:41:25.346193149Z" level=info msg="StartContainer for \"532f9a13ea9370dc4234cdc0de90dd33524d855d0f19eda2aca821e95bcd8237\" returns successfully"
May 15 23:41:25.346454 containerd[1567]: time="2025-05-15T23:41:25.346231311Z" level=info msg="StartContainer for \"08d28479cb1c28b1edd9ae653269bee1fd068d7a7945194e321b34534eb66e5e\" returns successfully"
May 15 23:41:25.918053 kubelet[2690]: E0515 23:41:25.917813 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:25.921433 kubelet[2690]: E0515 23:41:25.920820 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:25.943599 kubelet[2690]: I0515 23:41:25.943499 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2dn5q" podStartSLOduration=17.943481099 podStartE2EDuration="17.943481099s" podCreationTimestamp="2025-05-15 23:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:41:25.931064159 +0000 UTC m=+23.195750236" watchObservedRunningTime="2025-05-15 23:41:25.943481099 +0000 UTC m=+23.208167176"
May 15 23:41:25.957179 kubelet[2690]: I0515 23:41:25.957122 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-w46k8" podStartSLOduration=17.957097048 podStartE2EDuration="17.957097048s" podCreationTimestamp="2025-05-15 23:41:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:41:25.957020685 +0000 UTC m=+23.221706762" watchObservedRunningTime="2025-05-15 23:41:25.957097048 +0000 UTC m=+23.221783125"
May 15 23:41:26.921907 kubelet[2690]: E0515 23:41:26.921877 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:26.923234 kubelet[2690]: E0515 23:41:26.921932 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:27.924162 kubelet[2690]: E0515 23:41:27.923815 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:27.924162 kubelet[2690]: E0515 23:41:27.923866 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:41:29.846381 systemd[1]: Started sshd@7-10.0.0.43:22-10.0.0.1:41418.service - OpenSSH per-connection server daemon (10.0.0.1:41418).
May 15 23:41:29.894357 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 41418 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:41:29.896171 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:41:29.900157 systemd-logind[1546]: New session 8 of user core.
May 15 23:41:29.910508 systemd[1]: Started session-8.scope - Session 8 of User core.
May 15 23:41:30.045444 sshd[4096]: Connection closed by 10.0.0.1 port 41418
May 15 23:41:30.046137 sshd-session[4093]: pam_unix(sshd:session): session closed for user core
May 15 23:41:30.049338 systemd-logind[1546]: Session 8 logged out. Waiting for processes to exit.
May 15 23:41:30.049454 systemd[1]: sshd@7-10.0.0.43:22-10.0.0.1:41418.service: Deactivated successfully.
May 15 23:41:30.051931 systemd[1]: session-8.scope: Deactivated successfully.
May 15 23:41:30.052642 systemd-logind[1546]: Removed session 8.
May 15 23:41:35.058403 systemd[1]: Started sshd@8-10.0.0.43:22-10.0.0.1:40344.service - OpenSSH per-connection server daemon (10.0.0.1:40344).
May 15 23:41:35.101572 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 40344 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:41:35.102886 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:41:35.109922 systemd-logind[1546]: New session 9 of user core.
May 15 23:41:35.119391 systemd[1]: Started session-9.scope - Session 9 of User core.
May 15 23:41:35.255671 sshd[4114]: Connection closed by 10.0.0.1 port 40344
May 15 23:41:35.255998 sshd-session[4111]: pam_unix(sshd:session): session closed for user core
May 15 23:41:35.259342 systemd[1]: sshd@8-10.0.0.43:22-10.0.0.1:40344.service: Deactivated successfully.
May 15 23:41:35.261277 systemd-logind[1546]: Session 9 logged out. Waiting for processes to exit.
May 15 23:41:35.261410 systemd[1]: session-9.scope: Deactivated successfully.
May 15 23:41:35.262212 systemd-logind[1546]: Removed session 9.
May 15 23:41:40.277376 systemd[1]: Started sshd@9-10.0.0.43:22-10.0.0.1:40352.service - OpenSSH per-connection server daemon (10.0.0.1:40352).
May 15 23:41:40.320359 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 40352 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:41:40.321713 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:41:40.326198 systemd-logind[1546]: New session 10 of user core.
May 15 23:41:40.336360 systemd[1]: Started session-10.scope - Session 10 of User core.
May 15 23:41:40.451998 sshd[4133]: Connection closed by 10.0.0.1 port 40352
May 15 23:41:40.452584 sshd-session[4130]: pam_unix(sshd:session): session closed for user core
May 15 23:41:40.455739 systemd[1]: sshd@9-10.0.0.43:22-10.0.0.1:40352.service: Deactivated successfully.
May 15 23:41:40.457815 systemd-logind[1546]: Session 10 logged out. Waiting for processes to exit.
May 15 23:41:40.457870 systemd[1]: session-10.scope: Deactivated successfully.
May 15 23:41:40.459297 systemd-logind[1546]: Removed session 10.
May 15 23:41:45.461369 systemd[1]: Started sshd@10-10.0.0.43:22-10.0.0.1:60458.service - OpenSSH per-connection server daemon (10.0.0.1:60458).
May 15 23:41:45.501576 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 60458 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok
May 15 23:41:45.502843 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 15 23:41:45.506379 systemd-logind[1546]: New session 11 of user core.
May 15 23:41:45.512419 systemd[1]: Started session-11.scope - Session 11 of User core.
May 15 23:41:45.628534 sshd[4151]: Connection closed by 10.0.0.1 port 60458 May 15 23:41:45.628922 sshd-session[4148]: pam_unix(sshd:session): session closed for user core May 15 23:41:45.638363 systemd[1]: Started sshd@11-10.0.0.43:22-10.0.0.1:60470.service - OpenSSH per-connection server daemon (10.0.0.1:60470). May 15 23:41:45.638761 systemd[1]: sshd@10-10.0.0.43:22-10.0.0.1:60458.service: Deactivated successfully. May 15 23:41:45.641384 systemd-logind[1546]: Session 11 logged out. Waiting for processes to exit. May 15 23:41:45.641497 systemd[1]: session-11.scope: Deactivated successfully. May 15 23:41:45.644044 systemd-logind[1546]: Removed session 11. May 15 23:41:45.680130 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 60470 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:41:45.681578 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:41:45.685480 systemd-logind[1546]: New session 12 of user core. May 15 23:41:45.692345 systemd[1]: Started session-12.scope - Session 12 of User core. May 15 23:41:45.837724 sshd[4167]: Connection closed by 10.0.0.1 port 60470 May 15 23:41:45.839245 sshd-session[4161]: pam_unix(sshd:session): session closed for user core May 15 23:41:45.849260 systemd[1]: Started sshd@12-10.0.0.43:22-10.0.0.1:60476.service - OpenSSH per-connection server daemon (10.0.0.1:60476). May 15 23:41:45.849653 systemd[1]: sshd@11-10.0.0.43:22-10.0.0.1:60470.service: Deactivated successfully. May 15 23:41:45.853284 systemd-logind[1546]: Session 12 logged out. Waiting for processes to exit. May 15 23:41:45.857523 systemd[1]: session-12.scope: Deactivated successfully. May 15 23:41:45.862904 systemd-logind[1546]: Removed session 12. 
May 15 23:41:45.901984 sshd[4174]: Accepted publickey for core from 10.0.0.1 port 60476 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:41:45.903512 sshd-session[4174]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:41:45.909345 systemd-logind[1546]: New session 13 of user core. May 15 23:41:45.920443 systemd[1]: Started session-13.scope - Session 13 of User core. May 15 23:41:46.038874 sshd[4180]: Connection closed by 10.0.0.1 port 60476 May 15 23:41:46.039275 sshd-session[4174]: pam_unix(sshd:session): session closed for user core May 15 23:41:46.042885 systemd[1]: sshd@12-10.0.0.43:22-10.0.0.1:60476.service: Deactivated successfully. May 15 23:41:46.045638 systemd[1]: session-13.scope: Deactivated successfully. May 15 23:41:46.046664 systemd-logind[1546]: Session 13 logged out. Waiting for processes to exit. May 15 23:41:46.047599 systemd-logind[1546]: Removed session 13. May 15 23:41:51.050393 systemd[1]: Started sshd@13-10.0.0.43:22-10.0.0.1:60486.service - OpenSSH per-connection server daemon (10.0.0.1:60486). May 15 23:41:51.095354 sshd[4192]: Accepted publickey for core from 10.0.0.1 port 60486 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:41:51.095790 sshd-session[4192]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:41:51.100016 systemd-logind[1546]: New session 14 of user core. May 15 23:41:51.108395 systemd[1]: Started session-14.scope - Session 14 of User core. May 15 23:41:51.239984 sshd[4195]: Connection closed by 10.0.0.1 port 60486 May 15 23:41:51.240478 sshd-session[4192]: pam_unix(sshd:session): session closed for user core May 15 23:41:51.254382 systemd[1]: Started sshd@14-10.0.0.43:22-10.0.0.1:60498.service - OpenSSH per-connection server daemon (10.0.0.1:60498). May 15 23:41:51.254764 systemd[1]: sshd@13-10.0.0.43:22-10.0.0.1:60486.service: Deactivated successfully. 
May 15 23:41:51.257512 systemd-logind[1546]: Session 14 logged out. Waiting for processes to exit. May 15 23:41:51.257787 systemd[1]: session-14.scope: Deactivated successfully. May 15 23:41:51.259956 systemd-logind[1546]: Removed session 14. May 15 23:41:51.326822 sshd[4204]: Accepted publickey for core from 10.0.0.1 port 60498 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:41:51.328063 sshd-session[4204]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:41:51.331804 systemd-logind[1546]: New session 15 of user core. May 15 23:41:51.341429 systemd[1]: Started session-15.scope - Session 15 of User core. May 15 23:41:51.550675 sshd[4210]: Connection closed by 10.0.0.1 port 60498 May 15 23:41:51.551868 sshd-session[4204]: pam_unix(sshd:session): session closed for user core May 15 23:41:51.562415 systemd[1]: Started sshd@15-10.0.0.43:22-10.0.0.1:60506.service - OpenSSH per-connection server daemon (10.0.0.1:60506). May 15 23:41:51.562846 systemd[1]: sshd@14-10.0.0.43:22-10.0.0.1:60498.service: Deactivated successfully. May 15 23:41:51.565876 systemd[1]: session-15.scope: Deactivated successfully. May 15 23:41:51.568135 systemd-logind[1546]: Session 15 logged out. Waiting for processes to exit. May 15 23:41:51.569232 systemd-logind[1546]: Removed session 15. May 15 23:41:51.620158 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 60506 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:41:51.621157 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:41:51.625646 systemd-logind[1546]: New session 16 of user core. May 15 23:41:51.635705 systemd[1]: Started session-16.scope - Session 16 of User core. 
May 15 23:41:52.998203 sshd[4223]: Connection closed by 10.0.0.1 port 60506 May 15 23:41:52.999524 sshd-session[4217]: pam_unix(sshd:session): session closed for user core May 15 23:41:53.013821 systemd[1]: Started sshd@16-10.0.0.43:22-10.0.0.1:37906.service - OpenSSH per-connection server daemon (10.0.0.1:37906). May 15 23:41:53.017451 systemd[1]: sshd@15-10.0.0.43:22-10.0.0.1:60506.service: Deactivated successfully. May 15 23:41:53.021442 systemd-logind[1546]: Session 16 logged out. Waiting for processes to exit. May 15 23:41:53.022190 systemd[1]: session-16.scope: Deactivated successfully. May 15 23:41:53.024660 systemd-logind[1546]: Removed session 16. May 15 23:41:53.061739 sshd[4237]: Accepted publickey for core from 10.0.0.1 port 37906 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:41:53.062998 sshd-session[4237]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:41:53.067277 systemd-logind[1546]: New session 17 of user core. May 15 23:41:53.077365 systemd[1]: Started session-17.scope - Session 17 of User core. May 15 23:41:53.306744 sshd[4243]: Connection closed by 10.0.0.1 port 37906 May 15 23:41:53.308038 sshd-session[4237]: pam_unix(sshd:session): session closed for user core May 15 23:41:53.314416 systemd[1]: Started sshd@17-10.0.0.43:22-10.0.0.1:37920.service - OpenSSH per-connection server daemon (10.0.0.1:37920). May 15 23:41:53.314885 systemd[1]: sshd@16-10.0.0.43:22-10.0.0.1:37906.service: Deactivated successfully. May 15 23:41:53.317076 systemd-logind[1546]: Session 17 logged out. Waiting for processes to exit. May 15 23:41:53.317877 systemd[1]: session-17.scope: Deactivated successfully. May 15 23:41:53.320969 systemd-logind[1546]: Removed session 17. 
May 15 23:41:53.357185 sshd[4251]: Accepted publickey for core from 10.0.0.1 port 37920 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:41:53.358513 sshd-session[4251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:41:53.362817 systemd-logind[1546]: New session 18 of user core. May 15 23:41:53.369479 systemd[1]: Started session-18.scope - Session 18 of User core. May 15 23:41:53.479841 sshd[4257]: Connection closed by 10.0.0.1 port 37920 May 15 23:41:53.480233 sshd-session[4251]: pam_unix(sshd:session): session closed for user core May 15 23:41:53.483436 systemd[1]: sshd@17-10.0.0.43:22-10.0.0.1:37920.service: Deactivated successfully. May 15 23:41:53.485920 systemd-logind[1546]: Session 18 logged out. Waiting for processes to exit. May 15 23:41:53.486746 systemd[1]: session-18.scope: Deactivated successfully. May 15 23:41:53.490225 systemd-logind[1546]: Removed session 18. May 15 23:41:58.491417 systemd[1]: Started sshd@18-10.0.0.43:22-10.0.0.1:37922.service - OpenSSH per-connection server daemon (10.0.0.1:37922). May 15 23:41:58.533516 sshd[4274]: Accepted publickey for core from 10.0.0.1 port 37922 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:41:58.534801 sshd-session[4274]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:41:58.539238 systemd-logind[1546]: New session 19 of user core. May 15 23:41:58.546432 systemd[1]: Started session-19.scope - Session 19 of User core. May 15 23:41:58.662818 sshd[4277]: Connection closed by 10.0.0.1 port 37922 May 15 23:41:58.663427 sshd-session[4274]: pam_unix(sshd:session): session closed for user core May 15 23:41:58.666638 systemd[1]: sshd@18-10.0.0.43:22-10.0.0.1:37922.service: Deactivated successfully. May 15 23:41:58.668777 systemd-logind[1546]: Session 19 logged out. Waiting for processes to exit. May 15 23:41:58.669146 systemd[1]: session-19.scope: Deactivated successfully. 
May 15 23:41:58.670199 systemd-logind[1546]: Removed session 19. May 15 23:42:03.672487 systemd[1]: Started sshd@19-10.0.0.43:22-10.0.0.1:40250.service - OpenSSH per-connection server daemon (10.0.0.1:40250). May 15 23:42:03.715547 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 40250 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:42:03.716900 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:42:03.720959 systemd-logind[1546]: New session 20 of user core. May 15 23:42:03.734503 systemd[1]: Started session-20.scope - Session 20 of User core. May 15 23:42:03.852677 sshd[4294]: Connection closed by 10.0.0.1 port 40250 May 15 23:42:03.853893 sshd-session[4291]: pam_unix(sshd:session): session closed for user core May 15 23:42:03.857166 systemd[1]: sshd@19-10.0.0.43:22-10.0.0.1:40250.service: Deactivated successfully. May 15 23:42:03.860777 systemd-logind[1546]: Session 20 logged out. Waiting for processes to exit. May 15 23:42:03.863055 systemd[1]: session-20.scope: Deactivated successfully. May 15 23:42:03.865362 systemd-logind[1546]: Removed session 20. May 15 23:42:08.869435 systemd[1]: Started sshd@20-10.0.0.43:22-10.0.0.1:40254.service - OpenSSH per-connection server daemon (10.0.0.1:40254). May 15 23:42:08.915696 sshd[4307]: Accepted publickey for core from 10.0.0.1 port 40254 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:42:08.917301 sshd-session[4307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:42:08.922387 systemd-logind[1546]: New session 21 of user core. May 15 23:42:08.932528 systemd[1]: Started session-21.scope - Session 21 of User core. 
May 15 23:42:09.061863 sshd[4311]: Connection closed by 10.0.0.1 port 40254 May 15 23:42:09.063042 sshd-session[4307]: pam_unix(sshd:session): session closed for user core May 15 23:42:09.073575 systemd[1]: Started sshd@21-10.0.0.43:22-10.0.0.1:40264.service - OpenSSH per-connection server daemon (10.0.0.1:40264). May 15 23:42:09.074032 systemd[1]: sshd@20-10.0.0.43:22-10.0.0.1:40254.service: Deactivated successfully. May 15 23:42:09.081241 systemd[1]: session-21.scope: Deactivated successfully. May 15 23:42:09.081295 systemd-logind[1546]: Session 21 logged out. Waiting for processes to exit. May 15 23:42:09.084706 systemd-logind[1546]: Removed session 21. May 15 23:42:09.123666 sshd[4320]: Accepted publickey for core from 10.0.0.1 port 40264 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:42:09.125600 sshd-session[4320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:42:09.130362 systemd-logind[1546]: New session 22 of user core. May 15 23:42:09.145667 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 15 23:42:11.231493 containerd[1567]: time="2025-05-15T23:42:11.231452031Z" level=info msg="StopContainer for \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\" with timeout 30 (s)" May 15 23:42:11.234667 containerd[1567]: time="2025-05-15T23:42:11.234433970Z" level=info msg="Stop container \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\" with signal terminated" May 15 23:42:11.268439 containerd[1567]: time="2025-05-15T23:42:11.268293650Z" level=info msg="StopContainer for \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\" with timeout 2 (s)" May 15 23:42:11.268817 containerd[1567]: time="2025-05-15T23:42:11.268794526Z" level=info msg="Stop container \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\" with signal terminated" May 15 23:42:11.271671 containerd[1567]: time="2025-05-15T23:42:11.271624666Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 15 23:42:11.275282 systemd-networkd[1230]: lxc_health: Link DOWN May 15 23:42:11.275289 systemd-networkd[1230]: lxc_health: Lost carrier May 15 23:42:11.279745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c-rootfs.mount: Deactivated successfully. 
May 15 23:42:11.295253 containerd[1567]: time="2025-05-15T23:42:11.294130787Z" level=info msg="shim disconnected" id=27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c namespace=k8s.io May 15 23:42:11.295253 containerd[1567]: time="2025-05-15T23:42:11.294183546Z" level=warning msg="cleaning up after shim disconnected" id=27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c namespace=k8s.io May 15 23:42:11.295253 containerd[1567]: time="2025-05-15T23:42:11.294191986Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:42:11.343646 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994-rootfs.mount: Deactivated successfully. May 15 23:42:11.347829 containerd[1567]: time="2025-05-15T23:42:11.347770287Z" level=info msg="shim disconnected" id=82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994 namespace=k8s.io May 15 23:42:11.347829 containerd[1567]: time="2025-05-15T23:42:11.347827206Z" level=warning msg="cleaning up after shim disconnected" id=82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994 namespace=k8s.io May 15 23:42:11.347829 containerd[1567]: time="2025-05-15T23:42:11.347841846Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:42:11.357615 containerd[1567]: time="2025-05-15T23:42:11.357463258Z" level=info msg="StopContainer for \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\" returns successfully" May 15 23:42:11.359938 containerd[1567]: time="2025-05-15T23:42:11.359893921Z" level=info msg="StopPodSandbox for \"f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab\"" May 15 23:42:11.365014 containerd[1567]: time="2025-05-15T23:42:11.364794526Z" level=info msg="Container to stop \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:42:11.367514 containerd[1567]: 
time="2025-05-15T23:42:11.366911111Z" level=info msg="StopContainer for \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\" returns successfully" May 15 23:42:11.367514 containerd[1567]: time="2025-05-15T23:42:11.367418947Z" level=info msg="StopPodSandbox for \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\"" May 15 23:42:11.367514 containerd[1567]: time="2025-05-15T23:42:11.367450947Z" level=info msg="Container to stop \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:42:11.367514 containerd[1567]: time="2025-05-15T23:42:11.367463667Z" level=info msg="Container to stop \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:42:11.367514 containerd[1567]: time="2025-05-15T23:42:11.367472267Z" level=info msg="Container to stop \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:42:11.367514 containerd[1567]: time="2025-05-15T23:42:11.367481227Z" level=info msg="Container to stop \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:42:11.367514 containerd[1567]: time="2025-05-15T23:42:11.367489107Z" level=info msg="Container to stop \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 15 23:42:11.369772 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab-shm.mount: Deactivated successfully. May 15 23:42:11.372359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19-shm.mount: Deactivated successfully. 
May 15 23:42:11.396867 containerd[1567]: time="2025-05-15T23:42:11.395989025Z" level=info msg="shim disconnected" id=d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19 namespace=k8s.io May 15 23:42:11.396867 containerd[1567]: time="2025-05-15T23:42:11.396053585Z" level=warning msg="cleaning up after shim disconnected" id=d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19 namespace=k8s.io May 15 23:42:11.396867 containerd[1567]: time="2025-05-15T23:42:11.396062705Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:42:11.420521 containerd[1567]: time="2025-05-15T23:42:11.419422099Z" level=info msg="shim disconnected" id=f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab namespace=k8s.io May 15 23:42:11.420521 containerd[1567]: time="2025-05-15T23:42:11.419482819Z" level=warning msg="cleaning up after shim disconnected" id=f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab namespace=k8s.io May 15 23:42:11.420521 containerd[1567]: time="2025-05-15T23:42:11.419491938Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 15 23:42:11.421537 containerd[1567]: time="2025-05-15T23:42:11.421510564Z" level=info msg="TearDown network for sandbox \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" successfully" May 15 23:42:11.421625 containerd[1567]: time="2025-05-15T23:42:11.421612283Z" level=info msg="StopPodSandbox for \"d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19\" returns successfully" May 15 23:42:11.439041 containerd[1567]: time="2025-05-15T23:42:11.438995760Z" level=info msg="TearDown network for sandbox \"f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab\" successfully" May 15 23:42:11.439374 containerd[1567]: time="2025-05-15T23:42:11.439244399Z" level=info msg="StopPodSandbox for \"f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab\" returns successfully" May 15 23:42:11.540874 kubelet[2690]: I0515 23:42:11.540750 2690 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-etc-cni-netd\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.540874 kubelet[2690]: I0515 23:42:11.540792 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-lib-modules\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.540874 kubelet[2690]: I0515 23:42:11.540813 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-bpf-maps\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.540874 kubelet[2690]: I0515 23:42:11.540829 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cni-path\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.540874 kubelet[2690]: I0515 23:42:11.540846 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-host-proc-sys-net\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.540874 kubelet[2690]: I0515 23:42:11.540869 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cilium-cgroup\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: 
\"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.541504 kubelet[2690]: I0515 23:42:11.540893 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4f8nr\" (UniqueName: \"kubernetes.io/projected/54afbe1f-8b8b-4054-a909-15bc5da113a6-kube-api-access-4f8nr\") pod \"54afbe1f-8b8b-4054-a909-15bc5da113a6\" (UID: \"54afbe1f-8b8b-4054-a909-15bc5da113a6\") " May 15 23:42:11.541504 kubelet[2690]: I0515 23:42:11.540915 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29e030a9-1896-472f-8031-6cce6378509b-clustermesh-secrets\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.541504 kubelet[2690]: I0515 23:42:11.540930 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-xtables-lock\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.541504 kubelet[2690]: I0515 23:42:11.540944 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cilium-run\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.541504 kubelet[2690]: I0515 23:42:11.540960 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54afbe1f-8b8b-4054-a909-15bc5da113a6-cilium-config-path\") pod \"54afbe1f-8b8b-4054-a909-15bc5da113a6\" (UID: \"54afbe1f-8b8b-4054-a909-15bc5da113a6\") " May 15 23:42:11.541504 kubelet[2690]: I0515 23:42:11.540975 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-host-proc-sys-kernel\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.541665 kubelet[2690]: I0515 23:42:11.540988 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-hostproc\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.541665 kubelet[2690]: I0515 23:42:11.541007 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29e030a9-1896-472f-8031-6cce6378509b-cilium-config-path\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.541665 kubelet[2690]: I0515 23:42:11.541035 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29e030a9-1896-472f-8031-6cce6378509b-hubble-tls\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.541665 kubelet[2690]: I0515 23:42:11.541053 2690 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h78mv\" (UniqueName: \"kubernetes.io/projected/29e030a9-1896-472f-8031-6cce6378509b-kube-api-access-h78mv\") pod \"29e030a9-1896-472f-8031-6cce6378509b\" (UID: \"29e030a9-1896-472f-8031-6cce6378509b\") " May 15 23:42:11.548776 kubelet[2690]: I0515 23:42:11.546359 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.548776 kubelet[2690]: I0515 23:42:11.546443 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.548776 kubelet[2690]: I0515 23:42:11.546464 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.548776 kubelet[2690]: I0515 23:42:11.546536 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cni-path" (OuterVolumeSpecName: "cni-path") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.548776 kubelet[2690]: I0515 23:42:11.546580 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.549008 kubelet[2690]: I0515 23:42:11.546598 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.549008 kubelet[2690]: I0515 23:42:11.546612 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.549008 kubelet[2690]: I0515 23:42:11.548537 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/54afbe1f-8b8b-4054-a909-15bc5da113a6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "54afbe1f-8b8b-4054-a909-15bc5da113a6" (UID: "54afbe1f-8b8b-4054-a909-15bc5da113a6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 23:42:11.549008 kubelet[2690]: I0515 23:42:11.548604 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/29e030a9-1896-472f-8031-6cce6378509b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" May 15 23:42:11.549008 kubelet[2690]: I0515 23:42:11.548619 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-hostproc" (OuterVolumeSpecName: "hostproc") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.549162 kubelet[2690]: I0515 23:42:11.548636 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.549162 kubelet[2690]: I0515 23:42:11.548640 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 15 23:42:11.549320 kubelet[2690]: I0515 23:42:11.549264 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e030a9-1896-472f-8031-6cce6378509b-kube-api-access-h78mv" (OuterVolumeSpecName: "kube-api-access-h78mv") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "kube-api-access-h78mv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:42:11.549358 kubelet[2690]: I0515 23:42:11.549322 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29e030a9-1896-472f-8031-6cce6378509b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:42:11.551435 kubelet[2690]: I0515 23:42:11.551392 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/54afbe1f-8b8b-4054-a909-15bc5da113a6-kube-api-access-4f8nr" (OuterVolumeSpecName: "kube-api-access-4f8nr") pod "54afbe1f-8b8b-4054-a909-15bc5da113a6" (UID: "54afbe1f-8b8b-4054-a909-15bc5da113a6"). InnerVolumeSpecName "kube-api-access-4f8nr". PluginName "kubernetes.io/projected", VolumeGidValue "" May 15 23:42:11.553691 kubelet[2690]: I0515 23:42:11.553652 2690 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/29e030a9-1896-472f-8031-6cce6378509b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "29e030a9-1896-472f-8031-6cce6378509b" (UID: "29e030a9-1896-472f-8031-6cce6378509b"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 15 23:42:11.642043 kubelet[2690]: I0515 23:42:11.641988 2690 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642043 kubelet[2690]: I0515 23:42:11.642032 2690 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cilium-run\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642043 kubelet[2690]: I0515 23:42:11.642042 2690 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/54afbe1f-8b8b-4054-a909-15bc5da113a6-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642043 kubelet[2690]: I0515 23:42:11.642054 2690 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642278 kubelet[2690]: I0515 23:42:11.642064 2690 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-hostproc\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642278 kubelet[2690]: I0515 23:42:11.642073 2690 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/29e030a9-1896-472f-8031-6cce6378509b-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642278 kubelet[2690]: I0515 23:42:11.642106 2690 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/29e030a9-1896-472f-8031-6cce6378509b-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642278 kubelet[2690]: I0515 
23:42:11.642139 2690 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h78mv\" (UniqueName: \"kubernetes.io/projected/29e030a9-1896-472f-8031-6cce6378509b-kube-api-access-h78mv\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642278 kubelet[2690]: I0515 23:42:11.642150 2690 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-lib-modules\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642278 kubelet[2690]: I0515 23:42:11.642157 2690 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642278 kubelet[2690]: I0515 23:42:11.642164 2690 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cni-path\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642278 kubelet[2690]: I0515 23:42:11.642172 2690 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642471 kubelet[2690]: I0515 23:42:11.642185 2690 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642471 kubelet[2690]: I0515 23:42:11.642192 2690 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/29e030a9-1896-472f-8031-6cce6378509b-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642471 kubelet[2690]: I0515 23:42:11.642200 2690 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4f8nr\" (UniqueName: 
\"kubernetes.io/projected/54afbe1f-8b8b-4054-a909-15bc5da113a6-kube-api-access-4f8nr\") on node \"localhost\" DevicePath \"\"" May 15 23:42:11.642471 kubelet[2690]: I0515 23:42:11.642208 2690 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/29e030a9-1896-472f-8031-6cce6378509b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 15 23:42:12.029180 kubelet[2690]: I0515 23:42:12.029148 2690 scope.go:117] "RemoveContainer" containerID="82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994" May 15 23:42:12.031961 containerd[1567]: time="2025-05-15T23:42:12.031821175Z" level=info msg="RemoveContainer for \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\"" May 15 23:42:12.058176 containerd[1567]: time="2025-05-15T23:42:12.058133523Z" level=info msg="RemoveContainer for \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\" returns successfully" May 15 23:42:12.059468 kubelet[2690]: I0515 23:42:12.059355 2690 scope.go:117] "RemoveContainer" containerID="c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a" May 15 23:42:12.060515 containerd[1567]: time="2025-05-15T23:42:12.060476908Z" level=info msg="RemoveContainer for \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\"" May 15 23:42:12.065153 containerd[1567]: time="2025-05-15T23:42:12.065080997Z" level=info msg="RemoveContainer for \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\" returns successfully" May 15 23:42:12.066411 kubelet[2690]: I0515 23:42:12.066372 2690 scope.go:117] "RemoveContainer" containerID="aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8" May 15 23:42:12.067493 containerd[1567]: time="2025-05-15T23:42:12.067466862Z" level=info msg="RemoveContainer for \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\"" May 15 23:42:12.110897 containerd[1567]: time="2025-05-15T23:42:12.110848537Z" level=info 
msg="RemoveContainer for \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\" returns successfully" May 15 23:42:12.111254 kubelet[2690]: I0515 23:42:12.111227 2690 scope.go:117] "RemoveContainer" containerID="749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838" May 15 23:42:12.112441 containerd[1567]: time="2025-05-15T23:42:12.112413407Z" level=info msg="RemoveContainer for \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\"" May 15 23:42:12.180861 containerd[1567]: time="2025-05-15T23:42:12.180818399Z" level=info msg="RemoveContainer for \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\" returns successfully" May 15 23:42:12.181162 kubelet[2690]: I0515 23:42:12.181140 2690 scope.go:117] "RemoveContainer" containerID="6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a" May 15 23:42:12.182223 containerd[1567]: time="2025-05-15T23:42:12.182196389Z" level=info msg="RemoveContainer for \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\"" May 15 23:42:12.189404 containerd[1567]: time="2025-05-15T23:42:12.189341543Z" level=info msg="RemoveContainer for \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\" returns successfully" May 15 23:42:12.189716 kubelet[2690]: I0515 23:42:12.189684 2690 scope.go:117] "RemoveContainer" containerID="82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994" May 15 23:42:12.189951 containerd[1567]: time="2025-05-15T23:42:12.189908139Z" level=error msg="ContainerStatus for \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\": not found" May 15 23:42:12.196677 kubelet[2690]: E0515 23:42:12.196447 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\": not found" containerID="82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994" May 15 23:42:12.196677 kubelet[2690]: I0515 23:42:12.196488 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994"} err="failed to get container status \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\": rpc error: code = NotFound desc = an error occurred when try to find container \"82c77277f3eb82ffa83a9733ba32416d7049963b847a4022de17abedbb5d7994\": not found" May 15 23:42:12.196677 kubelet[2690]: I0515 23:42:12.196576 2690 scope.go:117] "RemoveContainer" containerID="c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a" May 15 23:42:12.196851 containerd[1567]: time="2025-05-15T23:42:12.196805494Z" level=error msg="ContainerStatus for \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\": not found" May 15 23:42:12.196976 kubelet[2690]: E0515 23:42:12.196938 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\": not found" containerID="c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a" May 15 23:42:12.196976 kubelet[2690]: I0515 23:42:12.196968 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a"} err="failed to get container status \"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"c210bd04653fde8a2f1ac3b67d9f71db5c6f68a4e5c7f7cb05640349a3514c4a\": not found" May 15 23:42:12.196976 kubelet[2690]: I0515 23:42:12.196992 2690 scope.go:117] "RemoveContainer" containerID="aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8" May 15 23:42:12.197414 kubelet[2690]: E0515 23:42:12.197339 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\": not found" containerID="aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8" May 15 23:42:12.197414 kubelet[2690]: I0515 23:42:12.197360 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8"} err="failed to get container status \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\": rpc error: code = NotFound desc = an error occurred when try to find container \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\": not found" May 15 23:42:12.197414 kubelet[2690]: I0515 23:42:12.197374 2690 scope.go:117] "RemoveContainer" containerID="749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838" May 15 23:42:12.197519 containerd[1567]: time="2025-05-15T23:42:12.197204331Z" level=error msg="ContainerStatus for \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"aff04963990ee8057f617955095806c8a01314cc695b823f4092e403370305a8\": not found" May 15 23:42:12.197578 containerd[1567]: time="2025-05-15T23:42:12.197536129Z" level=error msg="ContainerStatus for \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container 
\"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\": not found" May 15 23:42:12.197814 kubelet[2690]: E0515 23:42:12.197701 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\": not found" containerID="749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838" May 15 23:42:12.197814 kubelet[2690]: I0515 23:42:12.197729 2690 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838"} err="failed to get container status \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\": rpc error: code = NotFound desc = an error occurred when try to find container \"749c4f1f5a6c08e3d87f1a0f29bb2027f580774281e4e8932f899275daed4838\": not found" May 15 23:42:12.197814 kubelet[2690]: I0515 23:42:12.197745 2690 scope.go:117] "RemoveContainer" containerID="6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a" May 15 23:42:12.197982 containerd[1567]: time="2025-05-15T23:42:12.197916766Z" level=error msg="ContainerStatus for \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\": not found" May 15 23:42:12.198056 kubelet[2690]: E0515 23:42:12.198037 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\": not found" containerID="6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a" May 15 23:42:12.198102 kubelet[2690]: I0515 23:42:12.198058 2690 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a"} err="failed to get container status \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e3789a87248ded98bfb848031a9adb14c9e32ed2acbe801aafe0e620622509a\": not found" May 15 23:42:12.198102 kubelet[2690]: I0515 23:42:12.198073 2690 scope.go:117] "RemoveContainer" containerID="27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c" May 15 23:42:12.198977 containerd[1567]: time="2025-05-15T23:42:12.198951000Z" level=info msg="RemoveContainer for \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\"" May 15 23:42:12.201399 containerd[1567]: time="2025-05-15T23:42:12.201360744Z" level=info msg="RemoveContainer for \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\" returns successfully" May 15 23:42:12.201584 kubelet[2690]: I0515 23:42:12.201562 2690 scope.go:117] "RemoveContainer" containerID="27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c" May 15 23:42:12.201911 containerd[1567]: time="2025-05-15T23:42:12.201880980Z" level=error msg="ContainerStatus for \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\": not found" May 15 23:42:12.202096 kubelet[2690]: E0515 23:42:12.202057 2690 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\": not found" containerID="27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c" May 15 23:42:12.202096 kubelet[2690]: I0515 23:42:12.202088 2690 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c"} err="failed to get container status \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\": rpc error: code = NotFound desc = an error occurred when try to find container \"27aa5f7a5087dd8355e18ffbecf282294e5b292e09b0559b9a4a89853c0ba24c\": not found" May 15 23:42:12.249154 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f06dba274d785278483d2886a5da5dd1f1eb023a714818f65a621a2efbfb0cab-rootfs.mount: Deactivated successfully. May 15 23:42:12.249305 systemd[1]: var-lib-kubelet-pods-54afbe1f\x2d8b8b\x2d4054\x2da909\x2d15bc5da113a6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4f8nr.mount: Deactivated successfully. May 15 23:42:12.249400 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d66b165869cb9648fd48205dcc3e7654a428dbbb4e073835380026e2a1f9ec19-rootfs.mount: Deactivated successfully. May 15 23:42:12.249474 systemd[1]: var-lib-kubelet-pods-29e030a9\x2d1896\x2d472f\x2d8031\x2d6cce6378509b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh78mv.mount: Deactivated successfully. May 15 23:42:12.249562 systemd[1]: var-lib-kubelet-pods-29e030a9\x2d1896\x2d472f\x2d8031\x2d6cce6378509b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 15 23:42:12.249644 systemd[1]: var-lib-kubelet-pods-29e030a9\x2d1896\x2d472f\x2d8031\x2d6cce6378509b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
May 15 23:42:12.831399 kubelet[2690]: I0515 23:42:12.831247 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29e030a9-1896-472f-8031-6cce6378509b" path="/var/lib/kubelet/pods/29e030a9-1896-472f-8031-6cce6378509b/volumes" May 15 23:42:12.831812 kubelet[2690]: I0515 23:42:12.831791 2690 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="54afbe1f-8b8b-4054-a909-15bc5da113a6" path="/var/lib/kubelet/pods/54afbe1f-8b8b-4054-a909-15bc5da113a6/volumes" May 15 23:42:12.878829 kubelet[2690]: E0515 23:42:12.878784 2690 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 15 23:42:13.166418 sshd[4326]: Connection closed by 10.0.0.1 port 40264 May 15 23:42:13.167235 sshd-session[4320]: pam_unix(sshd:session): session closed for user core May 15 23:42:13.173606 systemd[1]: Started sshd@22-10.0.0.43:22-10.0.0.1:38344.service - OpenSSH per-connection server daemon (10.0.0.1:38344). May 15 23:42:13.174193 systemd[1]: sshd@21-10.0.0.43:22-10.0.0.1:40264.service: Deactivated successfully. May 15 23:42:13.179307 systemd-logind[1546]: Session 22 logged out. Waiting for processes to exit. May 15 23:42:13.179425 systemd[1]: session-22.scope: Deactivated successfully. May 15 23:42:13.183852 systemd-logind[1546]: Removed session 22. May 15 23:42:13.215907 sshd[4488]: Accepted publickey for core from 10.0.0.1 port 38344 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:42:13.217602 sshd-session[4488]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:42:13.221437 systemd-logind[1546]: New session 23 of user core. May 15 23:42:13.232441 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 15 23:42:13.904529 sshd[4494]: Connection closed by 10.0.0.1 port 38344 May 15 23:42:13.908867 sshd-session[4488]: pam_unix(sshd:session): session closed for user core May 15 23:42:13.923271 kubelet[2690]: E0515 23:42:13.923229 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29e030a9-1896-472f-8031-6cce6378509b" containerName="apply-sysctl-overwrites" May 15 23:42:13.923271 kubelet[2690]: E0515 23:42:13.923261 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="54afbe1f-8b8b-4054-a909-15bc5da113a6" containerName="cilium-operator" May 15 23:42:13.923271 kubelet[2690]: E0515 23:42:13.923268 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29e030a9-1896-472f-8031-6cce6378509b" containerName="mount-bpf-fs" May 15 23:42:13.923271 kubelet[2690]: E0515 23:42:13.923275 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29e030a9-1896-472f-8031-6cce6378509b" containerName="mount-cgroup" May 15 23:42:13.923271 kubelet[2690]: E0515 23:42:13.923281 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29e030a9-1896-472f-8031-6cce6378509b" containerName="clean-cilium-state" May 15 23:42:13.923271 kubelet[2690]: E0515 23:42:13.923286 2690 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="29e030a9-1896-472f-8031-6cce6378509b" containerName="cilium-agent" May 15 23:42:13.926508 kubelet[2690]: I0515 23:42:13.923311 2690 memory_manager.go:354] "RemoveStaleState removing state" podUID="54afbe1f-8b8b-4054-a909-15bc5da113a6" containerName="cilium-operator" May 15 23:42:13.926508 kubelet[2690]: I0515 23:42:13.923318 2690 memory_manager.go:354] "RemoveStaleState removing state" podUID="29e030a9-1896-472f-8031-6cce6378509b" containerName="cilium-agent" May 15 23:42:13.926416 systemd[1]: Started sshd@23-10.0.0.43:22-10.0.0.1:38350.service - OpenSSH per-connection server daemon (10.0.0.1:38350). 
May 15 23:42:13.926839 systemd[1]: sshd@22-10.0.0.43:22-10.0.0.1:38344.service: Deactivated successfully. May 15 23:42:13.928428 systemd[1]: session-23.scope: Deactivated successfully. May 15 23:42:13.933401 systemd-logind[1546]: Session 23 logged out. Waiting for processes to exit. May 15 23:42:13.940796 systemd-logind[1546]: Removed session 23. May 15 23:42:13.974157 sshd[4503]: Accepted publickey for core from 10.0.0.1 port 38350 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:42:13.975028 sshd-session[4503]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:42:13.979218 systemd-logind[1546]: New session 24 of user core. May 15 23:42:13.989510 systemd[1]: Started session-24.scope - Session 24 of User core. May 15 23:42:14.039168 sshd[4508]: Connection closed by 10.0.0.1 port 38350 May 15 23:42:14.039727 sshd-session[4503]: pam_unix(sshd:session): session closed for user core May 15 23:42:14.060361 kubelet[2690]: I0515 23:42:14.059957 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-hostproc\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060361 kubelet[2690]: I0515 23:42:14.060013 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-clustermesh-secrets\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060361 kubelet[2690]: I0515 23:42:14.060036 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-lib-modules\") pod \"cilium-mpcbp\" (UID: 
\"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060361 kubelet[2690]: I0515 23:42:14.060054 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-cilium-config-path\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060361 kubelet[2690]: I0515 23:42:14.060077 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-host-proc-sys-kernel\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060361 kubelet[2690]: I0515 23:42:14.060093 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-hubble-tls\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060617 kubelet[2690]: I0515 23:42:14.060109 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sn68p\" (UniqueName: \"kubernetes.io/projected/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-kube-api-access-sn68p\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060617 kubelet[2690]: I0515 23:42:14.060140 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-etc-cni-netd\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060617 
kubelet[2690]: I0515 23:42:14.060155 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-xtables-lock\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060617 kubelet[2690]: I0515 23:42:14.060175 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-cilium-cgroup\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060617 kubelet[2690]: I0515 23:42:14.060195 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-host-proc-sys-net\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060617 kubelet[2690]: I0515 23:42:14.060213 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-cni-path\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060757 kubelet[2690]: I0515 23:42:14.060229 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-cilium-ipsec-secrets\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060757 kubelet[2690]: I0515 23:42:14.060245 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-cilium-run\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.060757 kubelet[2690]: I0515 23:42:14.060261 2690 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e0be1f12-3038-49d5-ae4d-dfbfc6fff69c-bpf-maps\") pod \"cilium-mpcbp\" (UID: \"e0be1f12-3038-49d5-ae4d-dfbfc6fff69c\") " pod="kube-system/cilium-mpcbp" May 15 23:42:14.062439 systemd[1]: Started sshd@24-10.0.0.43:22-10.0.0.1:38364.service - OpenSSH per-connection server daemon (10.0.0.1:38364). May 15 23:42:14.062909 systemd[1]: sshd@23-10.0.0.43:22-10.0.0.1:38350.service: Deactivated successfully. May 15 23:42:14.064649 systemd[1]: session-24.scope: Deactivated successfully. May 15 23:42:14.066748 systemd-logind[1546]: Session 24 logged out. Waiting for processes to exit. May 15 23:42:14.067829 systemd-logind[1546]: Removed session 24. May 15 23:42:14.105741 sshd[4511]: Accepted publickey for core from 10.0.0.1 port 38364 ssh2: RSA SHA256:gmt3ErEAFpcoTNZjCS2EQ7u5MuXyZAbf7EZ0clvImok May 15 23:42:14.107467 sshd-session[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 15 23:42:14.111390 systemd-logind[1546]: New session 25 of user core. May 15 23:42:14.123558 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 15 23:42:14.231884 kubelet[2690]: E0515 23:42:14.231852 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 15 23:42:14.233297 containerd[1567]: time="2025-05-15T23:42:14.232821411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mpcbp,Uid:e0be1f12-3038-49d5-ae4d-dfbfc6fff69c,Namespace:kube-system,Attempt:0,}" May 15 23:42:14.256829 containerd[1567]: time="2025-05-15T23:42:14.256733358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 15 23:42:14.256829 containerd[1567]: time="2025-05-15T23:42:14.256785318Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 15 23:42:14.256829 containerd[1567]: time="2025-05-15T23:42:14.256796758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 15 23:42:14.257110 containerd[1567]: time="2025-05-15T23:42:14.256868918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 15 23:42:14.303010 containerd[1567]: time="2025-05-15T23:42:14.302966662Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mpcbp,Uid:e0be1f12-3038-49d5-ae4d-dfbfc6fff69c,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\""
May 15 23:42:14.303756 kubelet[2690]: E0515 23:42:14.303731 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:14.306558 containerd[1567]: time="2025-05-15T23:42:14.306526802Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 15 23:42:14.319593 containerd[1567]: time="2025-05-15T23:42:14.319474570Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"12f684d50df74911f0ba5e63e10155fe708277346bb14875bd9871f4206bce0a\""
May 15 23:42:14.320731 containerd[1567]: time="2025-05-15T23:42:14.320035767Z" level=info msg="StartContainer for \"12f684d50df74911f0ba5e63e10155fe708277346bb14875bd9871f4206bce0a\""
May 15 23:42:14.370878 containerd[1567]: time="2025-05-15T23:42:14.370822126Z" level=info msg="StartContainer for \"12f684d50df74911f0ba5e63e10155fe708277346bb14875bd9871f4206bce0a\" returns successfully"
May 15 23:42:14.417658 containerd[1567]: time="2025-05-15T23:42:14.417515147Z" level=info msg="shim disconnected" id=12f684d50df74911f0ba5e63e10155fe708277346bb14875bd9871f4206bce0a namespace=k8s.io
May 15 23:42:14.417658 containerd[1567]: time="2025-05-15T23:42:14.417568546Z" level=warning msg="cleaning up after shim disconnected" id=12f684d50df74911f0ba5e63e10155fe708277346bb14875bd9871f4206bce0a namespace=k8s.io
May 15 23:42:14.417658 containerd[1567]: time="2025-05-15T23:42:14.417577426Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:42:14.651634 kubelet[2690]: I0515 23:42:14.651183 2690 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-15T23:42:14Z","lastTransitionTime":"2025-05-15T23:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 15 23:42:15.041655 kubelet[2690]: E0515 23:42:15.041612 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:15.050367 containerd[1567]: time="2025-05-15T23:42:15.050317660Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 15 23:42:15.069099 containerd[1567]: time="2025-05-15T23:42:15.069037365Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"1d462fd249485c3ebc937927181945f9493aedfd7a6511ec9f26c66e78f7eae5\""
May 15 23:42:15.069843 containerd[1567]: time="2025-05-15T23:42:15.069798721Z" level=info msg="StartContainer for \"1d462fd249485c3ebc937927181945f9493aedfd7a6511ec9f26c66e78f7eae5\""
May 15 23:42:15.138621 containerd[1567]: time="2025-05-15T23:42:15.138581492Z" level=info msg="StartContainer for \"1d462fd249485c3ebc937927181945f9493aedfd7a6511ec9f26c66e78f7eae5\" returns successfully"
May 15 23:42:15.178138 containerd[1567]: time="2025-05-15T23:42:15.177892933Z" level=info msg="shim disconnected" id=1d462fd249485c3ebc937927181945f9493aedfd7a6511ec9f26c66e78f7eae5 namespace=k8s.io
May 15 23:42:15.178138 containerd[1567]: time="2025-05-15T23:42:15.177946253Z" level=warning msg="cleaning up after shim disconnected" id=1d462fd249485c3ebc937927181945f9493aedfd7a6511ec9f26c66e78f7eae5 namespace=k8s.io
May 15 23:42:15.178138 containerd[1567]: time="2025-05-15T23:42:15.177957573Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:42:16.045430 kubelet[2690]: E0515 23:42:16.044950 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:16.048432 containerd[1567]: time="2025-05-15T23:42:16.048276745Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 15 23:42:16.066741 containerd[1567]: time="2025-05-15T23:42:16.066700581Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"430afce9797674e6a2590013255113c6883c6b4ed4d8a373419ef4dfffd4084e\""
May 15 23:42:16.068964 containerd[1567]: time="2025-05-15T23:42:16.067316938Z" level=info msg="StartContainer for \"430afce9797674e6a2590013255113c6883c6b4ed4d8a373419ef4dfffd4084e\""
May 15 23:42:16.144209 containerd[1567]: time="2025-05-15T23:42:16.144169664Z" level=info msg="StartContainer for \"430afce9797674e6a2590013255113c6883c6b4ed4d8a373419ef4dfffd4084e\" returns successfully"
May 15 23:42:16.165234 containerd[1567]: time="2025-05-15T23:42:16.165040168Z" level=info msg="shim disconnected" id=430afce9797674e6a2590013255113c6883c6b4ed4d8a373419ef4dfffd4084e namespace=k8s.io
May 15 23:42:16.165234 containerd[1567]: time="2025-05-15T23:42:16.165091528Z" level=warning msg="cleaning up after shim disconnected" id=430afce9797674e6a2590013255113c6883c6b4ed4d8a373419ef4dfffd4084e namespace=k8s.io
May 15 23:42:16.165234 containerd[1567]: time="2025-05-15T23:42:16.165100488Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:42:16.166707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-430afce9797674e6a2590013255113c6883c6b4ed4d8a373419ef4dfffd4084e-rootfs.mount: Deactivated successfully.
May 15 23:42:17.049111 kubelet[2690]: E0515 23:42:17.049060 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:17.055331 containerd[1567]: time="2025-05-15T23:42:17.055280217Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 15 23:42:17.075605 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3797593483.mount: Deactivated successfully.
May 15 23:42:17.077197 containerd[1567]: time="2025-05-15T23:42:17.077057046Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3155d378ec5349f21b0ee3e81b934bfe93bc0809d79d6fe9a8b89f7859994ea3\""
May 15 23:42:17.077930 containerd[1567]: time="2025-05-15T23:42:17.077897523Z" level=info msg="StartContainer for \"3155d378ec5349f21b0ee3e81b934bfe93bc0809d79d6fe9a8b89f7859994ea3\""
May 15 23:42:17.129559 containerd[1567]: time="2025-05-15T23:42:17.129520029Z" level=info msg="StartContainer for \"3155d378ec5349f21b0ee3e81b934bfe93bc0809d79d6fe9a8b89f7859994ea3\" returns successfully"
May 15 23:42:17.154702 containerd[1567]: time="2025-05-15T23:42:17.154493445Z" level=info msg="shim disconnected" id=3155d378ec5349f21b0ee3e81b934bfe93bc0809d79d6fe9a8b89f7859994ea3 namespace=k8s.io
May 15 23:42:17.154702 containerd[1567]: time="2025-05-15T23:42:17.154548205Z" level=warning msg="cleaning up after shim disconnected" id=3155d378ec5349f21b0ee3e81b934bfe93bc0809d79d6fe9a8b89f7859994ea3 namespace=k8s.io
May 15 23:42:17.154702 containerd[1567]: time="2025-05-15T23:42:17.154556005Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 15 23:42:17.166672 systemd[1]: run-containerd-runc-k8s.io-3155d378ec5349f21b0ee3e81b934bfe93bc0809d79d6fe9a8b89f7859994ea3-runc.Dfha0V.mount: Deactivated successfully.
May 15 23:42:17.166813 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3155d378ec5349f21b0ee3e81b934bfe93bc0809d79d6fe9a8b89f7859994ea3-rootfs.mount: Deactivated successfully.
May 15 23:42:17.879900 kubelet[2690]: E0515 23:42:17.879855 2690 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 15 23:42:18.061447 kubelet[2690]: E0515 23:42:18.061235 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:18.066181 containerd[1567]: time="2025-05-15T23:42:18.066144332Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 15 23:42:18.077996 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3195826067.mount: Deactivated successfully.
May 15 23:42:18.080316 containerd[1567]: time="2025-05-15T23:42:18.080264839Z" level=info msg="CreateContainer within sandbox \"0ec7130214b95e9a694fcd4cc8384f691260d960ce3dea54f1571cbd195b1c57\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"fbe67b88eb8043864adf79176600d6668859c4efdd67a67cbb32d2fa6e85aa01\""
May 15 23:42:18.080922 containerd[1567]: time="2025-05-15T23:42:18.080829917Z" level=info msg="StartContainer for \"fbe67b88eb8043864adf79176600d6668859c4efdd67a67cbb32d2fa6e85aa01\""
May 15 23:42:18.138794 containerd[1567]: time="2025-05-15T23:42:18.138693022Z" level=info msg="StartContainer for \"fbe67b88eb8043864adf79176600d6668859c4efdd67a67cbb32d2fa6e85aa01\" returns successfully"
May 15 23:42:18.446166 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 15 23:42:19.067441 kubelet[2690]: E0515 23:42:19.067385 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:19.098853 kubelet[2690]: I0515 23:42:19.098755 2690 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-mpcbp" podStartSLOduration=6.098735741 podStartE2EDuration="6.098735741s" podCreationTimestamp="2025-05-15 23:42:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-15 23:42:19.097280506 +0000 UTC m=+76.361966623" watchObservedRunningTime="2025-05-15 23:42:19.098735741 +0000 UTC m=+76.363421818"
May 15 23:42:20.233613 kubelet[2690]: E0515 23:42:20.233537 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:21.460591 systemd-networkd[1230]: lxc_health: Link UP
May 15 23:42:21.474457 systemd-networkd[1230]: lxc_health: Gained carrier
May 15 23:42:22.233555 kubelet[2690]: E0515 23:42:22.233011 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:22.645975 systemd[1]: run-containerd-runc-k8s.io-fbe67b88eb8043864adf79176600d6668859c4efdd67a67cbb32d2fa6e85aa01-runc.9ZsKAW.mount: Deactivated successfully.
May 15 23:42:23.075361 kubelet[2690]: E0515 23:42:23.075087 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:23.416538 systemd-networkd[1230]: lxc_health: Gained IPv6LL
May 15 23:42:24.076616 kubelet[2690]: E0515 23:42:24.076574 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:28.831705 kubelet[2690]: E0515 23:42:28.831239 2690 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 15 23:42:29.110359 sshd[4517]: Connection closed by 10.0.0.1 port 38364
May 15 23:42:29.111855 sshd-session[4511]: pam_unix(sshd:session): session closed for user core
May 15 23:42:29.115613 systemd[1]: sshd@24-10.0.0.43:22-10.0.0.1:38364.service: Deactivated successfully.
May 15 23:42:29.118160 systemd-logind[1546]: Session 25 logged out. Waiting for processes to exit.
May 15 23:42:29.118460 systemd[1]: session-25.scope: Deactivated successfully.
May 15 23:42:29.119651 systemd-logind[1546]: Removed session 25.