May 14 05:09:01.811911 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 14 05:09:01.811931 kernel: Linux version 6.12.20-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Wed May 14 03:42:50 -00 2025
May 14 05:09:01.811940 kernel: KASLR enabled
May 14 05:09:01.811945 kernel: efi: EFI v2.7 by EDK II
May 14 05:09:01.811951 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 14 05:09:01.811956 kernel: random: crng init done
May 14 05:09:01.811963 kernel: secureboot: Secure boot disabled
May 14 05:09:01.811968 kernel: ACPI: Early table checksum verification disabled
May 14 05:09:01.811974 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 14 05:09:01.811981 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 14 05:09:01.811986 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:01.811992 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:01.811997 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:01.812003 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:01.812010 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:01.812017 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:01.812023 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:01.812029 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:01.812047 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 14 05:09:01.812053 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 14 05:09:01.812059 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 14 05:09:01.812065 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 14 05:09:01.812071 kernel: NODE_DATA(0) allocated [mem 0xdc964dc0-0xdc96bfff]
May 14 05:09:01.812077 kernel: Zone ranges:
May 14 05:09:01.812083 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 14 05:09:01.812090 kernel: DMA32 empty
May 14 05:09:01.812096 kernel: Normal empty
May 14 05:09:01.812102 kernel: Device empty
May 14 05:09:01.812108 kernel: Movable zone start for each node
May 14 05:09:01.812114 kernel: Early memory node ranges
May 14 05:09:01.812120 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 14 05:09:01.812126 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 14 05:09:01.812132 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 14 05:09:01.812147 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 14 05:09:01.812153 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 14 05:09:01.812159 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 14 05:09:01.812165 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 14 05:09:01.812173 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 14 05:09:01.812179 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 14 05:09:01.812185 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 14 05:09:01.812195 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 14 05:09:01.812201 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 14 05:09:01.812208 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 14 05:09:01.812216 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 14 05:09:01.812222 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 14 05:09:01.812229 kernel: psci: probing for conduit method from ACPI.
May 14 05:09:01.812235 kernel: psci: PSCIv1.1 detected in firmware.
May 14 05:09:01.812241 kernel: psci: Using standard PSCI v0.2 function IDs
May 14 05:09:01.812248 kernel: psci: Trusted OS migration not required
May 14 05:09:01.812254 kernel: psci: SMC Calling Convention v1.1
May 14 05:09:01.812260 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 14 05:09:01.812267 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 14 05:09:01.812273 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 14 05:09:01.812281 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 14 05:09:01.812287 kernel: Detected PIPT I-cache on CPU0
May 14 05:09:01.812294 kernel: CPU features: detected: GIC system register CPU interface
May 14 05:09:01.812300 kernel: CPU features: detected: Spectre-v4
May 14 05:09:01.812306 kernel: CPU features: detected: Spectre-BHB
May 14 05:09:01.812313 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 14 05:09:01.812319 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 14 05:09:01.812325 kernel: CPU features: detected: ARM erratum 1418040
May 14 05:09:01.812332 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 14 05:09:01.812338 kernel: alternatives: applying boot alternatives
May 14 05:09:01.812345 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=121c9a3653fd599e6c6b931638a08771d538e77e97aff08e06f2cb7bca392d8e
May 14 05:09:01.812353 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 14 05:09:01.812360 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 14 05:09:01.812366 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 14 05:09:01.812373 kernel: Fallback order for Node 0: 0
May 14 05:09:01.812379 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 14 05:09:01.812386 kernel: Policy zone: DMA
May 14 05:09:01.812393 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 14 05:09:01.812399 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 14 05:09:01.812405 kernel: software IO TLB: area num 4.
May 14 05:09:01.812412 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 14 05:09:01.812418 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 14 05:09:01.812424 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 14 05:09:01.812432 kernel: rcu: Preemptible hierarchical RCU implementation.
May 14 05:09:01.812439 kernel: rcu: RCU event tracing is enabled.
May 14 05:09:01.812445 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 14 05:09:01.812452 kernel: Trampoline variant of Tasks RCU enabled.
May 14 05:09:01.812459 kernel: Tracing variant of Tasks RCU enabled.
May 14 05:09:01.812465 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 14 05:09:01.812471 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 14 05:09:01.812478 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 05:09:01.812484 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 14 05:09:01.812491 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 14 05:09:01.812514 kernel: GICv3: 256 SPIs implemented
May 14 05:09:01.812523 kernel: GICv3: 0 Extended SPIs implemented
May 14 05:09:01.812530 kernel: Root IRQ handler: gic_handle_irq
May 14 05:09:01.812536 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 14 05:09:01.812542 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 14 05:09:01.812548 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 14 05:09:01.812555 kernel: ITS [mem 0x08080000-0x0809ffff]
May 14 05:09:01.812561 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400e0000 (indirect, esz 8, psz 64K, shr 1)
May 14 05:09:01.812568 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400f0000 (flat, esz 8, psz 64K, shr 1)
May 14 05:09:01.812574 kernel: GICv3: using LPI property table @0x0000000040100000
May 14 05:09:01.812580 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 14 05:09:01.812587 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 14 05:09:01.812593 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 05:09:01.812601 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 14 05:09:01.812608 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 14 05:09:01.812614 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 14 05:09:01.812621 kernel: arm-pv: using stolen time PV
May 14 05:09:01.812627 kernel: Console: colour dummy device 80x25
May 14 05:09:01.812634 kernel: ACPI: Core revision 20240827
May 14 05:09:01.812641 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 14 05:09:01.812648 kernel: pid_max: default: 32768 minimum: 301
May 14 05:09:01.812654 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 14 05:09:01.812662 kernel: landlock: Up and running.
May 14 05:09:01.812668 kernel: SELinux: Initializing.
May 14 05:09:01.812674 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 05:09:01.812681 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 14 05:09:01.812688 kernel: rcu: Hierarchical SRCU implementation.
May 14 05:09:01.812694 kernel: rcu: Max phase no-delay instances is 400.
May 14 05:09:01.812701 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 14 05:09:01.812707 kernel: Remapping and enabling EFI services.
May 14 05:09:01.812714 kernel: smp: Bringing up secondary CPUs ...
May 14 05:09:01.812720 kernel: Detected PIPT I-cache on CPU1
May 14 05:09:01.812733 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 14 05:09:01.812740 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 14 05:09:01.812748 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 05:09:01.812755 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 14 05:09:01.812761 kernel: Detected PIPT I-cache on CPU2
May 14 05:09:01.812768 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 14 05:09:01.812775 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 14 05:09:01.812784 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 05:09:01.812791 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 14 05:09:01.812798 kernel: Detected PIPT I-cache on CPU3
May 14 05:09:01.812804 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 14 05:09:01.812811 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 14 05:09:01.812818 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 14 05:09:01.812825 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 14 05:09:01.812831 kernel: smp: Brought up 1 node, 4 CPUs
May 14 05:09:01.812838 kernel: SMP: Total of 4 processors activated.
May 14 05:09:01.812845 kernel: CPU: All CPU(s) started at EL1
May 14 05:09:01.812853 kernel: CPU features: detected: 32-bit EL0 Support
May 14 05:09:01.812860 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 14 05:09:01.812867 kernel: CPU features: detected: Common not Private translations
May 14 05:09:01.812874 kernel: CPU features: detected: CRC32 instructions
May 14 05:09:01.812881 kernel: CPU features: detected: Enhanced Virtualization Traps
May 14 05:09:01.812888 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 14 05:09:01.812894 kernel: CPU features: detected: LSE atomic instructions
May 14 05:09:01.812901 kernel: CPU features: detected: Privileged Access Never
May 14 05:09:01.812908 kernel: CPU features: detected: RAS Extension Support
May 14 05:09:01.812916 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 14 05:09:01.812923 kernel: alternatives: applying system-wide alternatives
May 14 05:09:01.812930 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 14 05:09:01.812937 kernel: Memory: 2440980K/2572288K available (11072K kernel code, 2276K rwdata, 8928K rodata, 39424K init, 1034K bss, 125540K reserved, 0K cma-reserved)
May 14 05:09:01.812945 kernel: devtmpfs: initialized
May 14 05:09:01.812952 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 14 05:09:01.812959 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 14 05:09:01.812966 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 14 05:09:01.812973 kernel: 0 pages in range for non-PLT usage
May 14 05:09:01.812980 kernel: 508544 pages in range for PLT usage
May 14 05:09:01.812987 kernel: pinctrl core: initialized pinctrl subsystem
May 14 05:09:01.812994 kernel: SMBIOS 3.0.0 present.
May 14 05:09:01.813001 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 14 05:09:01.813008 kernel: DMI: Memory slots populated: 1/1
May 14 05:09:01.813015 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 14 05:09:01.813022 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 14 05:09:01.813029 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 14 05:09:01.813036 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 14 05:09:01.813044 kernel: audit: initializing netlink subsys (disabled)
May 14 05:09:01.813051 kernel: audit: type=2000 audit(0.033:1): state=initialized audit_enabled=0 res=1
May 14 05:09:01.813058 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 14 05:09:01.813065 kernel: cpuidle: using governor menu
May 14 05:09:01.813072 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 14 05:09:01.813079 kernel: ASID allocator initialised with 32768 entries
May 14 05:09:01.813086 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 14 05:09:01.813093 kernel: Serial: AMBA PL011 UART driver
May 14 05:09:01.813100 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 14 05:09:01.813108 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 14 05:09:01.813115 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 14 05:09:01.813122 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 14 05:09:01.813129 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 14 05:09:01.813141 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 14 05:09:01.813148 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 14 05:09:01.813155 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 14 05:09:01.813162 kernel: ACPI: Added _OSI(Module Device)
May 14 05:09:01.813169 kernel: ACPI: Added _OSI(Processor Device)
May 14 05:09:01.813177 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 14 05:09:01.813184 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 14 05:09:01.813191 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 14 05:09:01.813216 kernel: ACPI: Interpreter enabled
May 14 05:09:01.813230 kernel: ACPI: Using GIC for interrupt routing
May 14 05:09:01.813237 kernel: ACPI: MCFG table detected, 1 entries
May 14 05:09:01.813244 kernel: ACPI: CPU0 has been hot-added
May 14 05:09:01.813251 kernel: ACPI: CPU1 has been hot-added
May 14 05:09:01.813258 kernel: ACPI: CPU2 has been hot-added
May 14 05:09:01.813264 kernel: ACPI: CPU3 has been hot-added
May 14 05:09:01.813273 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 14 05:09:01.813280 kernel: printk: legacy console [ttyAMA0] enabled
May 14 05:09:01.813287 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 14 05:09:01.813418 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 14 05:09:01.813483 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 14 05:09:01.813579 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 14 05:09:01.813644 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 14 05:09:01.813706 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 14 05:09:01.813715 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 14 05:09:01.813722 kernel: PCI host bridge to bus 0000:00
May 14 05:09:01.813787 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 14 05:09:01.813844 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 14 05:09:01.813897 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 14 05:09:01.813950 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 14 05:09:01.814031 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 14 05:09:01.814101 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 14 05:09:01.814174 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 14 05:09:01.814236 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 14 05:09:01.814297 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 14 05:09:01.814357 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 14 05:09:01.814418 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 14 05:09:01.814479 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 14 05:09:01.814545 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 14 05:09:01.814600 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 14 05:09:01.814653 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 14 05:09:01.814662 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 14 05:09:01.814669 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 14 05:09:01.814677 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 14 05:09:01.814685 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 14 05:09:01.814692 kernel: iommu: Default domain type: Translated
May 14 05:09:01.814699 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 14 05:09:01.814707 kernel: efivars: Registered efivars operations
May 14 05:09:01.814714 kernel: vgaarb: loaded
May 14 05:09:01.814721 kernel: clocksource: Switched to clocksource arch_sys_counter
May 14 05:09:01.814727 kernel: VFS: Disk quotas dquot_6.6.0
May 14 05:09:01.814734 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 14 05:09:01.814741 kernel: pnp: PnP ACPI init
May 14 05:09:01.814811 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 14 05:09:01.814821 kernel: pnp: PnP ACPI: found 1 devices
May 14 05:09:01.814828 kernel: NET: Registered PF_INET protocol family
May 14 05:09:01.814835 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 14 05:09:01.814842 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 14 05:09:01.814849 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 14 05:09:01.814856 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 14 05:09:01.814863 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 14 05:09:01.814872 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 14 05:09:01.814879 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 05:09:01.814886 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 14 05:09:01.814893 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 14 05:09:01.814899 kernel: PCI: CLS 0 bytes, default 64
May 14 05:09:01.814906 kernel: kvm [1]: HYP mode not available
May 14 05:09:01.814913 kernel: Initialise system trusted keyrings
May 14 05:09:01.814920 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 14 05:09:01.814927 kernel: Key type asymmetric registered
May 14 05:09:01.814935 kernel: Asymmetric key parser 'x509' registered
May 14 05:09:01.814942 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 14 05:09:01.814949 kernel: io scheduler mq-deadline registered
May 14 05:09:01.814956 kernel: io scheduler kyber registered
May 14 05:09:01.814976 kernel: io scheduler bfq registered
May 14 05:09:01.814983 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 14 05:09:01.814991 kernel: ACPI: button: Power Button [PWRB]
May 14 05:09:01.814998 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 14 05:09:01.815060 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 14 05:09:01.815071 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 14 05:09:01.815078 kernel: thunder_xcv, ver 1.0
May 14 05:09:01.815085 kernel: thunder_bgx, ver 1.0
May 14 05:09:01.815092 kernel: nicpf, ver 1.0
May 14 05:09:01.815099 kernel: nicvf, ver 1.0
May 14 05:09:01.815173 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 14 05:09:01.815232 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-14T05:09:01 UTC (1747199341)
May 14 05:09:01.815241 kernel: hid: raw HID events driver (C) Jiri Kosina
May 14 05:09:01.815250 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 14 05:09:01.815257 kernel: watchdog: NMI not fully supported
May 14 05:09:01.815263 kernel: watchdog: Hard watchdog permanently disabled
May 14 05:09:01.815270 kernel: NET: Registered PF_INET6 protocol family
May 14 05:09:01.815277 kernel: Segment Routing with IPv6
May 14 05:09:01.815284 kernel: In-situ OAM (IOAM) with IPv6
May 14 05:09:01.815291 kernel: NET: Registered PF_PACKET protocol family
May 14 05:09:01.815297 kernel: Key type dns_resolver registered
May 14 05:09:01.815304 kernel: registered taskstats version 1
May 14 05:09:01.815311 kernel: Loading compiled-in X.509 certificates
May 14 05:09:01.815319 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.20-flatcar: 9f54d711faad5edc118c062fcbac248335430a87'
May 14 05:09:01.815326 kernel: Demotion targets for Node 0: null
May 14 05:09:01.815333 kernel: Key type .fscrypt registered
May 14 05:09:01.815340 kernel: Key type fscrypt-provisioning registered
May 14 05:09:01.815346 kernel: ima: No TPM chip found, activating TPM-bypass!
May 14 05:09:01.815353 kernel: ima: Allocated hash algorithm: sha1
May 14 05:09:01.815360 kernel: ima: No architecture policies found
May 14 05:09:01.815367 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 14 05:09:01.815375 kernel: clk: Disabling unused clocks
May 14 05:09:01.815382 kernel: PM: genpd: Disabling unused power domains
May 14 05:09:01.815389 kernel: Warning: unable to open an initial console.
May 14 05:09:01.815396 kernel: Freeing unused kernel memory: 39424K
May 14 05:09:01.815403 kernel: Run /init as init process
May 14 05:09:01.815410 kernel: with arguments:
May 14 05:09:01.815416 kernel: /init
May 14 05:09:01.815423 kernel: with environment:
May 14 05:09:01.815430 kernel: HOME=/
May 14 05:09:01.815436 kernel: TERM=linux
May 14 05:09:01.815444 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 14 05:09:01.815452 systemd[1]: Successfully made /usr/ read-only.
May 14 05:09:01.815462 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 05:09:01.815470 systemd[1]: Detected virtualization kvm.
May 14 05:09:01.815477 systemd[1]: Detected architecture arm64.
May 14 05:09:01.815484 systemd[1]: Running in initrd.
May 14 05:09:01.815491 systemd[1]: No hostname configured, using default hostname.
May 14 05:09:01.815555 systemd[1]: Hostname set to .
May 14 05:09:01.815563 systemd[1]: Initializing machine ID from VM UUID.
May 14 05:09:01.815570 systemd[1]: Queued start job for default target initrd.target.
May 14 05:09:01.815578 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 05:09:01.815586 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 05:09:01.815594 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 14 05:09:01.815601 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 05:09:01.815609 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 14 05:09:01.815619 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 14 05:09:01.815628 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 14 05:09:01.815635 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 14 05:09:01.815643 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 05:09:01.815651 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 05:09:01.815658 systemd[1]: Reached target paths.target - Path Units.
May 14 05:09:01.815666 systemd[1]: Reached target slices.target - Slice Units.
May 14 05:09:01.815674 systemd[1]: Reached target swap.target - Swaps.
May 14 05:09:01.815682 systemd[1]: Reached target timers.target - Timer Units.
May 14 05:09:01.815689 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 14 05:09:01.815697 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 05:09:01.815704 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 14 05:09:01.815712 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 14 05:09:01.815719 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 05:09:01.815727 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 05:09:01.815735 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 05:09:01.815743 systemd[1]: Reached target sockets.target - Socket Units.
May 14 05:09:01.815750 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 14 05:09:01.815758 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 05:09:01.815765 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 14 05:09:01.815773 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 14 05:09:01.815780 systemd[1]: Starting systemd-fsck-usr.service...
May 14 05:09:01.815788 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 05:09:01.815795 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 05:09:01.815804 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 05:09:01.815811 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 14 05:09:01.815819 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 05:09:01.815826 systemd[1]: Finished systemd-fsck-usr.service.
May 14 05:09:01.815835 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 05:09:01.815860 systemd-journald[244]: Collecting audit messages is disabled.
May 14 05:09:01.815881 systemd-journald[244]: Journal started
May 14 05:09:01.815900 systemd-journald[244]: Runtime Journal (/run/log/journal/a7ba8755f5534fbfba120506886a57d3) is 6M, max 48.5M, 42.4M free.
May 14 05:09:01.807413 systemd-modules-load[245]: Inserted module 'overlay'
May 14 05:09:01.817607 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 05:09:01.818687 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:01.819756 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 05:09:01.823578 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 14 05:09:01.825009 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 05:09:01.828945 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 05:09:01.832335 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 14 05:09:01.833074 systemd-modules-load[245]: Inserted module 'br_netfilter'
May 14 05:09:01.834771 kernel: Bridge firewalling registered
May 14 05:09:01.836674 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 05:09:01.839435 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 05:09:01.842345 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 14 05:09:01.843651 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 05:09:01.845554 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 05:09:01.852724 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 05:09:01.853747 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 05:09:01.856351 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 14 05:09:01.858474 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 05:09:01.887595 dracut-cmdline[285]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=121c9a3653fd599e6c6b931638a08771d538e77e97aff08e06f2cb7bca392d8e
May 14 05:09:01.903066 systemd-resolved[286]: Positive Trust Anchors:
May 14 05:09:01.903082 systemd-resolved[286]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 05:09:01.903118 systemd-resolved[286]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 05:09:01.907780 systemd-resolved[286]: Defaulting to hostname 'linux'.
May 14 05:09:01.908693 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 05:09:01.912125 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 05:09:01.960521 kernel: SCSI subsystem initialized
May 14 05:09:01.964519 kernel: Loading iSCSI transport class v2.0-870.
May 14 05:09:01.973534 kernel: iscsi: registered transport (tcp)
May 14 05:09:01.984523 kernel: iscsi: registered transport (qla4xxx)
May 14 05:09:01.984539 kernel: QLogic iSCSI HBA Driver
May 14 05:09:02.001778 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 05:09:02.024468 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 05:09:02.026321 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 05:09:02.069280 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 14 05:09:02.071351 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 14 05:09:02.133519 kernel: raid6: neonx8 gen() 15767 MB/s
May 14 05:09:02.150521 kernel: raid6: neonx4 gen() 15766 MB/s
May 14 05:09:02.167521 kernel: raid6: neonx2 gen() 13168 MB/s
May 14 05:09:02.184526 kernel: raid6: neonx1 gen() 10434 MB/s
May 14 05:09:02.201518 kernel: raid6: int64x8 gen() 6889 MB/s
May 14 05:09:02.218527 kernel: raid6: int64x4 gen() 7349 MB/s
May 14 05:09:02.235520 kernel: raid6: int64x2 gen() 6099 MB/s
May 14 05:09:02.252602 kernel: raid6: int64x1 gen() 5056 MB/s
May 14 05:09:02.252628 kernel: raid6: using algorithm neonx8 gen() 15767 MB/s
May 14 05:09:02.270570 kernel: raid6: .... xor() 12059 MB/s, rmw enabled
May 14 05:09:02.270584 kernel: raid6: using neon recovery algorithm
May 14 05:09:02.275829 kernel: xor: measuring software checksum speed
May 14 05:09:02.275858 kernel: 8regs : 21624 MB/sec
May 14 05:09:02.276521 kernel: 32regs : 21670 MB/sec
May 14 05:09:02.277692 kernel: arm64_neon : 23542 MB/sec
May 14 05:09:02.277704 kernel: xor: using function: arm64_neon (23542 MB/sec)
May 14 05:09:02.333882 kernel: Btrfs loaded, zoned=no, fsverity=no
May 14 05:09:02.340378 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 14 05:09:02.342724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 05:09:02.368547 systemd-udevd[497]: Using default interface naming scheme 'v255'.
May 14 05:09:02.374389 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 05:09:02.376560 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 14 05:09:02.402431 dracut-pre-trigger[505]: rd.md=0: removing MD RAID activation
May 14 05:09:02.423620 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 05:09:02.425584 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 05:09:02.477727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 05:09:02.480045 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 14 05:09:02.525179 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 14 05:09:02.533733 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 14 05:09:02.533834 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 14 05:09:02.533844 kernel: GPT:9289727 != 19775487
May 14 05:09:02.533853 kernel: GPT:Alternate GPT header not at the end of the disk.
May 14 05:09:02.533862 kernel: GPT:9289727 != 19775487
May 14 05:09:02.533871 kernel: GPT: Use GNU Parted to correct GPT errors.
May 14 05:09:02.533885 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 05:09:02.528651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 05:09:02.528755 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:02.532280 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 14 05:09:02.535106 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 14 05:09:02.561042 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 14 05:09:02.562546 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 14 05:09:02.564586 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:02.578089 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 14 05:09:02.589785 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 05:09:02.595965 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 14 05:09:02.597164 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 14 05:09:02.600022 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 05:09:02.602114 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 05:09:02.604079 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 05:09:02.606613 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 14 05:09:02.608193 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 14 05:09:02.630235 disk-uuid[593]: Primary Header is updated.
May 14 05:09:02.630235 disk-uuid[593]: Secondary Entries is updated.
May 14 05:09:02.630235 disk-uuid[593]: Secondary Header is updated.
May 14 05:09:02.633522 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 05:09:02.636002 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 14 05:09:03.645635 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 14 05:09:03.645726 disk-uuid[597]: The operation has completed successfully.
May 14 05:09:03.666001 systemd[1]: disk-uuid.service: Deactivated successfully.
May 14 05:09:03.666101 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 14 05:09:03.694259 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 14 05:09:03.716063 sh[612]: Success
May 14 05:09:03.729365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 14 05:09:03.729408 kernel: device-mapper: uevent: version 1.0.3
May 14 05:09:03.731038 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 14 05:09:03.741554 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 14 05:09:03.771296 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 14 05:09:03.773700 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 14 05:09:03.788673 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 14 05:09:03.795138 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 14 05:09:03.795164 kernel: BTRFS: device fsid 73dd31f4-39c4-4cc0-95ea-0c124bed739c devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (623)
May 14 05:09:03.797707 kernel: BTRFS info (device dm-0): first mount of filesystem 73dd31f4-39c4-4cc0-95ea-0c124bed739c
May 14 05:09:03.797724 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 14 05:09:03.797734 kernel: BTRFS info (device dm-0): using free-space-tree
May 14 05:09:03.801633 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 14 05:09:03.802788 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 14 05:09:03.804161 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 14 05:09:03.804856 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 14 05:09:03.806248 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 14 05:09:03.828858 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (655)
May 14 05:09:03.828901 kernel: BTRFS info (device vda6): first mount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 05:09:03.830078 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 05:09:03.830112 kernel: BTRFS info (device vda6): using free-space-tree
May 14 05:09:03.837512 kernel: BTRFS info (device vda6): last unmount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 05:09:03.838039 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 14 05:09:03.839769 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 14 05:09:03.915464 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 05:09:03.918406 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 05:09:03.959764 systemd-networkd[806]: lo: Link UP
May 14 05:09:03.959778 systemd-networkd[806]: lo: Gained carrier
May 14 05:09:03.960561 systemd-networkd[806]: Enumeration completed
May 14 05:09:03.960889 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 05:09:03.960960 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 05:09:03.960964 systemd-networkd[806]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 05:09:03.961511 systemd-networkd[806]: eth0: Link UP
May 14 05:09:03.961514 systemd-networkd[806]: eth0: Gained carrier
May 14 05:09:03.961523 systemd-networkd[806]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 05:09:03.962459 systemd[1]: Reached target network.target - Network.
May 14 05:09:03.980054 ignition[702]: Ignition 2.21.0
May 14 05:09:03.980070 ignition[702]: Stage: fetch-offline
May 14 05:09:03.980098 ignition[702]: no configs at "/usr/lib/ignition/base.d"
May 14 05:09:03.980106 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 05:09:03.980294 ignition[702]: parsed url from cmdline: ""
May 14 05:09:03.980298 ignition[702]: no config URL provided
May 14 05:09:03.980302 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
May 14 05:09:03.980308 ignition[702]: no config at "/usr/lib/ignition/user.ign"
May 14 05:09:03.984547 systemd-networkd[806]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 05:09:03.980327 ignition[702]: op(1): [started] loading QEMU firmware config module
May 14 05:09:03.980331 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 14 05:09:03.987313 ignition[702]: op(1): [finished] loading QEMU firmware config module
May 14 05:09:04.024442 ignition[702]: parsing config with SHA512: d602aef4f43955fe7fc701e7359c3f5abf13ad637d5b67ac45161e38abefa41e3b45e972b5020eed24703a7f3791aaee9a27758d01f9b249e188f85408fee611
May 14 05:09:04.028252 unknown[702]: fetched base config from "system"
May 14 05:09:04.028264 unknown[702]: fetched user config from "qemu"
May 14 05:09:04.028612 ignition[702]: fetch-offline: fetch-offline passed
May 14 05:09:04.028667 ignition[702]: Ignition finished successfully
May 14 05:09:04.030420 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 05:09:04.031687 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 14 05:09:04.032433 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 14 05:09:04.057720 ignition[813]: Ignition 2.21.0
May 14 05:09:04.057737 ignition[813]: Stage: kargs
May 14 05:09:04.057873 ignition[813]: no configs at "/usr/lib/ignition/base.d"
May 14 05:09:04.057882 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 05:09:04.059356 ignition[813]: kargs: kargs passed
May 14 05:09:04.059421 ignition[813]: Ignition finished successfully
May 14 05:09:04.062761 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 14 05:09:04.065082 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 14 05:09:04.092999 ignition[822]: Ignition 2.21.0
May 14 05:09:04.093012 ignition[822]: Stage: disks
May 14 05:09:04.093147 ignition[822]: no configs at "/usr/lib/ignition/base.d"
May 14 05:09:04.093157 ignition[822]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 05:09:04.095316 ignition[822]: disks: disks passed
May 14 05:09:04.095368 ignition[822]: Ignition finished successfully
May 14 05:09:04.096632 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 14 05:09:04.098000 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 14 05:09:04.099136 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 14 05:09:04.101185 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 05:09:04.102977 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 05:09:04.104802 systemd[1]: Reached target basic.target - Basic System.
May 14 05:09:04.107183 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 14 05:09:04.125558 systemd-fsck[832]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 14 05:09:04.129485 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 14 05:09:04.133163 systemd[1]: Mounting sysroot.mount - /sysroot...
May 14 05:09:04.193519 kernel: EXT4-fs (vda9): mounted filesystem 008d778b-58b1-4ebe-9d06-c739d7d81b3b r/w with ordered data mode. Quota mode: none.
May 14 05:09:04.193750 systemd[1]: Mounted sysroot.mount - /sysroot.
May 14 05:09:04.194779 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 14 05:09:04.197023 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 05:09:04.198520 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 14 05:09:04.199368 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 14 05:09:04.199408 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 14 05:09:04.199431 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 05:09:04.209084 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 14 05:09:04.211403 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 14 05:09:04.214527 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (840)
May 14 05:09:04.217195 kernel: BTRFS info (device vda6): first mount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 05:09:04.217224 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 05:09:04.217234 kernel: BTRFS info (device vda6): using free-space-tree
May 14 05:09:04.220292 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 05:09:04.255812 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory
May 14 05:09:04.258802 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory
May 14 05:09:04.261715 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory
May 14 05:09:04.264752 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory
May 14 05:09:04.333723 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 14 05:09:04.335548 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 14 05:09:04.336932 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 14 05:09:04.353528 kernel: BTRFS info (device vda6): last unmount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 05:09:04.366394 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 14 05:09:04.371561 ignition[954]: INFO : Ignition 2.21.0
May 14 05:09:04.371561 ignition[954]: INFO : Stage: mount
May 14 05:09:04.372937 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 05:09:04.372937 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 05:09:04.372937 ignition[954]: INFO : mount: mount passed
May 14 05:09:04.376038 ignition[954]: INFO : Ignition finished successfully
May 14 05:09:04.374428 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 14 05:09:04.376230 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 14 05:09:04.803484 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 14 05:09:04.805202 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 14 05:09:04.830374 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 (254:6) scanned by mount (967)
May 14 05:09:04.830418 kernel: BTRFS info (device vda6): first mount of filesystem 9734c607-12cd-4e4b-b169-9d2d51a1b870
May 14 05:09:04.830438 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 14 05:09:04.831974 kernel: BTRFS info (device vda6): using free-space-tree
May 14 05:09:04.835443 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 14 05:09:04.862332 ignition[984]: INFO : Ignition 2.21.0
May 14 05:09:04.862332 ignition[984]: INFO : Stage: files
May 14 05:09:04.863796 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 05:09:04.863796 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 05:09:04.863796 ignition[984]: DEBUG : files: compiled without relabeling support, skipping
May 14 05:09:04.866862 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 14 05:09:04.866862 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 14 05:09:04.866862 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 14 05:09:04.866862 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 14 05:09:04.866862 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 14 05:09:04.866348 unknown[984]: wrote ssh authorized keys file for user: core
May 14 05:09:04.874217 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 05:09:04.874217 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 14 05:09:04.981461 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 14 05:09:05.168235 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 14 05:09:05.168235 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 05:09:05.171643 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 14 05:09:05.184602 systemd-networkd[806]: eth0: Gained IPv6LL
May 14 05:09:05.494697 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 14 05:09:05.563532 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 14 05:09:05.565144 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 14 05:09:05.565144 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 14 05:09:05.565144 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 14 05:09:05.565144 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 14 05:09:05.565144 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 05:09:05.565144 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 14 05:09:05.565144 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 05:09:05.565144 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 14 05:09:05.577289 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 14 05:09:05.577289 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 14 05:09:05.577289 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 05:09:05.577289 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 05:09:05.577289 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 05:09:05.577289 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-arm64.raw: attempt #1
May 14 05:09:05.803593 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 14 05:09:06.031412 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-arm64.raw"
May 14 05:09:06.031412 ignition[984]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 14 05:09:06.034805 ignition[984]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 05:09:06.034805 ignition[984]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 14 05:09:06.034805 ignition[984]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 14 05:09:06.034805 ignition[984]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 14 05:09:06.034805 ignition[984]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 05:09:06.034805 ignition[984]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 14 05:09:06.034805 ignition[984]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 14 05:09:06.034805 ignition[984]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 14 05:09:06.049534 ignition[984]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 14 05:09:06.053047 ignition[984]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 14 05:09:06.054425 ignition[984]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 14 05:09:06.054425 ignition[984]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 14 05:09:06.054425 ignition[984]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 14 05:09:06.054425 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 14 05:09:06.054425 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 14 05:09:06.054425 ignition[984]: INFO : files: files passed
May 14 05:09:06.054425 ignition[984]: INFO : Ignition finished successfully
May 14 05:09:06.055196 systemd[1]: Finished ignition-files.service - Ignition (files).
May 14 05:09:06.058657 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 14 05:09:06.061635 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 14 05:09:06.075122 systemd[1]: ignition-quench.service: Deactivated successfully.
May 14 05:09:06.075240 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 14 05:09:06.077931 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory
May 14 05:09:06.079487 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 05:09:06.079487 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 14 05:09:06.082220 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 14 05:09:06.081995 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 05:09:06.083449 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 14 05:09:06.086162 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 14 05:09:06.128866 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 14 05:09:06.128987 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 14 05:09:06.130858 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 14 05:09:06.132350 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 14 05:09:06.134008 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 14 05:09:06.134720 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 14 05:09:06.156561 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 05:09:06.159653 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 14 05:09:06.174730 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 14 05:09:06.175790 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 05:09:06.177606 systemd[1]: Stopped target timers.target - Timer Units.
May 14 05:09:06.179301 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 14 05:09:06.179419 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 14 05:09:06.181565 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 14 05:09:06.183440 systemd[1]: Stopped target basic.target - Basic System.
May 14 05:09:06.184911 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 14 05:09:06.186458 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 14 05:09:06.188263 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 14 05:09:06.190002 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 14 05:09:06.191600 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 14 05:09:06.193343 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 14 05:09:06.195086 systemd[1]: Stopped target sysinit.target - System Initialization.
May 14 05:09:06.196802 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 14 05:09:06.198340 systemd[1]: Stopped target swap.target - Swaps.
May 14 05:09:06.199663 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 14 05:09:06.199779 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 14 05:09:06.201826 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 14 05:09:06.203547 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 05:09:06.205305 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 14 05:09:06.205383 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 05:09:06.207203 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 14 05:09:06.207316 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 14 05:09:06.209748 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 14 05:09:06.209860 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 14 05:09:06.211552 systemd[1]: Stopped target paths.target - Path Units.
May 14 05:09:06.212989 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 14 05:09:06.216532 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 05:09:06.217607 systemd[1]: Stopped target slices.target - Slice Units.
May 14 05:09:06.219462 systemd[1]: Stopped target sockets.target - Socket Units.
May 14 05:09:06.220888 systemd[1]: iscsid.socket: Deactivated successfully.
May 14 05:09:06.220969 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 14 05:09:06.222371 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 14 05:09:06.222451 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 14 05:09:06.223888 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 14 05:09:06.224003 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 14 05:09:06.225591 systemd[1]: ignition-files.service: Deactivated successfully.
May 14 05:09:06.225691 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 14 05:09:06.227748 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 14 05:09:06.229988 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 14 05:09:06.231045 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 14 05:09:06.231188 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 05:09:06.233342 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 14 05:09:06.233443 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 14 05:09:06.238308 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 14 05:09:06.242675 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 14 05:09:06.251281 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 14 05:09:06.255898 ignition[1039]: INFO : Ignition 2.21.0
May 14 05:09:06.255898 ignition[1039]: INFO : Stage: umount
May 14 05:09:06.257563 ignition[1039]: INFO : no configs at "/usr/lib/ignition/base.d"
May 14 05:09:06.257563 ignition[1039]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 14 05:09:06.257563 ignition[1039]: INFO : umount: umount passed
May 14 05:09:06.257563 ignition[1039]: INFO : Ignition finished successfully
May 14 05:09:06.258729 systemd[1]: ignition-mount.service: Deactivated successfully.
May 14 05:09:06.259563 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 14 05:09:06.260592 systemd[1]: Stopped target network.target - Network.
May 14 05:09:06.261814 systemd[1]: ignition-disks.service: Deactivated successfully.
May 14 05:09:06.261876 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 14 05:09:06.263359 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 14 05:09:06.263400 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 14 05:09:06.265023 systemd[1]: ignition-setup.service: Deactivated successfully.
May 14 05:09:06.265076 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 14 05:09:06.266543 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 14 05:09:06.266582 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 14 05:09:06.268216 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 14 05:09:06.269881 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 14 05:09:06.275800 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 14 05:09:06.275935 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 14 05:09:06.278800 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 14 05:09:06.279037 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 14 05:09:06.279073 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 05:09:06.283161 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 14 05:09:06.283393 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 14 05:09:06.283489 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 14 05:09:06.288061 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 14 05:09:06.288426 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 14 05:09:06.289400 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 14 05:09:06.289436 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 14 05:09:06.293465 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 14 05:09:06.295245 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 14 05:09:06.295312 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 14 05:09:06.297420 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 05:09:06.297470 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 05:09:06.300156 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 14 05:09:06.300206 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 14 05:09:06.302157 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 05:09:06.306009 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 05:09:06.316557 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 14 05:09:06.317645 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 14 05:09:06.318712 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 14 05:09:06.318759 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 14 05:09:06.320514 systemd[1]: network-cleanup.service: Deactivated successfully.
May 14 05:09:06.320598 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 14 05:09:06.322092 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 14 05:09:06.322224 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 05:09:06.324240 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 14 05:09:06.324304 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 14 05:09:06.326050 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 14 05:09:06.326082 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 05:09:06.330266 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 14 05:09:06.330314 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 14 05:09:06.332622 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 14 05:09:06.332685 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 14 05:09:06.334899 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 14 05:09:06.334953 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 14 05:09:06.337965 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 14 05:09:06.339249 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 14 05:09:06.339306 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 14 05:09:06.342381 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 14 05:09:06.342423 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 05:09:06.345652 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 14 05:09:06.345701 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 05:09:06.348631 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 14 05:09:06.348672 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 05:09:06.350651 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 14 05:09:06.350697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 14 05:09:06.353934 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 14 05:09:06.354034 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 14 05:09:06.357006 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 14 05:09:06.359030 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 14 05:09:06.376759 systemd[1]: Switching root.
May 14 05:09:06.403532 systemd-journald[244]: Received SIGTERM from PID 1 (systemd).
May 14 05:09:06.403574 systemd-journald[244]: Journal stopped
May 14 05:09:07.143103 kernel: SELinux: policy capability network_peer_controls=1
May 14 05:09:07.143167 kernel: SELinux: policy capability open_perms=1
May 14 05:09:07.143180 kernel: SELinux: policy capability extended_socket_class=1
May 14 05:09:07.143190 kernel: SELinux: policy capability always_check_network=0
May 14 05:09:07.143202 kernel: SELinux: policy capability cgroup_seclabel=1
May 14 05:09:07.143214 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 14 05:09:07.143224 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 14 05:09:07.143233 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 14 05:09:07.143243 kernel: SELinux: policy capability userspace_initial_context=0
May 14 05:09:07.143253 kernel: audit: type=1403 audit(1747199346.585:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 14 05:09:07.143271 systemd[1]: Successfully loaded SELinux policy in 52.765ms.
May 14 05:09:07.143288 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.207ms.
May 14 05:09:07.143300 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 14 05:09:07.143316 systemd[1]: Detected virtualization kvm.
May 14 05:09:07.143326 systemd[1]: Detected architecture arm64.
May 14 05:09:07.143337 systemd[1]: Detected first boot.
May 14 05:09:07.143348 systemd[1]: Initializing machine ID from VM UUID.
May 14 05:09:07.143358 zram_generator::config[1084]: No configuration found.
May 14 05:09:07.143369 kernel: NET: Registered PF_VSOCK protocol family
May 14 05:09:07.143381 systemd[1]: Populated /etc with preset unit settings.
May 14 05:09:07.143392 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 14 05:09:07.143403 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 14 05:09:07.143414 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 14 05:09:07.143428 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 14 05:09:07.143438 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 14 05:09:07.143449 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 14 05:09:07.143460 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 14 05:09:07.143470 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 14 05:09:07.143481 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 14 05:09:07.143491 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 14 05:09:07.143522 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 14 05:09:07.143533 systemd[1]: Created slice user.slice - User and Session Slice.
May 14 05:09:07.143543 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 14 05:09:07.143554 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 14 05:09:07.143565 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 14 05:09:07.143575 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 14 05:09:07.143585 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 14 05:09:07.143597 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 14 05:09:07.143608 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 14 05:09:07.143620 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 14 05:09:07.143631 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 14 05:09:07.143642 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 14 05:09:07.143652 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 14 05:09:07.143662 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 14 05:09:07.143673 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 14 05:09:07.143684 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 14 05:09:07.143696 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 14 05:09:07.143706 systemd[1]: Reached target slices.target - Slice Units.
May 14 05:09:07.143717 systemd[1]: Reached target swap.target - Swaps.
May 14 05:09:07.143727 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 14 05:09:07.143737 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 14 05:09:07.143748 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 14 05:09:07.143758 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 14 05:09:07.143769 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 14 05:09:07.143779 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 14 05:09:07.143790 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 14 05:09:07.143802 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 14 05:09:07.143813 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 14 05:09:07.143823 systemd[1]: Mounting media.mount - External Media Directory...
May 14 05:09:07.143833 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 14 05:09:07.143844 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 14 05:09:07.143854 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 14 05:09:07.143865 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 14 05:09:07.143876 systemd[1]: Reached target machines.target - Containers.
May 14 05:09:07.143887 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 14 05:09:07.143898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 05:09:07.143908 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 14 05:09:07.143921 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 14 05:09:07.143931 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 05:09:07.143942 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 05:09:07.143952 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 05:09:07.143963 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 14 05:09:07.143973 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 05:09:07.143985 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 14 05:09:07.143996 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 14 05:09:07.144006 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 14 05:09:07.144017 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 14 05:09:07.144027 systemd[1]: Stopped systemd-fsck-usr.service.
May 14 05:09:07.144038 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 05:09:07.144048 kernel: fuse: init (API version 7.41)
May 14 05:09:07.144058 systemd[1]: Starting systemd-journald.service - Journal Service...
May 14 05:09:07.144069 kernel: loop: module loaded
May 14 05:09:07.144080 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 14 05:09:07.144090 kernel: ACPI: bus type drm_connector registered
May 14 05:09:07.144099 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 14 05:09:07.144117 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 14 05:09:07.144132 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 14 05:09:07.144142 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 14 05:09:07.144153 systemd[1]: verity-setup.service: Deactivated successfully.
May 14 05:09:07.144164 systemd[1]: Stopped verity-setup.service.
May 14 05:09:07.144199 systemd-journald[1156]: Collecting audit messages is disabled.
May 14 05:09:07.144221 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 14 05:09:07.144232 systemd-journald[1156]: Journal started
May 14 05:09:07.144253 systemd-journald[1156]: Runtime Journal (/run/log/journal/a7ba8755f5534fbfba120506886a57d3) is 6M, max 48.5M, 42.4M free.
May 14 05:09:07.152606 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 14 05:09:07.152633 systemd[1]: Mounted media.mount - External Media Directory.
May 14 05:09:07.152648 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 14 05:09:07.152661 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 14 05:09:07.152677 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 14 05:09:06.944315 systemd[1]: Queued start job for default target multi-user.target.
May 14 05:09:06.955376 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 14 05:09:06.955747 systemd[1]: systemd-journald.service: Deactivated successfully.
May 14 05:09:07.156219 systemd[1]: Started systemd-journald.service - Journal Service.
May 14 05:09:07.158537 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 14 05:09:07.159786 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 14 05:09:07.161222 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 14 05:09:07.161381 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 14 05:09:07.162731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 05:09:07.162876 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 05:09:07.164056 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 05:09:07.164218 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 05:09:07.165439 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 05:09:07.165618 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 05:09:07.166853 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 14 05:09:07.167012 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 14 05:09:07.168183 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 05:09:07.168330 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 05:09:07.169656 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 14 05:09:07.170848 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 14 05:09:07.172198 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 14 05:09:07.173612 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 14 05:09:07.185678 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 14 05:09:07.187952 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 14 05:09:07.189849 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 14 05:09:07.190868 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 14 05:09:07.190904 systemd[1]: Reached target local-fs.target - Local File Systems.
May 14 05:09:07.192718 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 14 05:09:07.200183 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 14 05:09:07.201379 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 05:09:07.202479 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 14 05:09:07.204186 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 14 05:09:07.205396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 05:09:07.206465 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 14 05:09:07.207545 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 05:09:07.211644 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 05:09:07.211859 systemd-journald[1156]: Time spent on flushing to /var/log/journal/a7ba8755f5534fbfba120506886a57d3 is 30.396ms for 886 entries.
May 14 05:09:07.211859 systemd-journald[1156]: System Journal (/var/log/journal/a7ba8755f5534fbfba120506886a57d3) is 8M, max 195.6M, 187.6M free.
May 14 05:09:07.260038 systemd-journald[1156]: Received client request to flush runtime journal.
May 14 05:09:07.260099 kernel: loop0: detected capacity change from 0 to 107312
May 14 05:09:07.260131 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 14 05:09:07.216216 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 14 05:09:07.219662 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 14 05:09:07.225790 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 14 05:09:07.227862 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 14 05:09:07.229976 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 14 05:09:07.232889 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 14 05:09:07.237334 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 14 05:09:07.243425 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 14 05:09:07.247745 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
May 14 05:09:07.247756 systemd-tmpfiles[1202]: ACLs are not supported, ignoring.
May 14 05:09:07.250790 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 05:09:07.252189 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 14 05:09:07.255560 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 14 05:09:07.263677 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 14 05:09:07.273062 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 14 05:09:07.286520 kernel: loop1: detected capacity change from 0 to 138376
May 14 05:09:07.287482 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 14 05:09:07.293640 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 14 05:09:07.311521 kernel: loop2: detected capacity change from 0 to 189592
May 14 05:09:07.318648 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
May 14 05:09:07.318662 systemd-tmpfiles[1221]: ACLs are not supported, ignoring.
May 14 05:09:07.323046 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 14 05:09:07.346526 kernel: loop3: detected capacity change from 0 to 107312
May 14 05:09:07.351740 kernel: loop4: detected capacity change from 0 to 138376
May 14 05:09:07.358686 kernel: loop5: detected capacity change from 0 to 189592
May 14 05:09:07.362129 (sd-merge)[1226]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 14 05:09:07.362529 (sd-merge)[1226]: Merged extensions into '/usr'.
May 14 05:09:07.368148 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)...
May 14 05:09:07.368169 systemd[1]: Reloading...
May 14 05:09:07.429094 zram_generator::config[1255]: No configuration found.
May 14 05:09:07.487895 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 14 05:09:07.511208 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 05:09:07.573325 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 14 05:09:07.573698 systemd[1]: Reloading finished in 205 ms.
May 14 05:09:07.611052 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 14 05:09:07.612349 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 14 05:09:07.627812 systemd[1]: Starting ensure-sysext.service...
May 14 05:09:07.629440 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 14 05:09:07.644700 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 14 05:09:07.644834 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 14 05:09:07.645064 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 14 05:09:07.645273 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 14 05:09:07.645663 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)...
May 14 05:09:07.645679 systemd[1]: Reloading...
May 14 05:09:07.646001 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 14 05:09:07.646226 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
May 14 05:09:07.646273 systemd-tmpfiles[1287]: ACLs are not supported, ignoring.
May 14 05:09:07.648897 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
May 14 05:09:07.648911 systemd-tmpfiles[1287]: Skipping /boot
May 14 05:09:07.657939 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot.
May 14 05:09:07.657957 systemd-tmpfiles[1287]: Skipping /boot
May 14 05:09:07.687557 zram_generator::config[1315]: No configuration found.
May 14 05:09:07.749897 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 14 05:09:07.810724 systemd[1]: Reloading finished in 164 ms.
May 14 05:09:07.833814 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 14 05:09:07.849713 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 14 05:09:07.856709 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 14 05:09:07.858887 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 14 05:09:07.867406 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 14 05:09:07.870322 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 14 05:09:07.872931 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 14 05:09:07.878558 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 14 05:09:07.881898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 05:09:07.885728 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 14 05:09:07.889966 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 14 05:09:07.892157 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 14 05:09:07.893365 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 05:09:07.893475 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 05:09:07.900564 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 14 05:09:07.901972 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 14 05:09:07.903141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 14 05:09:07.903254 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 14 05:09:07.907167 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 14 05:09:07.911563 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 14 05:09:07.913419 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 14 05:09:07.914552 systemd-udevd[1355]: Using default interface naming scheme 'v255'.
May 14 05:09:07.915136 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 14 05:09:07.917071 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 14 05:09:07.917241 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 14 05:09:07.918932 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 14 05:09:07.919078 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 14 05:09:07.920745 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 14 05:09:07.920892 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 14 05:09:07.922637 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 14 05:09:07.922899 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 14 05:09:07.927625 systemd[1]: Finished ensure-sysext.service.
May 14 05:09:07.934644 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 14 05:09:07.934708 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 14 05:09:07.936658 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 14 05:09:07.938597 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 14 05:09:07.941592 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 14 05:09:07.941722 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 14 05:09:07.945676 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 14 05:09:07.954483 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 14 05:09:07.970706 augenrules[1419]: No rules
May 14 05:09:07.972131 systemd[1]: audit-rules.service: Deactivated successfully.
May 14 05:09:07.976547 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 14 05:09:07.989526 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 14 05:09:08.002170 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 14 05:09:08.044683 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 14 05:09:08.047241 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 14 05:09:08.077550 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 14 05:09:08.116831 systemd-networkd[1401]: lo: Link UP
May 14 05:09:08.116838 systemd-networkd[1401]: lo: Gained carrier
May 14 05:09:08.117625 systemd-networkd[1401]: Enumeration completed
May 14 05:09:08.117738 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 14 05:09:08.118041 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 05:09:08.118051 systemd-networkd[1401]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 14 05:09:08.118477 systemd-networkd[1401]: eth0: Link UP
May 14 05:09:08.118601 systemd-networkd[1401]: eth0: Gained carrier
May 14 05:09:08.118615 systemd-networkd[1401]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 14 05:09:08.120642 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 14 05:09:08.123038 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 14 05:09:08.124414 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 14 05:09:08.125836 systemd[1]: Reached target time-set.target - System Time Set.
May 14 05:09:08.132905 systemd-resolved[1354]: Positive Trust Anchors:
May 14 05:09:08.133181 systemd-resolved[1354]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 14 05:09:08.133268 systemd-resolved[1354]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 14 05:09:08.133560 systemd-networkd[1401]: eth0: DHCPv4 address 10.0.0.132/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 14 05:09:08.134764 systemd-timesyncd[1385]: Network configuration changed, trying to establish connection.
May 14 05:09:08.136944 systemd-timesyncd[1385]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 14 05:09:08.137008 systemd-timesyncd[1385]: Initial clock synchronization to Wed 2025-05-14 05:09:07.908255 UTC.
May 14 05:09:08.140807 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 14 05:09:08.144379 systemd-resolved[1354]: Defaulting to hostname 'linux'.
May 14 05:09:08.150489 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 14 05:09:08.151659 systemd[1]: Reached target network.target - Network.
May 14 05:09:08.152609 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 14 05:09:08.153742 systemd[1]: Reached target sysinit.target - System Initialization.
May 14 05:09:08.154853 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 14 05:09:08.157101 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 14 05:09:08.158521 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 14 05:09:08.159664 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 14 05:09:08.161340 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 14 05:09:08.162548 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 14 05:09:08.162583 systemd[1]: Reached target paths.target - Path Units. May 14 05:09:08.163325 systemd[1]: Reached target timers.target - Timer Units. May 14 05:09:08.166480 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 14 05:09:08.168577 systemd[1]: Starting docker.socket - Docker Socket for the API... May 14 05:09:08.172672 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 14 05:09:08.174018 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 14 05:09:08.175565 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 14 05:09:08.178343 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 14 05:09:08.179877 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 14 05:09:08.181417 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 14 05:09:08.184698 systemd[1]: Reached target sockets.target - Socket Units. May 14 05:09:08.185612 systemd[1]: Reached target basic.target - Basic System. May 14 05:09:08.186622 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 14 05:09:08.186708 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 14 05:09:08.187751 systemd[1]: Starting containerd.service - containerd container runtime... 
May 14 05:09:08.190693 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 14 05:09:08.192476 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 14 05:09:08.197357 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 14 05:09:08.200209 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 14 05:09:08.201211 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 14 05:09:08.202275 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 14 05:09:08.203612 jq[1466]: false May 14 05:09:08.204204 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 14 05:09:08.207555 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 14 05:09:08.209517 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 14 05:09:08.215712 systemd[1]: Starting systemd-logind.service - User Login Management... May 14 05:09:08.217438 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 14 05:09:08.219359 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 14 05:09:08.219812 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 14 05:09:08.220976 systemd[1]: Starting update-engine.service - Update Engine... 
May 14 05:09:08.223688 extend-filesystems[1467]: Found loop3 May 14 05:09:08.225632 extend-filesystems[1467]: Found loop4 May 14 05:09:08.225632 extend-filesystems[1467]: Found loop5 May 14 05:09:08.225632 extend-filesystems[1467]: Found vda May 14 05:09:08.225632 extend-filesystems[1467]: Found vda1 May 14 05:09:08.225632 extend-filesystems[1467]: Found vda2 May 14 05:09:08.225632 extend-filesystems[1467]: Found vda3 May 14 05:09:08.225632 extend-filesystems[1467]: Found usr May 14 05:09:08.225632 extend-filesystems[1467]: Found vda4 May 14 05:09:08.225632 extend-filesystems[1467]: Found vda6 May 14 05:09:08.225632 extend-filesystems[1467]: Found vda7 May 14 05:09:08.225632 extend-filesystems[1467]: Found vda9 May 14 05:09:08.225632 extend-filesystems[1467]: Checking size of /dev/vda9 May 14 05:09:08.232692 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 14 05:09:08.236990 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 14 05:09:08.245889 jq[1483]: true May 14 05:09:08.248245 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 14 05:09:08.251176 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 14 05:09:08.251720 systemd[1]: motdgen.service: Deactivated successfully. May 14 05:09:08.251887 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 14 05:09:08.256553 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 14 05:09:08.258560 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 14 05:09:08.262509 extend-filesystems[1467]: Resized partition /dev/vda9 May 14 05:09:08.275362 jq[1493]: true May 14 05:09:08.278856 extend-filesystems[1495]: resize2fs 1.47.2 (1-Jan-2025) May 14 05:09:08.295069 update_engine[1481]: I20250514 05:09:08.294901 1481 main.cc:92] Flatcar Update Engine starting May 14 05:09:08.300600 (ntainerd)[1494]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 14 05:09:08.307532 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 14 05:09:08.316145 dbus-daemon[1464]: [system] SELinux support is enabled May 14 05:09:08.318064 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 14 05:09:08.321326 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 14 05:09:08.321361 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 14 05:09:08.322553 tar[1492]: linux-arm64/helm May 14 05:09:08.323424 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 14 05:09:08.323450 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 14 05:09:08.328914 systemd[1]: Started update-engine.service - Update Engine. May 14 05:09:08.331675 update_engine[1481]: I20250514 05:09:08.328997 1481 update_check_scheduler.cc:74] Next update check in 6m28s May 14 05:09:08.332204 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 14 05:09:08.333821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 14 05:09:08.337440 systemd-logind[1475]: Watching system buttons on /dev/input/event0 (Power Button) May 14 05:09:08.337629 systemd-logind[1475]: New seat seat0. May 14 05:09:08.338569 systemd[1]: Started systemd-logind.service - User Login Management. May 14 05:09:08.344511 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 14 05:09:08.356743 extend-filesystems[1495]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 14 05:09:08.356743 extend-filesystems[1495]: old_desc_blocks = 1, new_desc_blocks = 1 May 14 05:09:08.356743 extend-filesystems[1495]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 14 05:09:08.366007 extend-filesystems[1467]: Resized filesystem in /dev/vda9 May 14 05:09:08.358818 systemd[1]: extend-filesystems.service: Deactivated successfully. May 14 05:09:08.370674 bash[1523]: Updated "/home/core/.ssh/authorized_keys" May 14 05:09:08.359257 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 14 05:09:08.363948 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 14 05:09:08.372761 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
May 14 05:09:08.398414 locksmithd[1525]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 14 05:09:08.524208 containerd[1494]: time="2025-05-14T05:09:08Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 14 05:09:08.524813 containerd[1494]: time="2025-05-14T05:09:08.524781400Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 14 05:09:08.535368 containerd[1494]: time="2025-05-14T05:09:08.535281240Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="8.96µs" May 14 05:09:08.535368 containerd[1494]: time="2025-05-14T05:09:08.535363720Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 14 05:09:08.535439 containerd[1494]: time="2025-05-14T05:09:08.535384360Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 14 05:09:08.535687 containerd[1494]: time="2025-05-14T05:09:08.535609480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 14 05:09:08.535729 containerd[1494]: time="2025-05-14T05:09:08.535693760Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 14 05:09:08.535729 containerd[1494]: time="2025-05-14T05:09:08.535722960Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 14 05:09:08.535856 containerd[1494]: time="2025-05-14T05:09:08.535830400Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 14 05:09:08.535856 containerd[1494]: time="2025-05-14T05:09:08.535852560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 05:09:08.536210 containerd[1494]: time="2025-05-14T05:09:08.536179920Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 14 05:09:08.536241 containerd[1494]: time="2025-05-14T05:09:08.536208160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 05:09:08.536241 containerd[1494]: time="2025-05-14T05:09:08.536222320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 14 05:09:08.536241 containerd[1494]: time="2025-05-14T05:09:08.536230640Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 14 05:09:08.536384 containerd[1494]: time="2025-05-14T05:09:08.536360880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 14 05:09:08.536812 containerd[1494]: time="2025-05-14T05:09:08.536786680Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 05:09:08.536850 containerd[1494]: time="2025-05-14T05:09:08.536833120Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 14 05:09:08.536875 containerd[1494]: time="2025-05-14T05:09:08.536848080Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 14 05:09:08.536906 containerd[1494]: time="2025-05-14T05:09:08.536884480Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 14 05:09:08.537813 containerd[1494]: time="2025-05-14T05:09:08.537778800Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 14 05:09:08.539651 containerd[1494]: time="2025-05-14T05:09:08.539589480Z" level=info msg="metadata content store policy set" policy=shared May 14 05:09:08.543797 containerd[1494]: time="2025-05-14T05:09:08.543762200Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 14 05:09:08.543860 containerd[1494]: time="2025-05-14T05:09:08.543808400Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 14 05:09:08.543860 containerd[1494]: time="2025-05-14T05:09:08.543823480Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 14 05:09:08.543860 containerd[1494]: time="2025-05-14T05:09:08.543840320Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 14 05:09:08.543860 containerd[1494]: time="2025-05-14T05:09:08.543852760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 14 05:09:08.543951 containerd[1494]: time="2025-05-14T05:09:08.543863680Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 14 05:09:08.543951 containerd[1494]: time="2025-05-14T05:09:08.543874800Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 14 05:09:08.543951 containerd[1494]: time="2025-05-14T05:09:08.543886920Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 14 05:09:08.543951 containerd[1494]: time="2025-05-14T05:09:08.543897960Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 14 05:09:08.543951 containerd[1494]: time="2025-05-14T05:09:08.543908120Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 14 05:09:08.543951 containerd[1494]: time="2025-05-14T05:09:08.543917600Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 14 05:09:08.543951 containerd[1494]: time="2025-05-14T05:09:08.543930280Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 14 05:09:08.544064 containerd[1494]: time="2025-05-14T05:09:08.544034560Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 14 05:09:08.544064 containerd[1494]: time="2025-05-14T05:09:08.544054640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 14 05:09:08.544098 containerd[1494]: time="2025-05-14T05:09:08.544067840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 14 05:09:08.544098 containerd[1494]: time="2025-05-14T05:09:08.544079200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 14 05:09:08.544098 containerd[1494]: time="2025-05-14T05:09:08.544090520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 14 05:09:08.544160 containerd[1494]: time="2025-05-14T05:09:08.544101440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 14 05:09:08.544160 containerd[1494]: time="2025-05-14T05:09:08.544127160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 14 05:09:08.544160 containerd[1494]: time="2025-05-14T05:09:08.544138560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 14 05:09:08.544160 containerd[1494]: time="2025-05-14T05:09:08.544154800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 14 05:09:08.544233 containerd[1494]: time="2025-05-14T05:09:08.544165880Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 14 05:09:08.544233 containerd[1494]: time="2025-05-14T05:09:08.544176040Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 14 05:09:08.544392 containerd[1494]: time="2025-05-14T05:09:08.544358800Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 14 05:09:08.544392 containerd[1494]: time="2025-05-14T05:09:08.544383360Z" level=info msg="Start snapshots syncer" May 14 05:09:08.544436 containerd[1494]: time="2025-05-14T05:09:08.544410760Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 14 05:09:08.544659 containerd[1494]: time="2025-05-14T05:09:08.544617400Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 14 05:09:08.544760 containerd[1494]: time="2025-05-14T05:09:08.544688960Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 14 05:09:08.544782 containerd[1494]: time="2025-05-14T05:09:08.544758520Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 14 05:09:08.544879 containerd[1494]: time="2025-05-14T05:09:08.544859120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 14 05:09:08.544902 containerd[1494]: time="2025-05-14T05:09:08.544888200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 14 05:09:08.544926 containerd[1494]: time="2025-05-14T05:09:08.544900200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 14 05:09:08.544926 containerd[1494]: time="2025-05-14T05:09:08.544916200Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 14 05:09:08.544964 containerd[1494]: time="2025-05-14T05:09:08.544927840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 14 05:09:08.544964 containerd[1494]: time="2025-05-14T05:09:08.544939440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 14 05:09:08.544964 containerd[1494]: time="2025-05-14T05:09:08.544949520Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 14 05:09:08.545014 containerd[1494]: time="2025-05-14T05:09:08.544972240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 14 05:09:08.545014 containerd[1494]: time="2025-05-14T05:09:08.544984240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 14 05:09:08.545014 containerd[1494]: time="2025-05-14T05:09:08.544994720Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 14 05:09:08.545066 containerd[1494]: time="2025-05-14T05:09:08.545031080Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 05:09:08.545066 containerd[1494]: time="2025-05-14T05:09:08.545045160Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 14 05:09:08.545066 containerd[1494]: time="2025-05-14T05:09:08.545054040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 05:09:08.545066 containerd[1494]: time="2025-05-14T05:09:08.545062680Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 14 05:09:08.545143 containerd[1494]: time="2025-05-14T05:09:08.545070080Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 14 05:09:08.545143 containerd[1494]: time="2025-05-14T05:09:08.545079480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 14 05:09:08.545143 containerd[1494]: time="2025-05-14T05:09:08.545089760Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 14 05:09:08.545199 containerd[1494]: time="2025-05-14T05:09:08.545182800Z" level=info msg="runtime interface created" May 14 05:09:08.545199 containerd[1494]: time="2025-05-14T05:09:08.545191400Z" level=info msg="created NRI interface" May 14 05:09:08.545232 containerd[1494]: time="2025-05-14T05:09:08.545203440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 14 05:09:08.545232 containerd[1494]: time="2025-05-14T05:09:08.545215000Z" level=info msg="Connect containerd service" May 14 05:09:08.545269 containerd[1494]: time="2025-05-14T05:09:08.545240200Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 14 05:09:08.548488 containerd[1494]: time="2025-05-14T05:09:08.548449640Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 05:09:08.651529 containerd[1494]: time="2025-05-14T05:09:08.651457280Z" level=info msg="Start subscribing containerd event" May 14 05:09:08.651772 containerd[1494]: time="2025-05-14T05:09:08.651666880Z" level=info msg="Start recovering state" May 14 05:09:08.651837 containerd[1494]: time="2025-05-14T05:09:08.651806240Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 14 05:09:08.651874 containerd[1494]: time="2025-05-14T05:09:08.651859840Z" level=info msg=serving... address=/run/containerd/containerd.sock May 14 05:09:08.651993 containerd[1494]: time="2025-05-14T05:09:08.651915720Z" level=info msg="Start event monitor" May 14 05:09:08.651993 containerd[1494]: time="2025-05-14T05:09:08.651943320Z" level=info msg="Start cni network conf syncer for default" May 14 05:09:08.651993 containerd[1494]: time="2025-05-14T05:09:08.651951560Z" level=info msg="Start streaming server" May 14 05:09:08.652145 containerd[1494]: time="2025-05-14T05:09:08.651961480Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 14 05:09:08.652145 containerd[1494]: time="2025-05-14T05:09:08.652099680Z" level=info msg="runtime interface starting up..." May 14 05:09:08.652145 containerd[1494]: time="2025-05-14T05:09:08.652119360Z" level=info msg="starting plugins..." May 14 05:09:08.652237 containerd[1494]: time="2025-05-14T05:09:08.652225320Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 14 05:09:08.652547 containerd[1494]: time="2025-05-14T05:09:08.652436560Z" level=info msg="containerd successfully booted in 0.128599s" May 14 05:09:08.652665 systemd[1]: Started containerd.service - containerd container runtime.
May 14 05:09:08.706950 tar[1492]: linux-arm64/LICENSE May 14 05:09:08.707035 tar[1492]: linux-arm64/README.md May 14 05:09:08.731409 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 14 05:09:09.378319 sshd_keygen[1484]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 14 05:09:09.395668 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 14 05:09:09.398354 systemd[1]: Starting issuegen.service - Generate /run/issue... May 14 05:09:09.421331 systemd[1]: issuegen.service: Deactivated successfully. May 14 05:09:09.421535 systemd[1]: Finished issuegen.service - Generate /run/issue. May 14 05:09:09.423802 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 14 05:09:09.445363 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 14 05:09:09.447792 systemd[1]: Started getty@tty1.service - Getty on tty1. May 14 05:09:09.449615 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 14 05:09:09.450754 systemd[1]: Reached target getty.target - Login Prompts. May 14 05:09:09.728766 systemd-networkd[1401]: eth0: Gained IPv6LL May 14 05:09:09.731199 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 14 05:09:09.733053 systemd[1]: Reached target network-online.target - Network is Online. May 14 05:09:09.735275 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 14 05:09:09.737327 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:09:09.766993 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 14 05:09:09.779706 systemd[1]: coreos-metadata.service: Deactivated successfully. May 14 05:09:09.779865 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 14 05:09:09.782316 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
May 14 05:09:09.789960 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 14 05:09:10.227253 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:09:10.229203 systemd[1]: Reached target multi-user.target - Multi-User System. May 14 05:09:10.230688 (kubelet)[1598]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 05:09:10.230847 systemd[1]: Startup finished in 2.116s (kernel) + 4.957s (initrd) + 3.700s (userspace) = 10.774s. May 14 05:09:10.617780 kubelet[1598]: E0514 05:09:10.617656 1598 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 05:09:10.620371 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 05:09:10.620532 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 05:09:10.620860 systemd[1]: kubelet.service: Consumed 752ms CPU time, 231.7M memory peak. May 14 05:09:14.003805 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 14 05:09:14.004929 systemd[1]: Started sshd@0-10.0.0.132:22-10.0.0.1:52558.service - OpenSSH per-connection server daemon (10.0.0.1:52558). May 14 05:09:14.081533 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 52558 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:09:14.083153 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:09:14.088948 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 14 05:09:14.089867 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... 
May 14 05:09:14.094871 systemd-logind[1475]: New session 1 of user core. May 14 05:09:14.115530 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 14 05:09:14.117924 systemd[1]: Starting user@500.service - User Manager for UID 500... May 14 05:09:14.134277 (systemd)[1615]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 14 05:09:14.136319 systemd-logind[1475]: New session c1 of user core. May 14 05:09:14.239786 systemd[1615]: Queued start job for default target default.target. May 14 05:09:14.247291 systemd[1615]: Created slice app.slice - User Application Slice. May 14 05:09:14.247319 systemd[1615]: Reached target paths.target - Paths. May 14 05:09:14.247352 systemd[1615]: Reached target timers.target - Timers. May 14 05:09:14.248479 systemd[1615]: Starting dbus.socket - D-Bus User Message Bus Socket... May 14 05:09:14.256568 systemd[1615]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 14 05:09:14.256624 systemd[1615]: Reached target sockets.target - Sockets. May 14 05:09:14.256660 systemd[1615]: Reached target basic.target - Basic System. May 14 05:09:14.256687 systemd[1615]: Reached target default.target - Main User Target. May 14 05:09:14.256709 systemd[1615]: Startup finished in 115ms. May 14 05:09:14.256864 systemd[1]: Started user@500.service - User Manager for UID 500. May 14 05:09:14.258140 systemd[1]: Started session-1.scope - Session 1 of User core. May 14 05:09:14.324670 systemd[1]: Started sshd@1-10.0.0.132:22-10.0.0.1:52560.service - OpenSSH per-connection server daemon (10.0.0.1:52560). May 14 05:09:14.366619 sshd[1626]: Accepted publickey for core from 10.0.0.1 port 52560 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:09:14.367147 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:09:14.371274 systemd-logind[1475]: New session 2 of user core. 
May 14 05:09:14.387643 systemd[1]: Started session-2.scope - Session 2 of User core. May 14 05:09:14.436465 sshd[1628]: Connection closed by 10.0.0.1 port 52560 May 14 05:09:14.436748 sshd-session[1626]: pam_unix(sshd:session): session closed for user core May 14 05:09:14.449374 systemd[1]: sshd@1-10.0.0.132:22-10.0.0.1:52560.service: Deactivated successfully. May 14 05:09:14.450657 systemd[1]: session-2.scope: Deactivated successfully. May 14 05:09:14.452562 systemd-logind[1475]: Session 2 logged out. Waiting for processes to exit. May 14 05:09:14.454111 systemd[1]: Started sshd@2-10.0.0.132:22-10.0.0.1:52568.service - OpenSSH per-connection server daemon (10.0.0.1:52568). May 14 05:09:14.454928 systemd-logind[1475]: Removed session 2. May 14 05:09:14.507275 sshd[1634]: Accepted publickey for core from 10.0.0.1 port 52568 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:09:14.508139 sshd-session[1634]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:09:14.511619 systemd-logind[1475]: New session 3 of user core. May 14 05:09:14.520625 systemd[1]: Started session-3.scope - Session 3 of User core. May 14 05:09:14.566705 sshd[1636]: Connection closed by 10.0.0.1 port 52568 May 14 05:09:14.566950 sshd-session[1634]: pam_unix(sshd:session): session closed for user core May 14 05:09:14.581189 systemd[1]: sshd@2-10.0.0.132:22-10.0.0.1:52568.service: Deactivated successfully. May 14 05:09:14.582503 systemd[1]: session-3.scope: Deactivated successfully. May 14 05:09:14.583072 systemd-logind[1475]: Session 3 logged out. Waiting for processes to exit. May 14 05:09:14.585055 systemd[1]: Started sshd@3-10.0.0.132:22-10.0.0.1:52572.service - OpenSSH per-connection server daemon (10.0.0.1:52572). May 14 05:09:14.585733 systemd-logind[1475]: Removed session 3. 
May 14 05:09:14.637213 sshd[1642]: Accepted publickey for core from 10.0.0.1 port 52572 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:09:14.638227 sshd-session[1642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:09:14.641566 systemd-logind[1475]: New session 4 of user core. May 14 05:09:14.655619 systemd[1]: Started session-4.scope - Session 4 of User core. May 14 05:09:14.704554 sshd[1644]: Connection closed by 10.0.0.1 port 52572 May 14 05:09:14.704924 sshd-session[1642]: pam_unix(sshd:session): session closed for user core May 14 05:09:14.717345 systemd[1]: sshd@3-10.0.0.132:22-10.0.0.1:52572.service: Deactivated successfully. May 14 05:09:14.718670 systemd[1]: session-4.scope: Deactivated successfully. May 14 05:09:14.719249 systemd-logind[1475]: Session 4 logged out. Waiting for processes to exit. May 14 05:09:14.721244 systemd[1]: Started sshd@4-10.0.0.132:22-10.0.0.1:52574.service - OpenSSH per-connection server daemon (10.0.0.1:52574). May 14 05:09:14.722109 systemd-logind[1475]: Removed session 4. May 14 05:09:14.770319 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 52574 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:09:14.771257 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:09:14.775192 systemd-logind[1475]: New session 5 of user core. May 14 05:09:14.791671 systemd[1]: Started session-5.scope - Session 5 of User core. 
May 14 05:09:14.846879 sudo[1653]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 14 05:09:14.847135 sudo[1653]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:09:14.860997 sudo[1653]: pam_unix(sudo:session): session closed for user root May 14 05:09:14.864041 sshd[1652]: Connection closed by 10.0.0.1 port 52574 May 14 05:09:14.864363 sshd-session[1650]: pam_unix(sshd:session): session closed for user core May 14 05:09:14.885468 systemd[1]: sshd@4-10.0.0.132:22-10.0.0.1:52574.service: Deactivated successfully. May 14 05:09:14.886832 systemd[1]: session-5.scope: Deactivated successfully. May 14 05:09:14.887420 systemd-logind[1475]: Session 5 logged out. Waiting for processes to exit. May 14 05:09:14.889747 systemd[1]: Started sshd@5-10.0.0.132:22-10.0.0.1:52590.service - OpenSSH per-connection server daemon (10.0.0.1:52590). May 14 05:09:14.890339 systemd-logind[1475]: Removed session 5. May 14 05:09:14.943709 sshd[1659]: Accepted publickey for core from 10.0.0.1 port 52590 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:09:14.944761 sshd-session[1659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:09:14.948013 systemd-logind[1475]: New session 6 of user core. May 14 05:09:14.966670 systemd[1]: Started session-6.scope - Session 6 of User core. 
May 14 05:09:15.015548 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 14 05:09:15.015796 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:09:15.078199 sudo[1663]: pam_unix(sudo:session): session closed for user root May 14 05:09:15.083131 sudo[1662]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules May 14 05:09:15.083384 sudo[1662]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:09:15.091609 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 14 05:09:15.131061 augenrules[1685]: No rules May 14 05:09:15.131636 systemd[1]: audit-rules.service: Deactivated successfully. May 14 05:09:15.131831 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 14 05:09:15.133156 sudo[1662]: pam_unix(sudo:session): session closed for user root May 14 05:09:15.134180 sshd[1661]: Connection closed by 10.0.0.1 port 52590 May 14 05:09:15.134566 sshd-session[1659]: pam_unix(sshd:session): session closed for user core May 14 05:09:15.141295 systemd[1]: sshd@5-10.0.0.132:22-10.0.0.1:52590.service: Deactivated successfully. May 14 05:09:15.143642 systemd[1]: session-6.scope: Deactivated successfully. May 14 05:09:15.144291 systemd-logind[1475]: Session 6 logged out. Waiting for processes to exit. May 14 05:09:15.146417 systemd[1]: Started sshd@6-10.0.0.132:22-10.0.0.1:52596.service - OpenSSH per-connection server daemon (10.0.0.1:52596). May 14 05:09:15.147019 systemd-logind[1475]: Removed session 6. May 14 05:09:15.194431 sshd[1694]: Accepted publickey for core from 10.0.0.1 port 52596 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:09:15.195460 sshd-session[1694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:09:15.199014 systemd-logind[1475]: New session 7 of user core. 
May 14 05:09:15.211651 systemd[1]: Started session-7.scope - Session 7 of User core. May 14 05:09:15.260205 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 14 05:09:15.260451 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 14 05:09:15.605953 systemd[1]: Starting docker.service - Docker Application Container Engine... May 14 05:09:15.614820 (dockerd)[1717]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 14 05:09:15.870069 dockerd[1717]: time="2025-05-14T05:09:15.869947116Z" level=info msg="Starting up" May 14 05:09:15.871209 dockerd[1717]: time="2025-05-14T05:09:15.871184993Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 14 05:09:15.911656 dockerd[1717]: time="2025-05-14T05:09:15.911529076Z" level=info msg="Loading containers: start." May 14 05:09:15.920518 kernel: Initializing XFRM netlink socket May 14 05:09:16.101969 systemd-networkd[1401]: docker0: Link UP May 14 05:09:16.105160 dockerd[1717]: time="2025-05-14T05:09:16.105096795Z" level=info msg="Loading containers: done." May 14 05:09:16.116070 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3965071422-merged.mount: Deactivated successfully. 
May 14 05:09:16.118151 dockerd[1717]: time="2025-05-14T05:09:16.117853649Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 14 05:09:16.118151 dockerd[1717]: time="2025-05-14T05:09:16.117920767Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 May 14 05:09:16.118151 dockerd[1717]: time="2025-05-14T05:09:16.118008570Z" level=info msg="Initializing buildkit" May 14 05:09:16.139995 dockerd[1717]: time="2025-05-14T05:09:16.139919843Z" level=info msg="Completed buildkit initialization" May 14 05:09:16.144612 dockerd[1717]: time="2025-05-14T05:09:16.144584501Z" level=info msg="Daemon has completed initialization" May 14 05:09:16.144750 dockerd[1717]: time="2025-05-14T05:09:16.144715415Z" level=info msg="API listen on /run/docker.sock" May 14 05:09:16.144822 systemd[1]: Started docker.service - Docker Application Container Engine. May 14 05:09:16.813008 containerd[1494]: time="2025-05-14T05:09:16.812975607Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\"" May 14 05:09:17.379362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3700688738.mount: Deactivated successfully. 
May 14 05:09:18.274834 containerd[1494]: time="2025-05-14T05:09:18.274788087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:18.275747 containerd[1494]: time="2025-05-14T05:09:18.275509435Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.8: active requests=0, bytes read=25554610" May 14 05:09:18.276489 containerd[1494]: time="2025-05-14T05:09:18.276469356Z" level=info msg="ImageCreate event name:\"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:18.279507 containerd[1494]: time="2025-05-14T05:09:18.279452661Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:18.280035 containerd[1494]: time="2025-05-14T05:09:18.280010784Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.8\" with image id \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.8\", repo digest \"registry.k8s.io/kube-apiserver@sha256:30090db6a7d53799163ce82dae9e8ddb645fd47db93f2ec9da0cc787fd825625\", size \"25551408\" in 1.4669979s" May 14 05:09:18.280077 containerd[1494]: time="2025-05-14T05:09:18.280042826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.8\" returns image reference \"sha256:ef8fb1ea7c9599dbedea6f9d5589975ebc5bf4ec72f6be6acaaec59a723a09b3\"" May 14 05:09:18.280637 containerd[1494]: time="2025-05-14T05:09:18.280599085Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\"" May 14 05:09:19.469384 containerd[1494]: time="2025-05-14T05:09:19.469336092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.8\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:19.469922 containerd[1494]: time="2025-05-14T05:09:19.469890334Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.8: active requests=0, bytes read=22458980" May 14 05:09:19.470712 containerd[1494]: time="2025-05-14T05:09:19.470686384Z" level=info msg="ImageCreate event name:\"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:19.473270 containerd[1494]: time="2025-05-14T05:09:19.473234601Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:19.474226 containerd[1494]: time="2025-05-14T05:09:19.474203898Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.8\" with image id \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.8\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:29eaddc64792a689df48506e78bbc641d063ac8bb92d2e66ae2ad05977420747\", size \"23900539\" in 1.193575658s" May 14 05:09:19.474267 containerd[1494]: time="2025-05-14T05:09:19.474232522Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.8\" returns image reference \"sha256:ea6e6085feca75547d0422ab0536fe0d18c9ff5831de7a9d6a707c968027bb6a\"" May 14 05:09:19.474908 containerd[1494]: time="2025-05-14T05:09:19.474766715Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\"" May 14 05:09:20.626131 containerd[1494]: time="2025-05-14T05:09:20.626065387Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:20.626879 containerd[1494]: time="2025-05-14T05:09:20.626832587Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.8: active requests=0, bytes read=17125815" May 14 05:09:20.627578 containerd[1494]: time="2025-05-14T05:09:20.627546062Z" level=info msg="ImageCreate event name:\"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:20.629897 containerd[1494]: time="2025-05-14T05:09:20.629860774Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:20.630988 containerd[1494]: time="2025-05-14T05:09:20.630908556Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.8\" with image id \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.8\", repo digest \"registry.k8s.io/kube-scheduler@sha256:22994a2632e81059720480b9f6bdeb133b08d58492d0b36dfd6e9768b159b22a\", size \"18567392\" in 1.156105588s" May 14 05:09:20.630988 containerd[1494]: time="2025-05-14T05:09:20.630936411Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.8\" returns image reference \"sha256:1d2db6ef0dd2f3e08bdfcd46afde7b755b05192841f563d8df54b807daaa7d8d\"" May 14 05:09:20.631420 containerd[1494]: time="2025-05-14T05:09:20.631336800Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" May 14 05:09:20.870966 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 14 05:09:20.872294 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:09:20.987536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 05:09:20.990825 (kubelet)[1999]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 14 05:09:21.028035 kubelet[1999]: E0514 05:09:21.027991 1999 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 14 05:09:21.031206 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 14 05:09:21.031421 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 14 05:09:21.031934 systemd[1]: kubelet.service: Consumed 131ms CPU time, 95M memory peak. May 14 05:09:21.639018 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1354202001.mount: Deactivated successfully. May 14 05:09:21.847136 containerd[1494]: time="2025-05-14T05:09:21.847086519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:21.847710 containerd[1494]: time="2025-05-14T05:09:21.847673640Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=26871919" May 14 05:09:21.848581 containerd[1494]: time="2025-05-14T05:09:21.848539706Z" level=info msg="ImageCreate event name:\"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:21.850335 containerd[1494]: time="2025-05-14T05:09:21.850307352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:21.850844 containerd[1494]: time="2025-05-14T05:09:21.850811951Z" level=info 
msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"26870936\" in 1.219280672s" May 14 05:09:21.850886 containerd[1494]: time="2025-05-14T05:09:21.850845676Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:c5361ece77e80334cd5fb082c0b678cb3244f5834ecacea1719ae6b38b465581\"" May 14 05:09:21.851441 containerd[1494]: time="2025-05-14T05:09:21.851257810Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 14 05:09:22.454334 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount99632832.mount: Deactivated successfully. May 14 05:09:22.986644 containerd[1494]: time="2025-05-14T05:09:22.986600276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:22.988524 containerd[1494]: time="2025-05-14T05:09:22.988366574Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=16485383" May 14 05:09:22.989227 containerd[1494]: time="2025-05-14T05:09:22.989202264Z" level=info msg="ImageCreate event name:\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:22.992518 containerd[1494]: time="2025-05-14T05:09:22.992182339Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:22.993522 containerd[1494]: time="2025-05-14T05:09:22.993464966Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id 
\"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"16482581\" in 1.142179471s" May 14 05:09:22.993633 containerd[1494]: time="2025-05-14T05:09:22.993616320Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:2437cf762177702dec2dfe99a09c37427a15af6d9a57c456b65352667c223d93\"" May 14 05:09:22.994085 containerd[1494]: time="2025-05-14T05:09:22.994061905Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 14 05:09:23.465171 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1914229286.mount: Deactivated successfully. May 14 05:09:23.469389 containerd[1494]: time="2025-05-14T05:09:23.469348372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 05:09:23.470373 containerd[1494]: time="2025-05-14T05:09:23.470343686Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" May 14 05:09:23.471254 containerd[1494]: time="2025-05-14T05:09:23.471220963Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 05:09:23.476146 containerd[1494]: time="2025-05-14T05:09:23.475773213Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 05:09:23.476614 containerd[1494]: time="2025-05-14T05:09:23.476592546Z" level=info msg="Pulled 
image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 482.499235ms" May 14 05:09:23.476663 containerd[1494]: time="2025-05-14T05:09:23.476618073Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" May 14 05:09:23.477244 containerd[1494]: time="2025-05-14T05:09:23.477044864Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 14 05:09:23.992032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2575668787.mount: Deactivated successfully. May 14 05:09:25.825893 containerd[1494]: time="2025-05-14T05:09:25.825840843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:25.826406 containerd[1494]: time="2025-05-14T05:09:25.826366184Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" May 14 05:09:25.827299 containerd[1494]: time="2025-05-14T05:09:25.827258364Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:25.830737 containerd[1494]: time="2025-05-14T05:09:25.830709879Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:25.831774 containerd[1494]: time="2025-05-14T05:09:25.831733852Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.354656962s" May 14 05:09:25.831774 containerd[1494]: time="2025-05-14T05:09:25.831774634Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" May 14 05:09:31.116720 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 14 05:09:31.118173 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:09:31.132019 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 14 05:09:31.132082 systemd[1]: kubelet.service: Failed with result 'signal'. May 14 05:09:31.132294 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:09:31.135923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:09:31.156613 systemd[1]: Reload requested from client PID 2147 ('systemctl') (unit session-7.scope)... May 14 05:09:31.156629 systemd[1]: Reloading... May 14 05:09:31.224532 zram_generator::config[2189]: No configuration found. May 14 05:09:31.309518 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 05:09:31.392396 systemd[1]: Reloading finished in 235 ms. May 14 05:09:31.437925 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:09:31.440174 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:09:31.441217 systemd[1]: kubelet.service: Deactivated successfully. May 14 05:09:31.441434 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 05:09:31.441469 systemd[1]: kubelet.service: Consumed 79ms CPU time, 82.5M memory peak. May 14 05:09:31.442801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:09:31.552014 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:09:31.555296 (kubelet)[2239]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 05:09:31.591457 kubelet[2239]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 05:09:31.591457 kubelet[2239]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 05:09:31.591457 kubelet[2239]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 05:09:31.591820 kubelet[2239]: I0514 05:09:31.591654 2239 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 05:09:32.363306 kubelet[2239]: I0514 05:09:32.363120 2239 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 05:09:32.363306 kubelet[2239]: I0514 05:09:32.363150 2239 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 05:09:32.363437 kubelet[2239]: I0514 05:09:32.363403 2239 server.go:929] "Client rotation is on, will bootstrap in background" May 14 05:09:32.407926 kubelet[2239]: E0514 05:09:32.407883 2239 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.132:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" May 14 05:09:32.409218 kubelet[2239]: I0514 05:09:32.409193 2239 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 05:09:32.415228 kubelet[2239]: I0514 05:09:32.415212 2239 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 05:09:32.418753 kubelet[2239]: I0514 05:09:32.418710 2239 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 05:09:32.419563 kubelet[2239]: I0514 05:09:32.419534 2239 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 05:09:32.419708 kubelet[2239]: I0514 05:09:32.419677 2239 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 05:09:32.419858 kubelet[2239]: I0514 05:09:32.419700 2239 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyO
ptions":null,"CgroupVersion":2} May 14 05:09:32.420000 kubelet[2239]: I0514 05:09:32.419988 2239 topology_manager.go:138] "Creating topology manager with none policy" May 14 05:09:32.420023 kubelet[2239]: I0514 05:09:32.420001 2239 container_manager_linux.go:300] "Creating device plugin manager" May 14 05:09:32.420188 kubelet[2239]: I0514 05:09:32.420165 2239 state_mem.go:36] "Initialized new in-memory state store" May 14 05:09:32.421866 kubelet[2239]: I0514 05:09:32.421847 2239 kubelet.go:408] "Attempting to sync node with API server" May 14 05:09:32.421917 kubelet[2239]: I0514 05:09:32.421871 2239 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 05:09:32.422623 kubelet[2239]: I0514 05:09:32.421963 2239 kubelet.go:314] "Adding apiserver pod source" May 14 05:09:32.422623 kubelet[2239]: I0514 05:09:32.421976 2239 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 05:09:32.423821 kubelet[2239]: I0514 05:09:32.423792 2239 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 05:09:32.425840 kubelet[2239]: W0514 05:09:32.425790 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused May 14 05:09:32.425993 kubelet[2239]: E0514 05:09:32.425976 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.132:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" May 14 05:09:32.426304 kubelet[2239]: W0514 05:09:32.425883 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused May 14 05:09:32.426304 kubelet[2239]: E0514 05:09:32.426281 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.132:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" May 14 05:09:32.427022 kubelet[2239]: I0514 05:09:32.426589 2239 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 05:09:32.427521 kubelet[2239]: W0514 05:09:32.427486 2239 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 05:09:32.429914 kubelet[2239]: I0514 05:09:32.429886 2239 server.go:1269] "Started kubelet" May 14 05:09:32.430312 kubelet[2239]: I0514 05:09:32.430282 2239 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 05:09:32.431420 kubelet[2239]: I0514 05:09:32.431368 2239 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 05:09:32.431682 kubelet[2239]: I0514 05:09:32.431660 2239 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 05:09:32.431890 kubelet[2239]: I0514 05:09:32.431868 2239 server.go:460] "Adding debug handlers to kubelet server" May 14 05:09:32.432875 kubelet[2239]: I0514 05:09:32.432849 2239 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 05:09:32.433852 kubelet[2239]: I0514 05:09:32.433817 2239 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 05:09:32.434117 kubelet[2239]: 
E0514 05:09:32.433914 2239 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:09:32.434117 kubelet[2239]: I0514 05:09:32.434069 2239 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 05:09:32.436688 kubelet[2239]: E0514 05:09:32.436648 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="200ms" May 14 05:09:32.436757 kubelet[2239]: I0514 05:09:32.436705 2239 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 05:09:32.436889 kubelet[2239]: I0514 05:09:32.436862 2239 reconciler.go:26] "Reconciler: start to sync state" May 14 05:09:32.437007 kubelet[2239]: W0514 05:09:32.436970 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused May 14 05:09:32.437048 kubelet[2239]: E0514 05:09:32.437014 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.132:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" May 14 05:09:32.437278 kubelet[2239]: E0514 05:09:32.436295 2239 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.132:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.132:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.183f4c927977b021 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-14 05:09:32.429864993 +0000 UTC m=+0.871811501,LastTimestamp:2025-05-14 05:09:32.429864993 +0000 UTC m=+0.871811501,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 14 05:09:32.437943 kubelet[2239]: I0514 05:09:32.437729 2239 factory.go:221] Registration of the systemd container factory successfully May 14 05:09:32.437943 kubelet[2239]: I0514 05:09:32.437799 2239 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 05:09:32.438340 kubelet[2239]: E0514 05:09:32.438314 2239 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 05:09:32.439668 kubelet[2239]: I0514 05:09:32.439618 2239 factory.go:221] Registration of the containerd container factory successfully May 14 05:09:32.449358 kubelet[2239]: I0514 05:09:32.449311 2239 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 05:09:32.450458 kubelet[2239]: I0514 05:09:32.450441 2239 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 05:09:32.450779 kubelet[2239]: I0514 05:09:32.450559 2239 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 05:09:32.450779 kubelet[2239]: I0514 05:09:32.450582 2239 kubelet.go:2321] "Starting kubelet main sync loop" May 14 05:09:32.450779 kubelet[2239]: E0514 05:09:32.450627 2239 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 05:09:32.453115 kubelet[2239]: W0514 05:09:32.453087 2239 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.132:6443: connect: connection refused May 14 05:09:32.453164 kubelet[2239]: E0514 05:09:32.453124 2239 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.132:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.132:6443: connect: connection refused" logger="UnhandledError" May 14 05:09:32.453203 kubelet[2239]: I0514 05:09:32.453188 2239 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 05:09:32.453203 kubelet[2239]: I0514 05:09:32.453201 2239 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 05:09:32.453245 kubelet[2239]: I0514 05:09:32.453217 2239 state_mem.go:36] "Initialized new in-memory state store" May 14 05:09:32.529309 kubelet[2239]: I0514 05:09:32.529277 2239 policy_none.go:49] "None policy: Start" May 14 05:09:32.531915 kubelet[2239]: I0514 05:09:32.531895 2239 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 05:09:32.531988 kubelet[2239]: I0514 05:09:32.531923 2239 state_mem.go:35] "Initializing new in-memory state store" May 14 05:09:32.534004 kubelet[2239]: E0514 05:09:32.533976 2239 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:09:32.539025 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 05:09:32.550936 kubelet[2239]: E0514 05:09:32.550901 2239 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 05:09:32.554159 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 05:09:32.556742 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 14 05:09:32.578285 kubelet[2239]: I0514 05:09:32.578249 2239 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 05:09:32.578627 kubelet[2239]: I0514 05:09:32.578461 2239 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 05:09:32.578627 kubelet[2239]: I0514 05:09:32.578479 2239 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 05:09:32.579551 kubelet[2239]: I0514 05:09:32.579525 2239 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 05:09:32.580077 kubelet[2239]: E0514 05:09:32.579965 2239 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 14 05:09:32.637566 kubelet[2239]: E0514 05:09:32.637471 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="400ms" May 14 05:09:32.680356 kubelet[2239]: I0514 05:09:32.680330 2239 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:09:32.680753 kubelet[2239]: E0514 05:09:32.680729 2239 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" May 14 05:09:32.761022 systemd[1]: Created slice kubepods-burstable-podb17a71ca9b1d3b4692a995b09ab73e26.slice - libcontainer container kubepods-burstable-podb17a71ca9b1d3b4692a995b09ab73e26.slice. May 14 05:09:32.774878 systemd[1]: Created slice kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice - libcontainer container kubepods-burstable-podd4a6b755cb4739fbca401212ebb82b6d.slice. May 14 05:09:32.778108 systemd[1]: Created slice kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice - libcontainer container kubepods-burstable-pod0613557c150e4f35d1f3f822b5f32ff1.slice. May 14 05:09:32.838591 kubelet[2239]: I0514 05:09:32.838548 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:32.838879 kubelet[2239]: I0514 05:09:32.838639 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:32.838879 kubelet[2239]: I0514 05:09:32.838660 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 05:09:32.838879 
kubelet[2239]: I0514 05:09:32.838676 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:32.838879 kubelet[2239]: I0514 05:09:32.838700 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:32.838879 kubelet[2239]: I0514 05:09:32.838718 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:32.838992 kubelet[2239]: I0514 05:09:32.838732 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b17a71ca9b1d3b4692a995b09ab73e26-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b17a71ca9b1d3b4692a995b09ab73e26\") " pod="kube-system/kube-apiserver-localhost" May 14 05:09:32.838992 kubelet[2239]: I0514 05:09:32.838746 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b17a71ca9b1d3b4692a995b09ab73e26-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b17a71ca9b1d3b4692a995b09ab73e26\") " pod="kube-system/kube-apiserver-localhost" May 14 
05:09:32.838992 kubelet[2239]: I0514 05:09:32.838761 2239 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b17a71ca9b1d3b4692a995b09ab73e26-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b17a71ca9b1d3b4692a995b09ab73e26\") " pod="kube-system/kube-apiserver-localhost" May 14 05:09:32.882450 kubelet[2239]: I0514 05:09:32.882429 2239 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:09:32.882787 kubelet[2239]: E0514 05:09:32.882759 2239 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" May 14 05:09:33.038007 kubelet[2239]: E0514 05:09:33.037905 2239 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.132:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.132:6443: connect: connection refused" interval="800ms" May 14 05:09:33.073841 containerd[1494]: time="2025-05-14T05:09:33.073772408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b17a71ca9b1d3b4692a995b09ab73e26,Namespace:kube-system,Attempt:0,}" May 14 05:09:33.077675 containerd[1494]: time="2025-05-14T05:09:33.077639559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,}" May 14 05:09:33.080582 containerd[1494]: time="2025-05-14T05:09:33.080548023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,}" May 14 05:09:33.094869 containerd[1494]: time="2025-05-14T05:09:33.094823493Z" level=info msg="connecting to shim 
b383e8aac3823e787ac8dd93c7d4fd22c8b42fef81bbfa6c8306c9a66494c259" address="unix:///run/containerd/s/144ecb8d04dce73cf8fbcfd6f55afc78c70bdfecb11ae3f4bf30cdd902c87062" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:33.102524 containerd[1494]: time="2025-05-14T05:09:33.101888693Z" level=info msg="connecting to shim aea5cae8ffda91831d04e457cae07ddade1ca2a7fd567437679a41b1ece65e1c" address="unix:///run/containerd/s/9213303c1fce8cccc49e7ede7ef43d64c6a8aa20f2508f1fdadef57186e70ab3" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:33.113621 containerd[1494]: time="2025-05-14T05:09:33.113579083Z" level=info msg="connecting to shim 5e492c7c05dcb6e31a7836c0d2219ff8da03f2ad82b33198107f306f5f02257f" address="unix:///run/containerd/s/93ecf90b9ae0949432f1121221c0d311ea41e30da60e95828ee9c4097af01f0a" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:33.128646 systemd[1]: Started cri-containerd-b383e8aac3823e787ac8dd93c7d4fd22c8b42fef81bbfa6c8306c9a66494c259.scope - libcontainer container b383e8aac3823e787ac8dd93c7d4fd22c8b42fef81bbfa6c8306c9a66494c259. May 14 05:09:33.133385 systemd[1]: Started cri-containerd-5e492c7c05dcb6e31a7836c0d2219ff8da03f2ad82b33198107f306f5f02257f.scope - libcontainer container 5e492c7c05dcb6e31a7836c0d2219ff8da03f2ad82b33198107f306f5f02257f. May 14 05:09:33.134296 systemd[1]: Started cri-containerd-aea5cae8ffda91831d04e457cae07ddade1ca2a7fd567437679a41b1ece65e1c.scope - libcontainer container aea5cae8ffda91831d04e457cae07ddade1ca2a7fd567437679a41b1ece65e1c. 
May 14 05:09:33.168958 containerd[1494]: time="2025-05-14T05:09:33.168913576Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:b17a71ca9b1d3b4692a995b09ab73e26,Namespace:kube-system,Attempt:0,} returns sandbox id \"b383e8aac3823e787ac8dd93c7d4fd22c8b42fef81bbfa6c8306c9a66494c259\"" May 14 05:09:33.172166 containerd[1494]: time="2025-05-14T05:09:33.172035512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d4a6b755cb4739fbca401212ebb82b6d,Namespace:kube-system,Attempt:0,} returns sandbox id \"aea5cae8ffda91831d04e457cae07ddade1ca2a7fd567437679a41b1ece65e1c\"" May 14 05:09:33.172807 containerd[1494]: time="2025-05-14T05:09:33.172773895Z" level=info msg="CreateContainer within sandbox \"b383e8aac3823e787ac8dd93c7d4fd22c8b42fef81bbfa6c8306c9a66494c259\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 05:09:33.174579 containerd[1494]: time="2025-05-14T05:09:33.174467330Z" level=info msg="CreateContainer within sandbox \"aea5cae8ffda91831d04e457cae07ddade1ca2a7fd567437679a41b1ece65e1c\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 05:09:33.180021 containerd[1494]: time="2025-05-14T05:09:33.179989041Z" level=info msg="Container 4c7c7ed8685d02cd2e52252aea31334ce321e265edce8e3a7f4e236a3f06507a: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:33.180471 containerd[1494]: time="2025-05-14T05:09:33.180445750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:0613557c150e4f35d1f3f822b5f32ff1,Namespace:kube-system,Attempt:0,} returns sandbox id \"5e492c7c05dcb6e31a7836c0d2219ff8da03f2ad82b33198107f306f5f02257f\"" May 14 05:09:33.183378 containerd[1494]: time="2025-05-14T05:09:33.183339751Z" level=info msg="CreateContainer within sandbox \"5e492c7c05dcb6e31a7836c0d2219ff8da03f2ad82b33198107f306f5f02257f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 
05:09:33.186077 containerd[1494]: time="2025-05-14T05:09:33.186043773Z" level=info msg="Container 5d1e1745ad01c5bcfbd50cab8ed93e19e1a19736b495d0e470f92cda5baf202c: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:33.187345 containerd[1494]: time="2025-05-14T05:09:33.187312420Z" level=info msg="CreateContainer within sandbox \"b383e8aac3823e787ac8dd93c7d4fd22c8b42fef81bbfa6c8306c9a66494c259\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"4c7c7ed8685d02cd2e52252aea31334ce321e265edce8e3a7f4e236a3f06507a\"" May 14 05:09:33.188897 containerd[1494]: time="2025-05-14T05:09:33.188699251Z" level=info msg="StartContainer for \"4c7c7ed8685d02cd2e52252aea31334ce321e265edce8e3a7f4e236a3f06507a\"" May 14 05:09:33.189752 containerd[1494]: time="2025-05-14T05:09:33.189720185Z" level=info msg="connecting to shim 4c7c7ed8685d02cd2e52252aea31334ce321e265edce8e3a7f4e236a3f06507a" address="unix:///run/containerd/s/144ecb8d04dce73cf8fbcfd6f55afc78c70bdfecb11ae3f4bf30cdd902c87062" protocol=ttrpc version=3 May 14 05:09:33.193010 containerd[1494]: time="2025-05-14T05:09:33.192927623Z" level=info msg="CreateContainer within sandbox \"aea5cae8ffda91831d04e457cae07ddade1ca2a7fd567437679a41b1ece65e1c\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5d1e1745ad01c5bcfbd50cab8ed93e19e1a19736b495d0e470f92cda5baf202c\"" May 14 05:09:33.193459 containerd[1494]: time="2025-05-14T05:09:33.193426683Z" level=info msg="StartContainer for \"5d1e1745ad01c5bcfbd50cab8ed93e19e1a19736b495d0e470f92cda5baf202c\"" May 14 05:09:33.195974 containerd[1494]: time="2025-05-14T05:09:33.195867370Z" level=info msg="Container b5e622fec5f28ba0533dbe6325d88453edba755cd160639a4f2247a13377aa9b: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:33.197983 containerd[1494]: time="2025-05-14T05:09:33.197951911Z" level=info msg="connecting to shim 5d1e1745ad01c5bcfbd50cab8ed93e19e1a19736b495d0e470f92cda5baf202c" 
address="unix:///run/containerd/s/9213303c1fce8cccc49e7ede7ef43d64c6a8aa20f2508f1fdadef57186e70ab3" protocol=ttrpc version=3 May 14 05:09:33.205294 containerd[1494]: time="2025-05-14T05:09:33.205164219Z" level=info msg="CreateContainer within sandbox \"5e492c7c05dcb6e31a7836c0d2219ff8da03f2ad82b33198107f306f5f02257f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"b5e622fec5f28ba0533dbe6325d88453edba755cd160639a4f2247a13377aa9b\"" May 14 05:09:33.205769 containerd[1494]: time="2025-05-14T05:09:33.205743907Z" level=info msg="StartContainer for \"b5e622fec5f28ba0533dbe6325d88453edba755cd160639a4f2247a13377aa9b\"" May 14 05:09:33.207320 containerd[1494]: time="2025-05-14T05:09:33.206634113Z" level=info msg="connecting to shim b5e622fec5f28ba0533dbe6325d88453edba755cd160639a4f2247a13377aa9b" address="unix:///run/containerd/s/93ecf90b9ae0949432f1121221c0d311ea41e30da60e95828ee9c4097af01f0a" protocol=ttrpc version=3 May 14 05:09:33.209684 systemd[1]: Started cri-containerd-4c7c7ed8685d02cd2e52252aea31334ce321e265edce8e3a7f4e236a3f06507a.scope - libcontainer container 4c7c7ed8685d02cd2e52252aea31334ce321e265edce8e3a7f4e236a3f06507a. May 14 05:09:33.212393 systemd[1]: Started cri-containerd-5d1e1745ad01c5bcfbd50cab8ed93e19e1a19736b495d0e470f92cda5baf202c.scope - libcontainer container 5d1e1745ad01c5bcfbd50cab8ed93e19e1a19736b495d0e470f92cda5baf202c. May 14 05:09:33.232711 systemd[1]: Started cri-containerd-b5e622fec5f28ba0533dbe6325d88453edba755cd160639a4f2247a13377aa9b.scope - libcontainer container b5e622fec5f28ba0533dbe6325d88453edba755cd160639a4f2247a13377aa9b. 
May 14 05:09:33.267122 containerd[1494]: time="2025-05-14T05:09:33.267065570Z" level=info msg="StartContainer for \"4c7c7ed8685d02cd2e52252aea31334ce321e265edce8e3a7f4e236a3f06507a\" returns successfully" May 14 05:09:33.292350 kubelet[2239]: I0514 05:09:33.287213 2239 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:09:33.292350 kubelet[2239]: E0514 05:09:33.287537 2239 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.132:6443/api/v1/nodes\": dial tcp 10.0.0.132:6443: connect: connection refused" node="localhost" May 14 05:09:33.292458 containerd[1494]: time="2025-05-14T05:09:33.289870180Z" level=info msg="StartContainer for \"5d1e1745ad01c5bcfbd50cab8ed93e19e1a19736b495d0e470f92cda5baf202c\" returns successfully" May 14 05:09:33.292458 containerd[1494]: time="2025-05-14T05:09:33.289929232Z" level=info msg="StartContainer for \"b5e622fec5f28ba0533dbe6325d88453edba755cd160639a4f2247a13377aa9b\" returns successfully" May 14 05:09:34.092040 kubelet[2239]: I0514 05:09:34.092013 2239 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:09:34.473422 kubelet[2239]: E0514 05:09:34.473326 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:34.714485 kubelet[2239]: E0514 05:09:34.714446 2239 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 14 05:09:34.887644 kubelet[2239]: I0514 05:09:34.887539 2239 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 05:09:35.183800 kubelet[2239]: E0514 05:09:35.183690 2239 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-controller-manager-localhost" May 14 05:09:35.184105 kubelet[2239]: E0514 05:09:35.183861 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:35.425107 kubelet[2239]: I0514 05:09:35.425069 2239 apiserver.go:52] "Watching apiserver" May 14 05:09:35.437669 kubelet[2239]: I0514 05:09:35.437595 2239 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 05:09:35.475941 kubelet[2239]: E0514 05:09:35.475914 2239 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 14 05:09:35.476076 kubelet[2239]: E0514 05:09:35.476061 2239 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:36.510978 systemd[1]: Reload requested from client PID 2508 ('systemctl') (unit session-7.scope)... May 14 05:09:36.510995 systemd[1]: Reloading... May 14 05:09:36.579534 zram_generator::config[2551]: No configuration found. May 14 05:09:36.736470 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 05:09:36.831780 systemd[1]: Reloading finished in 320 ms. May 14 05:09:36.858599 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:09:36.868701 systemd[1]: kubelet.service: Deactivated successfully. May 14 05:09:36.869589 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:09:36.869662 systemd[1]: kubelet.service: Consumed 1.259s CPU time, 115.8M memory peak. 
May 14 05:09:36.871406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 05:09:37.018785 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 05:09:37.022846 (kubelet)[2593]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 05:09:37.068155 kubelet[2593]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 05:09:37.068155 kubelet[2593]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 05:09:37.068155 kubelet[2593]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 05:09:37.068527 kubelet[2593]: I0514 05:09:37.068200 2593 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 05:09:37.073889 kubelet[2593]: I0514 05:09:37.073837 2593 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" May 14 05:09:37.074164 kubelet[2593]: I0514 05:09:37.074136 2593 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 05:09:37.074448 kubelet[2593]: I0514 05:09:37.074432 2593 server.go:929] "Client rotation is on, will bootstrap in background" May 14 05:09:37.075754 kubelet[2593]: I0514 05:09:37.075734 2593 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 14 05:09:37.078257 kubelet[2593]: I0514 05:09:37.078225 2593 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 05:09:37.081718 kubelet[2593]: I0514 05:09:37.081697 2593 server.go:1426] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 14 05:09:37.084185 kubelet[2593]: I0514 05:09:37.084095 2593 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 05:09:37.084372 kubelet[2593]: I0514 05:09:37.084357 2593 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 14 05:09:37.084615 kubelet[2593]: I0514 05:09:37.084582 2593 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 05:09:37.084842 kubelet[2593]: I0514 05:09:37.084675 2593 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagef
s.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 14 05:09:37.084984 kubelet[2593]: I0514 05:09:37.084954 2593 topology_manager.go:138] "Creating topology manager with none policy" May 14 05:09:37.085040 kubelet[2593]: I0514 05:09:37.085032 2593 container_manager_linux.go:300] "Creating device plugin manager" May 14 05:09:37.085121 kubelet[2593]: I0514 05:09:37.085111 2593 state_mem.go:36] "Initialized new in-memory state store" May 14 05:09:37.085280 kubelet[2593]: I0514 05:09:37.085267 2593 kubelet.go:408] "Attempting to sync node with API server" May 14 05:09:37.085364 kubelet[2593]: I0514 05:09:37.085354 2593 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 05:09:37.085439 kubelet[2593]: I0514 05:09:37.085430 2593 kubelet.go:314] "Adding apiserver pod source" May 14 05:09:37.085505 kubelet[2593]: I0514 05:09:37.085484 2593 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 05:09:37.086358 kubelet[2593]: I0514 05:09:37.086320 2593 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 14 05:09:37.086797 kubelet[2593]: I0514 05:09:37.086775 2593 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 05:09:37.087521 kubelet[2593]: I0514 05:09:37.087475 2593 server.go:1269] "Started kubelet" May 14 05:09:37.088383 kubelet[2593]: I0514 05:09:37.088326 2593 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" 
qps=100 burstTokens=10 May 14 05:09:37.088610 kubelet[2593]: I0514 05:09:37.088585 2593 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 05:09:37.088821 kubelet[2593]: I0514 05:09:37.088649 2593 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 05:09:37.089747 kubelet[2593]: I0514 05:09:37.089719 2593 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 05:09:37.095503 kubelet[2593]: I0514 05:09:37.092619 2593 server.go:460] "Adding debug handlers to kubelet server" May 14 05:09:37.095503 kubelet[2593]: I0514 05:09:37.094478 2593 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 14 05:09:37.096307 kubelet[2593]: I0514 05:09:37.096155 2593 volume_manager.go:289] "Starting Kubelet Volume Manager" May 14 05:09:37.096580 kubelet[2593]: E0514 05:09:37.096562 2593 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 14 05:09:37.098790 kubelet[2593]: I0514 05:09:37.097616 2593 desired_state_of_world_populator.go:146] "Desired state populator starts to run" May 14 05:09:37.098790 kubelet[2593]: I0514 05:09:37.097750 2593 reconciler.go:26] "Reconciler: start to sync state" May 14 05:09:37.101359 kubelet[2593]: I0514 05:09:37.101325 2593 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 05:09:37.103462 kubelet[2593]: I0514 05:09:37.102517 2593 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 05:09:37.103462 kubelet[2593]: I0514 05:09:37.102547 2593 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 05:09:37.103462 kubelet[2593]: I0514 05:09:37.102572 2593 kubelet.go:2321] "Starting kubelet main sync loop" May 14 05:09:37.103462 kubelet[2593]: E0514 05:09:37.102614 2593 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 05:09:37.110012 kubelet[2593]: I0514 05:09:37.109984 2593 factory.go:221] Registration of the containerd container factory successfully May 14 05:09:37.110012 kubelet[2593]: I0514 05:09:37.110008 2593 factory.go:221] Registration of the systemd container factory successfully May 14 05:09:37.110148 kubelet[2593]: I0514 05:09:37.110102 2593 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 05:09:37.115448 kubelet[2593]: E0514 05:09:37.115416 2593 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 05:09:37.145828 kubelet[2593]: I0514 05:09:37.145794 2593 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 05:09:37.145828 kubelet[2593]: I0514 05:09:37.145815 2593 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 05:09:37.145828 kubelet[2593]: I0514 05:09:37.145835 2593 state_mem.go:36] "Initialized new in-memory state store" May 14 05:09:37.145995 kubelet[2593]: I0514 05:09:37.145985 2593 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 05:09:37.146020 kubelet[2593]: I0514 05:09:37.145995 2593 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 05:09:37.146020 kubelet[2593]: I0514 05:09:37.146011 2593 policy_none.go:49] "None policy: Start" May 14 05:09:37.146656 kubelet[2593]: I0514 05:09:37.146638 2593 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 05:09:37.146701 kubelet[2593]: I0514 05:09:37.146661 2593 state_mem.go:35] "Initializing new in-memory state store" May 14 05:09:37.146829 kubelet[2593]: I0514 05:09:37.146815 2593 state_mem.go:75] "Updated machine memory state" May 14 05:09:37.150990 kubelet[2593]: I0514 05:09:37.150965 2593 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 05:09:37.151146 kubelet[2593]: I0514 05:09:37.151116 2593 eviction_manager.go:189] "Eviction manager: starting control loop" May 14 05:09:37.151187 kubelet[2593]: I0514 05:09:37.151132 2593 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 05:09:37.151303 kubelet[2593]: I0514 05:09:37.151285 2593 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 05:09:37.255140 kubelet[2593]: I0514 05:09:37.255111 2593 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 14 05:09:37.262280 kubelet[2593]: I0514 05:09:37.262246 2593 
kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 14 05:09:37.262381 kubelet[2593]: I0514 05:09:37.262320 2593 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 14 05:09:37.399326 kubelet[2593]: I0514 05:09:37.399180 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b17a71ca9b1d3b4692a995b09ab73e26-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"b17a71ca9b1d3b4692a995b09ab73e26\") " pod="kube-system/kube-apiserver-localhost" May 14 05:09:37.399326 kubelet[2593]: I0514 05:09:37.399218 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b17a71ca9b1d3b4692a995b09ab73e26-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"b17a71ca9b1d3b4692a995b09ab73e26\") " pod="kube-system/kube-apiserver-localhost" May 14 05:09:37.399326 kubelet[2593]: I0514 05:09:37.399241 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:37.399326 kubelet[2593]: I0514 05:09:37.399261 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:37.399326 kubelet[2593]: I0514 05:09:37.399277 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" 
(UniqueName: \"kubernetes.io/host-path/0613557c150e4f35d1f3f822b5f32ff1-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"0613557c150e4f35d1f3f822b5f32ff1\") " pod="kube-system/kube-scheduler-localhost" May 14 05:09:37.399541 kubelet[2593]: I0514 05:09:37.399292 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b17a71ca9b1d3b4692a995b09ab73e26-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"b17a71ca9b1d3b4692a995b09ab73e26\") " pod="kube-system/kube-apiserver-localhost" May 14 05:09:37.399541 kubelet[2593]: I0514 05:09:37.399308 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:37.399541 kubelet[2593]: I0514 05:09:37.399323 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:37.399541 kubelet[2593]: I0514 05:09:37.399337 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d4a6b755cb4739fbca401212ebb82b6d-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d4a6b755cb4739fbca401212ebb82b6d\") " pod="kube-system/kube-controller-manager-localhost" May 14 05:09:37.509956 kubelet[2593]: E0514 05:09:37.509917 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:37.511786 kubelet[2593]: E0514 05:09:37.511764 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:37.511894 kubelet[2593]: E0514 05:09:37.511861 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:37.518754 sudo[2629]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 14 05:09:37.519021 sudo[2629]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 14 05:09:37.942445 sudo[2629]: pam_unix(sudo:session): session closed for user root May 14 05:09:38.087075 kubelet[2593]: I0514 05:09:38.086989 2593 apiserver.go:52] "Watching apiserver" May 14 05:09:38.098544 kubelet[2593]: I0514 05:09:38.098486 2593 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" May 14 05:09:38.127092 kubelet[2593]: E0514 05:09:38.127041 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:38.134012 kubelet[2593]: E0514 05:09:38.133972 2593 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 14 05:09:38.134172 kubelet[2593]: E0514 05:09:38.134151 2593 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 14 05:09:38.134326 kubelet[2593]: E0514 05:09:38.134305 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:38.134464 kubelet[2593]: E0514 05:09:38.134441 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:38.148906 kubelet[2593]: I0514 05:09:38.148831 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.1488092619999999 podStartE2EDuration="1.148809262s" podCreationTimestamp="2025-05-14 05:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:09:38.14867914 +0000 UTC m=+1.122443355" watchObservedRunningTime="2025-05-14 05:09:38.148809262 +0000 UTC m=+1.122573477" May 14 05:09:38.165038 kubelet[2593]: I0514 05:09:38.164986 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.164971606 podStartE2EDuration="1.164971606s" podCreationTimestamp="2025-05-14 05:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:09:38.157840409 +0000 UTC m=+1.131604624" watchObservedRunningTime="2025-05-14 05:09:38.164971606 +0000 UTC m=+1.138735781" May 14 05:09:38.174224 kubelet[2593]: I0514 05:09:38.174147 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.174131156 podStartE2EDuration="1.174131156s" podCreationTimestamp="2025-05-14 05:09:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:09:38.165277943 +0000 UTC m=+1.139042158" watchObservedRunningTime="2025-05-14 05:09:38.174131156 +0000 UTC m=+1.147895411" May 14 05:09:39.129995 kubelet[2593]: 
E0514 05:09:39.129805 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:39.130892 kubelet[2593]: E0514 05:09:39.130476 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:39.533386 sudo[1697]: pam_unix(sudo:session): session closed for user root May 14 05:09:39.534713 sshd[1696]: Connection closed by 10.0.0.1 port 52596 May 14 05:09:39.535221 sshd-session[1694]: pam_unix(sshd:session): session closed for user core May 14 05:09:39.538863 systemd-logind[1475]: Session 7 logged out. Waiting for processes to exit. May 14 05:09:39.539084 systemd[1]: sshd@6-10.0.0.132:22-10.0.0.1:52596.service: Deactivated successfully. May 14 05:09:39.541269 systemd[1]: session-7.scope: Deactivated successfully. May 14 05:09:39.541432 systemd[1]: session-7.scope: Consumed 7.407s CPU time, 264.8M memory peak. May 14 05:09:39.543317 systemd-logind[1475]: Removed session 7. May 14 05:09:40.130075 kubelet[2593]: E0514 05:09:40.130038 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:43.790245 kubelet[2593]: I0514 05:09:43.790216 2593 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 05:09:43.791109 kubelet[2593]: I0514 05:09:43.791044 2593 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 05:09:43.791153 containerd[1494]: time="2025-05-14T05:09:43.790846437Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 14 05:09:44.601042 systemd[1]: Created slice kubepods-besteffort-pod0b1352fc_06b8_4e34_b468_4a7e061d8a84.slice - libcontainer container kubepods-besteffort-pod0b1352fc_06b8_4e34_b468_4a7e061d8a84.slice. May 14 05:09:44.613819 systemd[1]: Created slice kubepods-burstable-pod674f3bcf_3155_4a84_b9e2_0081a5851991.slice - libcontainer container kubepods-burstable-pod674f3bcf_3155_4a84_b9e2_0081a5851991.slice. May 14 05:09:44.645429 kubelet[2593]: I0514 05:09:44.645352 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-config-path\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.645689 kubelet[2593]: I0514 05:09:44.645653 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-lib-modules\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.645742 kubelet[2593]: I0514 05:09:44.645713 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-hostproc\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.645770 kubelet[2593]: I0514 05:09:44.645743 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-xtables-lock\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.645855 kubelet[2593]: I0514 05:09:44.645770 2593 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/674f3bcf-3155-4a84-b9e2-0081a5851991-hubble-tls\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.645855 kubelet[2593]: I0514 05:09:44.645797 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b1352fc-06b8-4e34-b468-4a7e061d8a84-lib-modules\") pod \"kube-proxy-5qkwg\" (UID: \"0b1352fc-06b8-4e34-b468-4a7e061d8a84\") " pod="kube-system/kube-proxy-5qkwg" May 14 05:09:44.645855 kubelet[2593]: I0514 05:09:44.645825 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-bpf-maps\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.645925 kubelet[2593]: I0514 05:09:44.645871 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cni-path\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.645925 kubelet[2593]: I0514 05:09:44.645889 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-host-proc-sys-net\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.645925 kubelet[2593]: I0514 05:09:44.645907 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0b1352fc-06b8-4e34-b468-4a7e061d8a84-xtables-lock\") pod \"kube-proxy-5qkwg\" (UID: \"0b1352fc-06b8-4e34-b468-4a7e061d8a84\") " pod="kube-system/kube-proxy-5qkwg" May 14 05:09:44.645925 kubelet[2593]: I0514 05:09:44.645922 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-898nr\" (UniqueName: \"kubernetes.io/projected/674f3bcf-3155-4a84-b9e2-0081a5851991-kube-api-access-898nr\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.646013 kubelet[2593]: I0514 05:09:44.645938 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-cgroup\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.646013 kubelet[2593]: I0514 05:09:44.645952 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-etc-cni-netd\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.646013 kubelet[2593]: I0514 05:09:44.645968 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-run\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.646013 kubelet[2593]: I0514 05:09:44.645982 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/674f3bcf-3155-4a84-b9e2-0081a5851991-clustermesh-secrets\") pod \"cilium-jqhzj\" (UID: 
\"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.646013 kubelet[2593]: I0514 05:09:44.645998 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/0b1352fc-06b8-4e34-b468-4a7e061d8a84-kube-proxy\") pod \"kube-proxy-5qkwg\" (UID: \"0b1352fc-06b8-4e34-b468-4a7e061d8a84\") " pod="kube-system/kube-proxy-5qkwg" May 14 05:09:44.646013 kubelet[2593]: I0514 05:09:44.646013 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-host-proc-sys-kernel\") pod \"cilium-jqhzj\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " pod="kube-system/cilium-jqhzj" May 14 05:09:44.646134 kubelet[2593]: I0514 05:09:44.646027 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6wl2\" (UniqueName: \"kubernetes.io/projected/0b1352fc-06b8-4e34-b468-4a7e061d8a84-kube-api-access-w6wl2\") pod \"kube-proxy-5qkwg\" (UID: \"0b1352fc-06b8-4e34-b468-4a7e061d8a84\") " pod="kube-system/kube-proxy-5qkwg" May 14 05:09:44.911296 kubelet[2593]: E0514 05:09:44.911179 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:44.912387 containerd[1494]: time="2025-05-14T05:09:44.912334565Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qkwg,Uid:0b1352fc-06b8-4e34-b468-4a7e061d8a84,Namespace:kube-system,Attempt:0,}" May 14 05:09:44.916697 kubelet[2593]: E0514 05:09:44.916630 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:44.917695 containerd[1494]: 
time="2025-05-14T05:09:44.917569618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jqhzj,Uid:674f3bcf-3155-4a84-b9e2-0081a5851991,Namespace:kube-system,Attempt:0,}" May 14 05:09:44.946949 containerd[1494]: time="2025-05-14T05:09:44.946895123Z" level=info msg="connecting to shim f209afad64d83b2536fcaab46e21844a84366beaa447caa2399697f5e17253ef" address="unix:///run/containerd/s/09ecad11ec31abd1e89a27c585a5f6c17a4c66f953237fcfc1dea7c9ccf7040a" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:44.948173 kubelet[2593]: I0514 05:09:44.947317 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce879c3c-c521-4cd7-95c2-68fdcfc90412-cilium-config-path\") pod \"cilium-operator-5d85765b45-w652r\" (UID: \"ce879c3c-c521-4cd7-95c2-68fdcfc90412\") " pod="kube-system/cilium-operator-5d85765b45-w652r" May 14 05:09:44.948173 kubelet[2593]: I0514 05:09:44.947416 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8lnph\" (UniqueName: \"kubernetes.io/projected/ce879c3c-c521-4cd7-95c2-68fdcfc90412-kube-api-access-8lnph\") pod \"cilium-operator-5d85765b45-w652r\" (UID: \"ce879c3c-c521-4cd7-95c2-68fdcfc90412\") " pod="kube-system/cilium-operator-5d85765b45-w652r" May 14 05:09:44.961225 systemd[1]: Created slice kubepods-besteffort-podce879c3c_c521_4cd7_95c2_68fdcfc90412.slice - libcontainer container kubepods-besteffort-podce879c3c_c521_4cd7_95c2_68fdcfc90412.slice. 
May 14 05:09:44.963750 containerd[1494]: time="2025-05-14T05:09:44.963705910Z" level=info msg="connecting to shim 75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e" address="unix:///run/containerd/s/a7daf6d3074c9aac224f88b11e10c19686040dec064d0c4e960ac8c91e972200" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:44.981641 systemd[1]: Started cri-containerd-f209afad64d83b2536fcaab46e21844a84366beaa447caa2399697f5e17253ef.scope - libcontainer container f209afad64d83b2536fcaab46e21844a84366beaa447caa2399697f5e17253ef. May 14 05:09:44.984046 systemd[1]: Started cri-containerd-75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e.scope - libcontainer container 75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e. May 14 05:09:45.005967 containerd[1494]: time="2025-05-14T05:09:45.005915897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5qkwg,Uid:0b1352fc-06b8-4e34-b468-4a7e061d8a84,Namespace:kube-system,Attempt:0,} returns sandbox id \"f209afad64d83b2536fcaab46e21844a84366beaa447caa2399697f5e17253ef\"" May 14 05:09:45.006860 kubelet[2593]: E0514 05:09:45.006830 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:45.011350 containerd[1494]: time="2025-05-14T05:09:45.011187383Z" level=info msg="CreateContainer within sandbox \"f209afad64d83b2536fcaab46e21844a84366beaa447caa2399697f5e17253ef\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 05:09:45.011868 containerd[1494]: time="2025-05-14T05:09:45.011673235Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jqhzj,Uid:674f3bcf-3155-4a84-b9e2-0081a5851991,Namespace:kube-system,Attempt:0,} returns sandbox id \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\"" May 14 05:09:45.012364 kubelet[2593]: E0514 05:09:45.012344 2593 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:45.014384 containerd[1494]: time="2025-05-14T05:09:45.014193896Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 14 05:09:45.022667 containerd[1494]: time="2025-05-14T05:09:45.022633298Z" level=info msg="Container a05e6251654d508d441739cbe9d0fbdde1070fcde00552e5f557d87e3c455104: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:45.032920 containerd[1494]: time="2025-05-14T05:09:45.032873944Z" level=info msg="CreateContainer within sandbox \"f209afad64d83b2536fcaab46e21844a84366beaa447caa2399697f5e17253ef\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"a05e6251654d508d441739cbe9d0fbdde1070fcde00552e5f557d87e3c455104\"" May 14 05:09:45.033552 containerd[1494]: time="2025-05-14T05:09:45.033429798Z" level=info msg="StartContainer for \"a05e6251654d508d441739cbe9d0fbdde1070fcde00552e5f557d87e3c455104\"" May 14 05:09:45.035718 containerd[1494]: time="2025-05-14T05:09:45.035639651Z" level=info msg="connecting to shim a05e6251654d508d441739cbe9d0fbdde1070fcde00552e5f557d87e3c455104" address="unix:///run/containerd/s/09ecad11ec31abd1e89a27c585a5f6c17a4c66f953237fcfc1dea7c9ccf7040a" protocol=ttrpc version=3 May 14 05:09:45.056660 systemd[1]: Started cri-containerd-a05e6251654d508d441739cbe9d0fbdde1070fcde00552e5f557d87e3c455104.scope - libcontainer container a05e6251654d508d441739cbe9d0fbdde1070fcde00552e5f557d87e3c455104. 
May 14 05:09:45.090602 containerd[1494]: time="2025-05-14T05:09:45.090558411Z" level=info msg="StartContainer for \"a05e6251654d508d441739cbe9d0fbdde1070fcde00552e5f557d87e3c455104\" returns successfully" May 14 05:09:45.141087 kubelet[2593]: E0514 05:09:45.141044 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:45.266174 kubelet[2593]: E0514 05:09:45.266050 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:45.266776 containerd[1494]: time="2025-05-14T05:09:45.266726764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-w652r,Uid:ce879c3c-c521-4cd7-95c2-68fdcfc90412,Namespace:kube-system,Attempt:0,}" May 14 05:09:45.284191 containerd[1494]: time="2025-05-14T05:09:45.283311563Z" level=info msg="connecting to shim db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f" address="unix:///run/containerd/s/4a87557a111d446582094929176985afcf6adcc9e10cb98e2433b608dae951f5" namespace=k8s.io protocol=ttrpc version=3 May 14 05:09:45.308646 systemd[1]: Started cri-containerd-db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f.scope - libcontainer container db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f. 
May 14 05:09:45.347608 containerd[1494]: time="2025-05-14T05:09:45.347562427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-w652r,Uid:ce879c3c-c521-4cd7-95c2-68fdcfc90412,Namespace:kube-system,Attempt:0,} returns sandbox id \"db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f\"" May 14 05:09:45.348265 kubelet[2593]: E0514 05:09:45.348196 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:46.995392 kubelet[2593]: E0514 05:09:46.995348 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:47.010036 kubelet[2593]: I0514 05:09:47.008925 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5qkwg" podStartSLOduration=3.008910801 podStartE2EDuration="3.008910801s" podCreationTimestamp="2025-05-14 05:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:09:45.14796143 +0000 UTC m=+8.121725645" watchObservedRunningTime="2025-05-14 05:09:47.008910801 +0000 UTC m=+9.982675016" May 14 05:09:47.134828 kubelet[2593]: E0514 05:09:47.134609 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:47.144519 kubelet[2593]: E0514 05:09:47.144456 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:47.144683 kubelet[2593]: E0514 05:09:47.144666 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:49.515757 kubelet[2593]: E0514 05:09:49.515729 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:51.610397 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1918663013.mount: Deactivated successfully. May 14 05:09:52.973869 containerd[1494]: time="2025-05-14T05:09:52.973815455Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:52.974440 containerd[1494]: time="2025-05-14T05:09:52.974405984Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" May 14 05:09:52.975299 containerd[1494]: time="2025-05-14T05:09:52.975271239Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 05:09:52.977411 containerd[1494]: time="2025-05-14T05:09:52.977319593Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.963084736s" May 14 05:09:52.977411 containerd[1494]: time="2025-05-14T05:09:52.977357273Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" May 
14 05:09:52.983773 containerd[1494]: time="2025-05-14T05:09:52.983736179Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" May 14 05:09:52.991368 containerd[1494]: time="2025-05-14T05:09:52.991338344Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 14 05:09:52.998520 containerd[1494]: time="2025-05-14T05:09:52.998353180Z" level=info msg="Container 3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1: CDI devices from CRI Config.CDIDevices: []" May 14 05:09:53.001299 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2417797187.mount: Deactivated successfully. May 14 05:09:53.003473 containerd[1494]: time="2025-05-14T05:09:53.003420301Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\"" May 14 05:09:53.007607 containerd[1494]: time="2025-05-14T05:09:53.007567366Z" level=info msg="StartContainer for \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\"" May 14 05:09:53.008335 containerd[1494]: time="2025-05-14T05:09:53.008309858Z" level=info msg="connecting to shim 3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1" address="unix:///run/containerd/s/a7daf6d3074c9aac224f88b11e10c19686040dec064d0c4e960ac8c91e972200" protocol=ttrpc version=3 May 14 05:09:53.058658 systemd[1]: Started cri-containerd-3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1.scope - libcontainer container 3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1. 
May 14 05:09:53.083869 containerd[1494]: time="2025-05-14T05:09:53.083828403Z" level=info msg="StartContainer for \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\" returns successfully" May 14 05:09:53.148400 systemd[1]: cri-containerd-3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1.scope: Deactivated successfully. May 14 05:09:53.148841 systemd[1]: cri-containerd-3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1.scope: Consumed 75ms CPU time, 7M memory peak, 3.1M written to disk. May 14 05:09:53.190188 kubelet[2593]: E0514 05:09:53.190144 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:09:53.212086 containerd[1494]: time="2025-05-14T05:09:53.211930413Z" level=info msg="received exit event container_id:\"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\" id:\"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\" pid:3018 exited_at:{seconds:1747199393 nanos:198881528}" May 14 05:09:53.219770 containerd[1494]: time="2025-05-14T05:09:53.219739415Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\" id:\"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\" pid:3018 exited_at:{seconds:1747199393 nanos:198881528}" May 14 05:09:53.247349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1-rootfs.mount: Deactivated successfully. May 14 05:09:53.456611 update_engine[1481]: I20250514 05:09:53.456528 1481 update_attempter.cc:509] Updating boot flags... 
May 14 05:09:54.189420 kubelet[2593]: E0514 05:09:54.189392 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:09:54.192288 containerd[1494]: time="2025-05-14T05:09:54.192237166Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 05:09:54.207517 containerd[1494]: time="2025-05-14T05:09:54.207404232Z" level=info msg="Container a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7: CDI devices from CRI Config.CDIDevices: []"
May 14 05:09:54.210439 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1316794307.mount: Deactivated successfully.
May 14 05:09:54.214159 containerd[1494]: time="2025-05-14T05:09:54.214120452Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\""
May 14 05:09:54.214650 containerd[1494]: time="2025-05-14T05:09:54.214627060Z" level=info msg="StartContainer for \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\""
May 14 05:09:54.215524 containerd[1494]: time="2025-05-14T05:09:54.215477313Z" level=info msg="connecting to shim a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7" address="unix:///run/containerd/s/a7daf6d3074c9aac224f88b11e10c19686040dec064d0c4e960ac8c91e972200" protocol=ttrpc version=3
May 14 05:09:54.234644 systemd[1]: Started cri-containerd-a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7.scope - libcontainer container a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7.
May 14 05:09:54.258668 containerd[1494]: time="2025-05-14T05:09:54.258623636Z" level=info msg="StartContainer for \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\" returns successfully"
May 14 05:09:54.271193 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 14 05:09:54.271398 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 14 05:09:54.272037 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 14 05:09:54.273340 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 14 05:09:54.275150 containerd[1494]: time="2025-05-14T05:09:54.275111402Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\" id:\"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\" pid:3081 exited_at:{seconds:1747199394 nanos:274863038}"
May 14 05:09:54.275258 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 14 05:09:54.275679 systemd[1]: cri-containerd-a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7.scope: Deactivated successfully.
May 14 05:09:54.283560 containerd[1494]: time="2025-05-14T05:09:54.283475327Z" level=info msg="received exit event container_id:\"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\" id:\"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\" pid:3081 exited_at:{seconds:1747199394 nanos:274863038}"
May 14 05:09:54.301352 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 14 05:09:55.192767 kubelet[2593]: E0514 05:09:55.192735 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:09:55.197059 containerd[1494]: time="2025-05-14T05:09:55.196743250Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 05:09:55.206298 containerd[1494]: time="2025-05-14T05:09:55.206265185Z" level=info msg="Container 031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294: CDI devices from CRI Config.CDIDevices: []"
May 14 05:09:55.208490 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7-rootfs.mount: Deactivated successfully.
May 14 05:09:55.213676 containerd[1494]: time="2025-05-14T05:09:55.213630769Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\""
May 14 05:09:55.214221 containerd[1494]: time="2025-05-14T05:09:55.214198497Z" level=info msg="StartContainer for \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\""
May 14 05:09:55.215577 containerd[1494]: time="2025-05-14T05:09:55.215544877Z" level=info msg="connecting to shim 031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294" address="unix:///run/containerd/s/a7daf6d3074c9aac224f88b11e10c19686040dec064d0c4e960ac8c91e972200" protocol=ttrpc version=3
May 14 05:09:55.239661 systemd[1]: Started cri-containerd-031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294.scope - libcontainer container 031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294.
May 14 05:09:55.274591 containerd[1494]: time="2025-05-14T05:09:55.274555634Z" level=info msg="StartContainer for \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\" returns successfully"
May 14 05:09:55.282372 systemd[1]: cri-containerd-031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294.scope: Deactivated successfully.
May 14 05:09:55.292397 containerd[1494]: time="2025-05-14T05:09:55.292315966Z" level=info msg="received exit event container_id:\"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\" id:\"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\" pid:3136 exited_at:{seconds:1747199395 nanos:292147404}"
May 14 05:09:55.292478 containerd[1494]: time="2025-05-14T05:09:55.292407528Z" level=info msg="TaskExit event in podsandbox handler container_id:\"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\" id:\"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\" pid:3136 exited_at:{seconds:1747199395 nanos:292147404}"
May 14 05:09:55.308334 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294-rootfs.mount: Deactivated successfully.
May 14 05:09:56.199794 kubelet[2593]: E0514 05:09:56.199735 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:09:56.202679 containerd[1494]: time="2025-05-14T05:09:56.202591071Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 05:09:56.216973 containerd[1494]: time="2025-05-14T05:09:56.216910345Z" level=info msg="Container c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473: CDI devices from CRI Config.CDIDevices: []"
May 14 05:09:56.217076 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2652001405.mount: Deactivated successfully.
May 14 05:09:56.223685 containerd[1494]: time="2025-05-14T05:09:56.223654116Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\""
May 14 05:09:56.224303 containerd[1494]: time="2025-05-14T05:09:56.224275644Z" level=info msg="StartContainer for \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\""
May 14 05:09:56.225207 containerd[1494]: time="2025-05-14T05:09:56.225179096Z" level=info msg="connecting to shim c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473" address="unix:///run/containerd/s/a7daf6d3074c9aac224f88b11e10c19686040dec064d0c4e960ac8c91e972200" protocol=ttrpc version=3
May 14 05:09:56.244706 systemd[1]: Started cri-containerd-c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473.scope - libcontainer container c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473.
May 14 05:09:56.265259 systemd[1]: cri-containerd-c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473.scope: Deactivated successfully.
May 14 05:09:56.266933 containerd[1494]: time="2025-05-14T05:09:56.266770339Z" level=info msg="received exit event container_id:\"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\" id:\"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\" pid:3176 exited_at:{seconds:1747199396 nanos:266038609}"
May 14 05:09:56.267087 containerd[1494]: time="2025-05-14T05:09:56.267062703Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\" id:\"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\" pid:3176 exited_at:{seconds:1747199396 nanos:266038609}"
May 14 05:09:56.273137 containerd[1494]: time="2025-05-14T05:09:56.273104384Z" level=info msg="StartContainer for \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\" returns successfully"
May 14 05:09:56.282700 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473-rootfs.mount: Deactivated successfully.
May 14 05:09:56.655537 containerd[1494]: time="2025-05-14T05:09:56.655401592Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:09:56.656914 containerd[1494]: time="2025-05-14T05:09:56.656886852Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 14 05:09:56.659532 containerd[1494]: time="2025-05-14T05:09:56.657891306Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 14 05:09:56.662275 containerd[1494]: time="2025-05-14T05:09:56.662231484Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.678461705s"
May 14 05:09:56.662275 containerd[1494]: time="2025-05-14T05:09:56.662271645Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 14 05:09:56.664380 containerd[1494]: time="2025-05-14T05:09:56.664352593Z" level=info msg="CreateContainer within sandbox \"db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 14 05:09:56.670982 containerd[1494]: time="2025-05-14T05:09:56.670912442Z" level=info msg="Container a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688: CDI devices from CRI Config.CDIDevices: []"
May 14 05:09:56.675927 containerd[1494]: time="2025-05-14T05:09:56.675840228Z" level=info msg="CreateContainer within sandbox \"db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\""
May 14 05:09:56.676263 containerd[1494]: time="2025-05-14T05:09:56.676231993Z" level=info msg="StartContainer for \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\""
May 14 05:09:56.677134 containerd[1494]: time="2025-05-14T05:09:56.677105485Z" level=info msg="connecting to shim a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688" address="unix:///run/containerd/s/4a87557a111d446582094929176985afcf6adcc9e10cb98e2433b608dae951f5" protocol=ttrpc version=3
May 14 05:09:56.694669 systemd[1]: Started cri-containerd-a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688.scope - libcontainer container a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688.
May 14 05:09:56.760926 containerd[1494]: time="2025-05-14T05:09:56.760887578Z" level=info msg="StartContainer for \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" returns successfully"
May 14 05:09:57.207793 kubelet[2593]: E0514 05:09:57.207586 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:09:57.210511 kubelet[2593]: E0514 05:09:57.210443 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:09:57.211638 containerd[1494]: time="2025-05-14T05:09:57.210644564Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 05:09:57.227696 containerd[1494]: time="2025-05-14T05:09:57.227380259Z" level=info msg="Container 71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82: CDI devices from CRI Config.CDIDevices: []"
May 14 05:09:57.239270 containerd[1494]: time="2025-05-14T05:09:57.239213492Z" level=info msg="CreateContainer within sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\""
May 14 05:09:57.240089 containerd[1494]: time="2025-05-14T05:09:57.240050503Z" level=info msg="StartContainer for \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\""
May 14 05:09:57.241264 containerd[1494]: time="2025-05-14T05:09:57.241184997Z" level=info msg="connecting to shim 71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82" address="unix:///run/containerd/s/a7daf6d3074c9aac224f88b11e10c19686040dec064d0c4e960ac8c91e972200" protocol=ttrpc version=3
May 14 05:09:57.291054 systemd[1]: Started cri-containerd-71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82.scope - libcontainer container 71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82.
May 14 05:09:57.291921 kubelet[2593]: I0514 05:09:57.291396 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-w652r" podStartSLOduration=1.977234927 podStartE2EDuration="13.291376324s" podCreationTimestamp="2025-05-14 05:09:44 +0000 UTC" firstStartedPulling="2025-05-14 05:09:45.348766736 +0000 UTC m=+8.322530951" lastFinishedPulling="2025-05-14 05:09:56.662908133 +0000 UTC m=+19.636672348" observedRunningTime="2025-05-14 05:09:57.290911518 +0000 UTC m=+20.264675733" watchObservedRunningTime="2025-05-14 05:09:57.291376324 +0000 UTC m=+20.265140539"
May 14 05:09:57.351090 containerd[1494]: time="2025-05-14T05:09:57.350978212Z" level=info msg="StartContainer for \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" returns successfully"
May 14 05:09:57.496358 containerd[1494]: time="2025-05-14T05:09:57.496224323Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" id:\"a87ee1875f9eb47f3b5c8237e19a346164d3b15e2df4ba03d529671c5a913575\" pid:3285 exited_at:{seconds:1747199397 nanos:494950186}"
May 14 05:09:57.523356 kubelet[2593]: I0514 05:09:57.523312 2593 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 14 05:09:57.561459 systemd[1]: Created slice kubepods-burstable-pod08c21bcb_58ba_4436_87e8_45e99de588a1.slice - libcontainer container kubepods-burstable-pod08c21bcb_58ba_4436_87e8_45e99de588a1.slice.
May 14 05:09:57.579132 systemd[1]: Created slice kubepods-burstable-pod005844b3_128b_4e42_9826_20ce3b852644.slice - libcontainer container kubepods-burstable-pod005844b3_128b_4e42_9826_20ce3b852644.slice.
May 14 05:09:57.733433 kubelet[2593]: I0514 05:09:57.733370 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/08c21bcb-58ba-4436-87e8-45e99de588a1-config-volume\") pod \"coredns-6f6b679f8f-kb64l\" (UID: \"08c21bcb-58ba-4436-87e8-45e99de588a1\") " pod="kube-system/coredns-6f6b679f8f-kb64l"
May 14 05:09:57.733433 kubelet[2593]: I0514 05:09:57.733426 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rc9qd\" (UniqueName: \"kubernetes.io/projected/005844b3-128b-4e42-9826-20ce3b852644-kube-api-access-rc9qd\") pod \"coredns-6f6b679f8f-hcgrw\" (UID: \"005844b3-128b-4e42-9826-20ce3b852644\") " pod="kube-system/coredns-6f6b679f8f-hcgrw"
May 14 05:09:57.733433 kubelet[2593]: I0514 05:09:57.733452 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/005844b3-128b-4e42-9826-20ce3b852644-config-volume\") pod \"coredns-6f6b679f8f-hcgrw\" (UID: \"005844b3-128b-4e42-9826-20ce3b852644\") " pod="kube-system/coredns-6f6b679f8f-hcgrw"
May 14 05:09:57.733433 kubelet[2593]: I0514 05:09:57.733473 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p98zc\" (UniqueName: \"kubernetes.io/projected/08c21bcb-58ba-4436-87e8-45e99de588a1-kube-api-access-p98zc\") pod \"coredns-6f6b679f8f-kb64l\" (UID: \"08c21bcb-58ba-4436-87e8-45e99de588a1\") " pod="kube-system/coredns-6f6b679f8f-kb64l"
May 14 05:09:57.877189 kubelet[2593]: E0514 05:09:57.876313 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:09:57.878072 containerd[1494]: time="2025-05-14T05:09:57.877933800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kb64l,Uid:08c21bcb-58ba-4436-87e8-45e99de588a1,Namespace:kube-system,Attempt:0,}"
May 14 05:09:57.881802 kubelet[2593]: E0514 05:09:57.881773 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:09:57.882976 containerd[1494]: time="2025-05-14T05:09:57.882602740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hcgrw,Uid:005844b3-128b-4e42-9826-20ce3b852644,Namespace:kube-system,Attempt:0,}"
May 14 05:09:58.216832 kubelet[2593]: E0514 05:09:58.216514 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:09:58.216832 kubelet[2593]: E0514 05:09:58.216599 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:09:58.231060 kubelet[2593]: I0514 05:09:58.230995 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jqhzj" podStartSLOduration=6.260188336 podStartE2EDuration="14.23098013s" podCreationTimestamp="2025-05-14 05:09:44 +0000 UTC" firstStartedPulling="2025-05-14 05:09:45.012796822 +0000 UTC m=+7.986561037" lastFinishedPulling="2025-05-14 05:09:52.983588616 +0000 UTC m=+15.957352831" observedRunningTime="2025-05-14 05:09:58.230368643 +0000 UTC m=+21.204132858" watchObservedRunningTime="2025-05-14 05:09:58.23098013 +0000 UTC m=+21.204744345"
May 14 05:09:59.218014 kubelet[2593]: E0514 05:09:59.217978 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:00.220223 kubelet[2593]: E0514 05:10:00.220127 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:00.470966 systemd-networkd[1401]: cilium_host: Link UP
May 14 05:10:00.471102 systemd-networkd[1401]: cilium_net: Link UP
May 14 05:10:00.471225 systemd-networkd[1401]: cilium_net: Gained carrier
May 14 05:10:00.471332 systemd-networkd[1401]: cilium_host: Gained carrier
May 14 05:10:00.547617 systemd-networkd[1401]: cilium_vxlan: Link UP
May 14 05:10:00.547622 systemd-networkd[1401]: cilium_vxlan: Gained carrier
May 14 05:10:00.624754 systemd-networkd[1401]: cilium_host: Gained IPv6LL
May 14 05:10:00.856527 kernel: NET: Registered PF_ALG protocol family
May 14 05:10:00.880696 systemd-networkd[1401]: cilium_net: Gained IPv6LL
May 14 05:10:01.420244 systemd-networkd[1401]: lxc_health: Link UP
May 14 05:10:01.421649 systemd-networkd[1401]: lxc_health: Gained carrier
May 14 05:10:02.003786 systemd-networkd[1401]: lxcd7d13c2108ce: Link UP
May 14 05:10:02.011545 kernel: eth0: renamed from tmp78234
May 14 05:10:02.012577 systemd-networkd[1401]: lxc9ff72f1ebf60: Link UP
May 14 05:10:02.014534 kernel: eth0: renamed from tmp004c8
May 14 05:10:02.014878 systemd-networkd[1401]: lxc9ff72f1ebf60: Gained carrier
May 14 05:10:02.015356 systemd-networkd[1401]: lxcd7d13c2108ce: Gained carrier
May 14 05:10:02.208932 systemd-networkd[1401]: cilium_vxlan: Gained IPv6LL
May 14 05:10:02.931353 kubelet[2593]: E0514 05:10:02.931299 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:03.040664 systemd-networkd[1401]: lxc_health: Gained IPv6LL
May 14 05:10:03.225617 kubelet[2593]: E0514 05:10:03.225530 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:03.680678 systemd-networkd[1401]: lxcd7d13c2108ce: Gained IPv6LL
May 14 05:10:03.808669 systemd-networkd[1401]: lxc9ff72f1ebf60: Gained IPv6LL
May 14 05:10:04.647802 systemd[1]: Started sshd@7-10.0.0.132:22-10.0.0.1:49242.service - OpenSSH per-connection server daemon (10.0.0.1:49242).
May 14 05:10:04.702222 sshd[3771]: Accepted publickey for core from 10.0.0.1 port 49242 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ
May 14 05:10:04.703601 sshd-session[3771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:10:04.710923 systemd-logind[1475]: New session 8 of user core.
May 14 05:10:04.721666 systemd[1]: Started session-8.scope - Session 8 of User core.
May 14 05:10:04.859578 sshd[3773]: Connection closed by 10.0.0.1 port 49242
May 14 05:10:04.859914 sshd-session[3771]: pam_unix(sshd:session): session closed for user core
May 14 05:10:04.863354 systemd[1]: sshd@7-10.0.0.132:22-10.0.0.1:49242.service: Deactivated successfully.
May 14 05:10:04.866964 systemd[1]: session-8.scope: Deactivated successfully.
May 14 05:10:04.867710 systemd-logind[1475]: Session 8 logged out. Waiting for processes to exit.
May 14 05:10:04.869080 systemd-logind[1475]: Removed session 8.
May 14 05:10:05.644229 containerd[1494]: time="2025-05-14T05:10:05.644104496Z" level=info msg="connecting to shim 782346c3f2588f50d4cccbd0c35891dfc97f00003fb150fc314fff0a899387d7" address="unix:///run/containerd/s/5b712c3d7eacfc9a11f8162206604adc0d68e27a25e2e11ba4cf7f843aecd9fb" namespace=k8s.io protocol=ttrpc version=3
May 14 05:10:05.645859 containerd[1494]: time="2025-05-14T05:10:05.645653590Z" level=info msg="connecting to shim 004c847ffae2037e7cdebd5b27eeafb3bf04b94779b599736f8e8f31a166386b" address="unix:///run/containerd/s/2008e0e2677d89024a2a36f517004777e4959df27bf6d8e0f7dd5ab0aae67e89" namespace=k8s.io protocol=ttrpc version=3
May 14 05:10:05.674673 systemd[1]: Started cri-containerd-004c847ffae2037e7cdebd5b27eeafb3bf04b94779b599736f8e8f31a166386b.scope - libcontainer container 004c847ffae2037e7cdebd5b27eeafb3bf04b94779b599736f8e8f31a166386b.
May 14 05:10:05.675895 systemd[1]: Started cri-containerd-782346c3f2588f50d4cccbd0c35891dfc97f00003fb150fc314fff0a899387d7.scope - libcontainer container 782346c3f2588f50d4cccbd0c35891dfc97f00003fb150fc314fff0a899387d7.
May 14 05:10:05.687803 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 05:10:05.689560 systemd-resolved[1354]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 14 05:10:05.710993 containerd[1494]: time="2025-05-14T05:10:05.710932019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-hcgrw,Uid:005844b3-128b-4e42-9826-20ce3b852644,Namespace:kube-system,Attempt:0,} returns sandbox id \"782346c3f2588f50d4cccbd0c35891dfc97f00003fb150fc314fff0a899387d7\""
May 14 05:10:05.711922 kubelet[2593]: E0514 05:10:05.711899 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:05.715438 containerd[1494]: time="2025-05-14T05:10:05.715398700Z" level=info msg="CreateContainer within sandbox \"782346c3f2588f50d4cccbd0c35891dfc97f00003fb150fc314fff0a899387d7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 05:10:05.718802 containerd[1494]: time="2025-05-14T05:10:05.718710050Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-6f6b679f8f-kb64l,Uid:08c21bcb-58ba-4436-87e8-45e99de588a1,Namespace:kube-system,Attempt:0,} returns sandbox id \"004c847ffae2037e7cdebd5b27eeafb3bf04b94779b599736f8e8f31a166386b\""
May 14 05:10:05.719596 kubelet[2593]: E0514 05:10:05.719567 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:05.721727 containerd[1494]: time="2025-05-14T05:10:05.721695237Z" level=info msg="CreateContainer within sandbox \"004c847ffae2037e7cdebd5b27eeafb3bf04b94779b599736f8e8f31a166386b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 14 05:10:05.725442 containerd[1494]: time="2025-05-14T05:10:05.725383870Z" level=info msg="Container 58ec78c4b945434ca8f803394486fad56937c49a93b9816e0845f92e4f6109d5: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:05.742974 containerd[1494]: time="2025-05-14T05:10:05.742901468Z" level=info msg="CreateContainer within sandbox \"782346c3f2588f50d4cccbd0c35891dfc97f00003fb150fc314fff0a899387d7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"58ec78c4b945434ca8f803394486fad56937c49a93b9816e0845f92e4f6109d5\""
May 14 05:10:05.743129 containerd[1494]: time="2025-05-14T05:10:05.743082910Z" level=info msg="Container f60641b983f8618aa6db05d2800cde6287e681ca33414d9cc4c9df4c2a24a441: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:05.744589 containerd[1494]: time="2025-05-14T05:10:05.743711876Z" level=info msg="StartContainer for \"58ec78c4b945434ca8f803394486fad56937c49a93b9816e0845f92e4f6109d5\""
May 14 05:10:05.746577 containerd[1494]: time="2025-05-14T05:10:05.746546101Z" level=info msg="connecting to shim 58ec78c4b945434ca8f803394486fad56937c49a93b9816e0845f92e4f6109d5" address="unix:///run/containerd/s/5b712c3d7eacfc9a11f8162206604adc0d68e27a25e2e11ba4cf7f843aecd9fb" protocol=ttrpc version=3
May 14 05:10:05.750017 containerd[1494]: time="2025-05-14T05:10:05.749974372Z" level=info msg="CreateContainer within sandbox \"004c847ffae2037e7cdebd5b27eeafb3bf04b94779b599736f8e8f31a166386b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f60641b983f8618aa6db05d2800cde6287e681ca33414d9cc4c9df4c2a24a441\""
May 14 05:10:05.750905 containerd[1494]: time="2025-05-14T05:10:05.750869020Z" level=info msg="StartContainer for \"f60641b983f8618aa6db05d2800cde6287e681ca33414d9cc4c9df4c2a24a441\""
May 14 05:10:05.751838 containerd[1494]: time="2025-05-14T05:10:05.751805909Z" level=info msg="connecting to shim f60641b983f8618aa6db05d2800cde6287e681ca33414d9cc4c9df4c2a24a441" address="unix:///run/containerd/s/2008e0e2677d89024a2a36f517004777e4959df27bf6d8e0f7dd5ab0aae67e89" protocol=ttrpc version=3
May 14 05:10:05.769716 systemd[1]: Started cri-containerd-58ec78c4b945434ca8f803394486fad56937c49a93b9816e0845f92e4f6109d5.scope - libcontainer container 58ec78c4b945434ca8f803394486fad56937c49a93b9816e0845f92e4f6109d5.
May 14 05:10:05.774120 systemd[1]: Started cri-containerd-f60641b983f8618aa6db05d2800cde6287e681ca33414d9cc4c9df4c2a24a441.scope - libcontainer container f60641b983f8618aa6db05d2800cde6287e681ca33414d9cc4c9df4c2a24a441.
May 14 05:10:05.812024 containerd[1494]: time="2025-05-14T05:10:05.811985213Z" level=info msg="StartContainer for \"58ec78c4b945434ca8f803394486fad56937c49a93b9816e0845f92e4f6109d5\" returns successfully"
May 14 05:10:05.825166 containerd[1494]: time="2025-05-14T05:10:05.825112651Z" level=info msg="StartContainer for \"f60641b983f8618aa6db05d2800cde6287e681ca33414d9cc4c9df4c2a24a441\" returns successfully"
May 14 05:10:06.235744 kubelet[2593]: E0514 05:10:06.235695 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:06.236161 kubelet[2593]: E0514 05:10:06.235816 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:06.261515 kubelet[2593]: I0514 05:10:06.260382 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-kb64l" podStartSLOduration=22.260364653 podStartE2EDuration="22.260364653s" podCreationTimestamp="2025-05-14 05:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:10:06.249530319 +0000 UTC m=+29.223294574" watchObservedRunningTime="2025-05-14 05:10:06.260364653 +0000 UTC m=+29.234128828"
May 14 05:10:06.272434 kubelet[2593]: I0514 05:10:06.272016 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-6f6b679f8f-hcgrw" podStartSLOduration=22.271998554 podStartE2EDuration="22.271998554s" podCreationTimestamp="2025-05-14 05:09:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:10:06.271913033 +0000 UTC m=+29.245677288" watchObservedRunningTime="2025-05-14 05:10:06.271998554 +0000 UTC m=+29.245762769"
May 14 05:10:07.238189 kubelet[2593]: E0514 05:10:07.238085 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:07.240542 kubelet[2593]: E0514 05:10:07.238033 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:08.240305 kubelet[2593]: E0514 05:10:08.240249 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:08.246713 kubelet[2593]: E0514 05:10:08.246673 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:09.876025 systemd[1]: Started sshd@8-10.0.0.132:22-10.0.0.1:49244.service - OpenSSH per-connection server daemon (10.0.0.1:49244).
May 14 05:10:09.931604 sshd[3963]: Accepted publickey for core from 10.0.0.1 port 49244 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:09.932946 sshd-session[3963]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:09.937379 systemd-logind[1475]: New session 9 of user core. May 14 05:10:09.952710 systemd[1]: Started session-9.scope - Session 9 of User core. May 14 05:10:10.070249 sshd[3965]: Connection closed by 10.0.0.1 port 49244 May 14 05:10:10.070634 sshd-session[3963]: pam_unix(sshd:session): session closed for user core May 14 05:10:10.074224 systemd[1]: sshd@8-10.0.0.132:22-10.0.0.1:49244.service: Deactivated successfully. May 14 05:10:10.076037 systemd[1]: session-9.scope: Deactivated successfully. May 14 05:10:10.077232 systemd-logind[1475]: Session 9 logged out. Waiting for processes to exit. May 14 05:10:10.078457 systemd-logind[1475]: Removed session 9. May 14 05:10:15.086073 systemd[1]: Started sshd@9-10.0.0.132:22-10.0.0.1:54734.service - OpenSSH per-connection server daemon (10.0.0.1:54734). May 14 05:10:15.131782 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 54734 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:15.133362 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:15.138157 systemd-logind[1475]: New session 10 of user core. May 14 05:10:15.147990 systemd[1]: Started session-10.scope - Session 10 of User core. May 14 05:10:15.265003 sshd[3983]: Connection closed by 10.0.0.1 port 54734 May 14 05:10:15.265362 sshd-session[3981]: pam_unix(sshd:session): session closed for user core May 14 05:10:15.268989 systemd[1]: sshd@9-10.0.0.132:22-10.0.0.1:54734.service: Deactivated successfully. May 14 05:10:15.270771 systemd[1]: session-10.scope: Deactivated successfully. May 14 05:10:15.273434 systemd-logind[1475]: Session 10 logged out. Waiting for processes to exit. 
May 14 05:10:15.275157 systemd-logind[1475]: Removed session 10. May 14 05:10:20.278278 systemd[1]: Started sshd@10-10.0.0.132:22-10.0.0.1:54750.service - OpenSSH per-connection server daemon (10.0.0.1:54750). May 14 05:10:20.322477 sshd[4001]: Accepted publickey for core from 10.0.0.1 port 54750 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:20.323845 sshd-session[4001]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:20.327866 systemd-logind[1475]: New session 11 of user core. May 14 05:10:20.338711 systemd[1]: Started session-11.scope - Session 11 of User core. May 14 05:10:20.450973 sshd[4003]: Connection closed by 10.0.0.1 port 54750 May 14 05:10:20.450348 sshd-session[4001]: pam_unix(sshd:session): session closed for user core May 14 05:10:20.459919 systemd[1]: sshd@10-10.0.0.132:22-10.0.0.1:54750.service: Deactivated successfully. May 14 05:10:20.461668 systemd[1]: session-11.scope: Deactivated successfully. May 14 05:10:20.463218 systemd-logind[1475]: Session 11 logged out. Waiting for processes to exit. May 14 05:10:20.466381 systemd[1]: Started sshd@11-10.0.0.132:22-10.0.0.1:54762.service - OpenSSH per-connection server daemon (10.0.0.1:54762). May 14 05:10:20.467276 systemd-logind[1475]: Removed session 11. May 14 05:10:20.527163 sshd[4017]: Accepted publickey for core from 10.0.0.1 port 54762 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:20.528484 sshd-session[4017]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:20.534025 systemd-logind[1475]: New session 12 of user core. May 14 05:10:20.541759 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 14 05:10:20.696943 sshd[4019]: Connection closed by 10.0.0.1 port 54762 May 14 05:10:20.697387 sshd-session[4017]: pam_unix(sshd:session): session closed for user core May 14 05:10:20.714510 systemd[1]: sshd@11-10.0.0.132:22-10.0.0.1:54762.service: Deactivated successfully. May 14 05:10:20.718789 systemd[1]: session-12.scope: Deactivated successfully. May 14 05:10:20.720321 systemd-logind[1475]: Session 12 logged out. Waiting for processes to exit. May 14 05:10:20.725830 systemd[1]: Started sshd@12-10.0.0.132:22-10.0.0.1:54776.service - OpenSSH per-connection server daemon (10.0.0.1:54776). May 14 05:10:20.726902 systemd-logind[1475]: Removed session 12. May 14 05:10:20.784471 sshd[4031]: Accepted publickey for core from 10.0.0.1 port 54776 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:20.785828 sshd-session[4031]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:20.791290 systemd-logind[1475]: New session 13 of user core. May 14 05:10:20.801701 systemd[1]: Started session-13.scope - Session 13 of User core. May 14 05:10:20.919645 sshd[4033]: Connection closed by 10.0.0.1 port 54776 May 14 05:10:20.920120 sshd-session[4031]: pam_unix(sshd:session): session closed for user core May 14 05:10:20.923771 systemd[1]: sshd@12-10.0.0.132:22-10.0.0.1:54776.service: Deactivated successfully. May 14 05:10:20.925614 systemd[1]: session-13.scope: Deactivated successfully. May 14 05:10:20.926315 systemd-logind[1475]: Session 13 logged out. Waiting for processes to exit. May 14 05:10:20.927874 systemd-logind[1475]: Removed session 13. May 14 05:10:25.936157 systemd[1]: Started sshd@13-10.0.0.132:22-10.0.0.1:41338.service - OpenSSH per-connection server daemon (10.0.0.1:41338). 
May 14 05:10:26.000358 sshd[4047]: Accepted publickey for core from 10.0.0.1 port 41338 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:26.001744 sshd-session[4047]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:26.006299 systemd-logind[1475]: New session 14 of user core. May 14 05:10:26.016762 systemd[1]: Started session-14.scope - Session 14 of User core. May 14 05:10:26.131140 sshd[4049]: Connection closed by 10.0.0.1 port 41338 May 14 05:10:26.131478 sshd-session[4047]: pam_unix(sshd:session): session closed for user core May 14 05:10:26.134387 systemd[1]: sshd@13-10.0.0.132:22-10.0.0.1:41338.service: Deactivated successfully. May 14 05:10:26.136239 systemd[1]: session-14.scope: Deactivated successfully. May 14 05:10:26.140191 systemd-logind[1475]: Session 14 logged out. Waiting for processes to exit. May 14 05:10:26.142324 systemd-logind[1475]: Removed session 14. May 14 05:10:31.142967 systemd[1]: Started sshd@14-10.0.0.132:22-10.0.0.1:41346.service - OpenSSH per-connection server daemon (10.0.0.1:41346). May 14 05:10:31.202436 sshd[4063]: Accepted publickey for core from 10.0.0.1 port 41346 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:31.203796 sshd-session[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:31.208394 systemd-logind[1475]: New session 15 of user core. May 14 05:10:31.219689 systemd[1]: Started session-15.scope - Session 15 of User core. May 14 05:10:31.336132 sshd[4065]: Connection closed by 10.0.0.1 port 41346 May 14 05:10:31.336693 sshd-session[4063]: pam_unix(sshd:session): session closed for user core May 14 05:10:31.348925 systemd[1]: sshd@14-10.0.0.132:22-10.0.0.1:41346.service: Deactivated successfully. May 14 05:10:31.350754 systemd[1]: session-15.scope: Deactivated successfully. May 14 05:10:31.351415 systemd-logind[1475]: Session 15 logged out. Waiting for processes to exit. 
May 14 05:10:31.354761 systemd[1]: Started sshd@15-10.0.0.132:22-10.0.0.1:41360.service - OpenSSH per-connection server daemon (10.0.0.1:41360). May 14 05:10:31.355443 systemd-logind[1475]: Removed session 15. May 14 05:10:31.414880 sshd[4079]: Accepted publickey for core from 10.0.0.1 port 41360 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:31.416427 sshd-session[4079]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:31.420796 systemd-logind[1475]: New session 16 of user core. May 14 05:10:31.430676 systemd[1]: Started session-16.scope - Session 16 of User core. May 14 05:10:31.671622 sshd[4081]: Connection closed by 10.0.0.1 port 41360 May 14 05:10:31.672073 sshd-session[4079]: pam_unix(sshd:session): session closed for user core May 14 05:10:31.683918 systemd[1]: sshd@15-10.0.0.132:22-10.0.0.1:41360.service: Deactivated successfully. May 14 05:10:31.685685 systemd[1]: session-16.scope: Deactivated successfully. May 14 05:10:31.686350 systemd-logind[1475]: Session 16 logged out. Waiting for processes to exit. May 14 05:10:31.689224 systemd[1]: Started sshd@16-10.0.0.132:22-10.0.0.1:41362.service - OpenSSH per-connection server daemon (10.0.0.1:41362). May 14 05:10:31.689744 systemd-logind[1475]: Removed session 16. May 14 05:10:31.747973 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 41362 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:31.749319 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:31.753328 systemd-logind[1475]: New session 17 of user core. May 14 05:10:31.767697 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 14 05:10:33.043029 sshd[4094]: Connection closed by 10.0.0.1 port 41362 May 14 05:10:33.043152 sshd-session[4092]: pam_unix(sshd:session): session closed for user core May 14 05:10:33.059119 systemd[1]: sshd@16-10.0.0.132:22-10.0.0.1:41362.service: Deactivated successfully. May 14 05:10:33.061055 systemd[1]: session-17.scope: Deactivated successfully. May 14 05:10:33.063311 systemd-logind[1475]: Session 17 logged out. Waiting for processes to exit. May 14 05:10:33.066729 systemd[1]: Started sshd@17-10.0.0.132:22-10.0.0.1:43016.service - OpenSSH per-connection server daemon (10.0.0.1:43016). May 14 05:10:33.068566 systemd-logind[1475]: Removed session 17. May 14 05:10:33.123813 sshd[4114]: Accepted publickey for core from 10.0.0.1 port 43016 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:33.125242 sshd-session[4114]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:33.129689 systemd-logind[1475]: New session 18 of user core. May 14 05:10:33.144737 systemd[1]: Started session-18.scope - Session 18 of User core. May 14 05:10:33.379657 sshd[4117]: Connection closed by 10.0.0.1 port 43016 May 14 05:10:33.379760 sshd-session[4114]: pam_unix(sshd:session): session closed for user core May 14 05:10:33.390978 systemd[1]: sshd@17-10.0.0.132:22-10.0.0.1:43016.service: Deactivated successfully. May 14 05:10:33.393314 systemd[1]: session-18.scope: Deactivated successfully. May 14 05:10:33.394728 systemd-logind[1475]: Session 18 logged out. Waiting for processes to exit. May 14 05:10:33.398095 systemd[1]: Started sshd@18-10.0.0.132:22-10.0.0.1:43028.service - OpenSSH per-connection server daemon (10.0.0.1:43028). May 14 05:10:33.399843 systemd-logind[1475]: Removed session 18. 
May 14 05:10:33.452132 sshd[4128]: Accepted publickey for core from 10.0.0.1 port 43028 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:33.454296 sshd-session[4128]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:33.458951 systemd-logind[1475]: New session 19 of user core. May 14 05:10:33.466682 systemd[1]: Started session-19.scope - Session 19 of User core. May 14 05:10:33.580242 sshd[4130]: Connection closed by 10.0.0.1 port 43028 May 14 05:10:33.580726 sshd-session[4128]: pam_unix(sshd:session): session closed for user core May 14 05:10:33.584331 systemd[1]: sshd@18-10.0.0.132:22-10.0.0.1:43028.service: Deactivated successfully. May 14 05:10:33.587210 systemd[1]: session-19.scope: Deactivated successfully. May 14 05:10:33.588067 systemd-logind[1475]: Session 19 logged out. Waiting for processes to exit. May 14 05:10:33.589227 systemd-logind[1475]: Removed session 19. May 14 05:10:38.591701 systemd[1]: Started sshd@19-10.0.0.132:22-10.0.0.1:43030.service - OpenSSH per-connection server daemon (10.0.0.1:43030). May 14 05:10:38.649304 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 43030 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:38.650985 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:38.660681 systemd-logind[1475]: New session 20 of user core. May 14 05:10:38.678679 systemd[1]: Started session-20.scope - Session 20 of User core. May 14 05:10:38.786735 sshd[4152]: Connection closed by 10.0.0.1 port 43030 May 14 05:10:38.787059 sshd-session[4150]: pam_unix(sshd:session): session closed for user core May 14 05:10:38.790275 systemd[1]: sshd@19-10.0.0.132:22-10.0.0.1:43030.service: Deactivated successfully. May 14 05:10:38.791861 systemd[1]: session-20.scope: Deactivated successfully. May 14 05:10:38.792575 systemd-logind[1475]: Session 20 logged out. Waiting for processes to exit. 
May 14 05:10:38.793534 systemd-logind[1475]: Removed session 20. May 14 05:10:43.798970 systemd[1]: Started sshd@20-10.0.0.132:22-10.0.0.1:54834.service - OpenSSH per-connection server daemon (10.0.0.1:54834). May 14 05:10:43.860317 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 54834 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:43.861734 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:43.865780 systemd-logind[1475]: New session 21 of user core. May 14 05:10:43.880693 systemd[1]: Started session-21.scope - Session 21 of User core. May 14 05:10:43.998281 sshd[4168]: Connection closed by 10.0.0.1 port 54834 May 14 05:10:43.998725 sshd-session[4166]: pam_unix(sshd:session): session closed for user core May 14 05:10:44.001523 systemd[1]: sshd@20-10.0.0.132:22-10.0.0.1:54834.service: Deactivated successfully. May 14 05:10:44.003802 systemd[1]: session-21.scope: Deactivated successfully. May 14 05:10:44.004726 systemd-logind[1475]: Session 21 logged out. Waiting for processes to exit. May 14 05:10:44.005831 systemd-logind[1475]: Removed session 21. May 14 05:10:49.025203 systemd[1]: Started sshd@21-10.0.0.132:22-10.0.0.1:54844.service - OpenSSH per-connection server daemon (10.0.0.1:54844). May 14 05:10:49.069467 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 54844 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:49.070777 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:49.074939 systemd-logind[1475]: New session 22 of user core. May 14 05:10:49.085685 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 14 05:10:49.107471 kubelet[2593]: E0514 05:10:49.107431 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:10:49.200587 sshd[4187]: Connection closed by 10.0.0.1 port 54844 May 14 05:10:49.200347 sshd-session[4185]: pam_unix(sshd:session): session closed for user core May 14 05:10:49.213810 systemd[1]: sshd@21-10.0.0.132:22-10.0.0.1:54844.service: Deactivated successfully. May 14 05:10:49.216197 systemd[1]: session-22.scope: Deactivated successfully. May 14 05:10:49.216956 systemd-logind[1475]: Session 22 logged out. Waiting for processes to exit. May 14 05:10:49.220252 systemd[1]: Started sshd@22-10.0.0.132:22-10.0.0.1:54850.service - OpenSSH per-connection server daemon (10.0.0.1:54850). May 14 05:10:49.221097 systemd-logind[1475]: Removed session 22. May 14 05:10:49.277870 sshd[4200]: Accepted publickey for core from 10.0.0.1 port 54850 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:49.279087 sshd-session[4200]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:49.283335 systemd-logind[1475]: New session 23 of user core. May 14 05:10:49.289725 systemd[1]: Started session-23.scope - Session 23 of User core. 
May 14 05:10:51.104017 kubelet[2593]: E0514 05:10:51.103979 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 14 05:10:51.302460 containerd[1494]: time="2025-05-14T05:10:51.302412539Z" level=info msg="StopContainer for \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" with timeout 30 (s)" May 14 05:10:51.303355 containerd[1494]: time="2025-05-14T05:10:51.303016629Z" level=info msg="Stop container \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" with signal terminated" May 14 05:10:51.316673 systemd[1]: cri-containerd-a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688.scope: Deactivated successfully. May 14 05:10:51.319101 containerd[1494]: time="2025-05-14T05:10:51.319023035Z" level=info msg="received exit event container_id:\"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" id:\"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" pid:3221 exited_at:{seconds:1747199451 nanos:318681750}" May 14 05:10:51.326264 containerd[1494]: time="2025-05-14T05:10:51.319722926Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" id:\"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" pid:3221 exited_at:{seconds:1747199451 nanos:318681750}" May 14 05:10:51.337283 containerd[1494]: time="2025-05-14T05:10:51.337219595Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 14 05:10:51.342549 containerd[1494]: time="2025-05-14T05:10:51.342457236Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" id:\"4260572f7c94bb7253496f4fafff03728f846b56113bab9a6c0290df6cfab73c\" pid:4230 exited_at:{seconds:1747199451 nanos:341651224}" May 14 05:10:51.343318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688-rootfs.mount: Deactivated successfully. May 14 05:10:51.345674 containerd[1494]: time="2025-05-14T05:10:51.345635445Z" level=info msg="StopContainer for \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" with timeout 2 (s)" May 14 05:10:51.346022 containerd[1494]: time="2025-05-14T05:10:51.345993651Z" level=info msg="Stop container \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" with signal terminated" May 14 05:10:51.352666 systemd-networkd[1401]: lxc_health: Link DOWN May 14 05:10:51.352676 systemd-networkd[1401]: lxc_health: Lost carrier May 14 05:10:51.359843 containerd[1494]: time="2025-05-14T05:10:51.359738182Z" level=info msg="StopContainer for \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" returns successfully" May 14 05:10:51.363403 containerd[1494]: time="2025-05-14T05:10:51.363237076Z" level=info msg="StopPodSandbox for \"db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f\"" May 14 05:10:51.368120 systemd[1]: cri-containerd-71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82.scope: Deactivated successfully. May 14 05:10:51.368440 systemd[1]: cri-containerd-71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82.scope: Consumed 6.536s CPU time, 121.3M memory peak, 156K read from disk, 12.9M written to disk. 
May 14 05:10:51.370463 containerd[1494]: time="2025-05-14T05:10:51.370403907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" id:\"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" pid:3255 exited_at:{seconds:1747199451 nanos:370016021}" May 14 05:10:51.370463 containerd[1494]: time="2025-05-14T05:10:51.370404707Z" level=info msg="received exit event container_id:\"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" id:\"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" pid:3255 exited_at:{seconds:1747199451 nanos:370016021}" May 14 05:10:51.372037 containerd[1494]: time="2025-05-14T05:10:51.371811088Z" level=info msg="Container to stop \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:51.378482 systemd[1]: cri-containerd-db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f.scope: Deactivated successfully. May 14 05:10:51.385279 containerd[1494]: time="2025-05-14T05:10:51.385220455Z" level=info msg="TaskExit event in podsandbox handler container_id:\"db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f\" id:\"db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f\" pid:2864 exit_status:137 exited_at:{seconds:1747199451 nanos:384727167}" May 14 05:10:51.389103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82-rootfs.mount: Deactivated successfully. 
May 14 05:10:51.399802 containerd[1494]: time="2025-05-14T05:10:51.399753639Z" level=info msg="StopContainer for \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" returns successfully" May 14 05:10:51.400594 containerd[1494]: time="2025-05-14T05:10:51.400561531Z" level=info msg="StopPodSandbox for \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\"" May 14 05:10:51.400699 containerd[1494]: time="2025-05-14T05:10:51.400634012Z" level=info msg="Container to stop \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:51.400699 containerd[1494]: time="2025-05-14T05:10:51.400646493Z" level=info msg="Container to stop \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:51.400699 containerd[1494]: time="2025-05-14T05:10:51.400656413Z" level=info msg="Container to stop \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:51.400699 containerd[1494]: time="2025-05-14T05:10:51.400665493Z" level=info msg="Container to stop \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:51.400699 containerd[1494]: time="2025-05-14T05:10:51.400672933Z" level=info msg="Container to stop \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 14 05:10:51.405890 systemd[1]: cri-containerd-75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e.scope: Deactivated successfully. 
May 14 05:10:51.412264 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f-rootfs.mount: Deactivated successfully. May 14 05:10:51.414629 containerd[1494]: time="2025-05-14T05:10:51.414594828Z" level=info msg="shim disconnected" id=db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f namespace=k8s.io May 14 05:10:51.421868 containerd[1494]: time="2025-05-14T05:10:51.414957553Z" level=warning msg="cleaning up after shim disconnected" id=db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f namespace=k8s.io May 14 05:10:51.421868 containerd[1494]: time="2025-05-14T05:10:51.421699457Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 05:10:51.431618 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e-rootfs.mount: Deactivated successfully. May 14 05:10:51.435799 containerd[1494]: time="2025-05-14T05:10:51.435763554Z" level=info msg="shim disconnected" id=75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e namespace=k8s.io May 14 05:10:51.435974 containerd[1494]: time="2025-05-14T05:10:51.435796914Z" level=warning msg="cleaning up after shim disconnected" id=75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e namespace=k8s.io May 14 05:10:51.435974 containerd[1494]: time="2025-05-14T05:10:51.435826475Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 14 05:10:51.436881 containerd[1494]: time="2025-05-14T05:10:51.436832770Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" id:\"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" pid:2747 exit_status:137 exited_at:{seconds:1747199451 nanos:411708663}" May 14 05:10:51.437011 containerd[1494]: time="2025-05-14T05:10:51.436982812Z" level=info msg="received exit event 
sandbox_id:\"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" exit_status:137 exited_at:{seconds:1747199451 nanos:411708663}" May 14 05:10:51.438193 containerd[1494]: time="2025-05-14T05:10:51.438151350Z" level=info msg="received exit event sandbox_id:\"db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f\" exit_status:137 exited_at:{seconds:1747199451 nanos:384727167}" May 14 05:10:51.439007 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f-shm.mount: Deactivated successfully. May 14 05:10:51.439484 containerd[1494]: time="2025-05-14T05:10:51.438189871Z" level=info msg="TearDown network for sandbox \"db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f\" successfully" May 14 05:10:51.439484 containerd[1494]: time="2025-05-14T05:10:51.439437970Z" level=info msg="StopPodSandbox for \"db854e34975148b1b08d2e978dc713052c00a4ba3770f90a6d6f5ed15edc754f\" returns successfully" May 14 05:10:51.440219 containerd[1494]: time="2025-05-14T05:10:51.439987379Z" level=info msg="TearDown network for sandbox \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" successfully" May 14 05:10:51.440219 containerd[1494]: time="2025-05-14T05:10:51.440013339Z" level=info msg="StopPodSandbox for \"75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e\" returns successfully" May 14 05:10:51.632293 kubelet[2593]: I0514 05:10:51.632162 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cni-path\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632293 kubelet[2593]: I0514 05:10:51.632222 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-run\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632293 kubelet[2593]: I0514 05:10:51.632248 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/674f3bcf-3155-4a84-b9e2-0081a5851991-clustermesh-secrets\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632293 kubelet[2593]: I0514 05:10:51.632269 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8lnph\" (UniqueName: \"kubernetes.io/projected/ce879c3c-c521-4cd7-95c2-68fdcfc90412-kube-api-access-8lnph\") pod \"ce879c3c-c521-4cd7-95c2-68fdcfc90412\" (UID: \"ce879c3c-c521-4cd7-95c2-68fdcfc90412\") " May 14 05:10:51.632293 kubelet[2593]: I0514 05:10:51.632285 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-xtables-lock\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632568 kubelet[2593]: I0514 05:10:51.632302 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce879c3c-c521-4cd7-95c2-68fdcfc90412-cilium-config-path\") pod \"ce879c3c-c521-4cd7-95c2-68fdcfc90412\" (UID: \"ce879c3c-c521-4cd7-95c2-68fdcfc90412\") " May 14 05:10:51.632568 kubelet[2593]: I0514 05:10:51.632321 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-config-path\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632568 kubelet[2593]: 
I0514 05:10:51.632335 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-bpf-maps\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632568 kubelet[2593]: I0514 05:10:51.632358 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-etc-cni-netd\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632568 kubelet[2593]: I0514 05:10:51.632375 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-host-proc-sys-kernel\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632568 kubelet[2593]: I0514 05:10:51.632488 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-hostproc\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632700 kubelet[2593]: I0514 05:10:51.632541 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/674f3bcf-3155-4a84-b9e2-0081a5851991-hubble-tls\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632700 kubelet[2593]: I0514 05:10:51.632579 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-lib-modules\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: 
\"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632700 kubelet[2593]: I0514 05:10:51.632601 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-host-proc-sys-net\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632700 kubelet[2593]: I0514 05:10:51.632621 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-898nr\" (UniqueName: \"kubernetes.io/projected/674f3bcf-3155-4a84-b9e2-0081a5851991-kube-api-access-898nr\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.632700 kubelet[2593]: I0514 05:10:51.632661 2593 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-cgroup\") pod \"674f3bcf-3155-4a84-b9e2-0081a5851991\" (UID: \"674f3bcf-3155-4a84-b9e2-0081a5851991\") " May 14 05:10:51.637249 kubelet[2593]: I0514 05:10:51.637212 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.637325 kubelet[2593]: I0514 05:10:51.637272 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-hostproc" (OuterVolumeSpecName: "hostproc") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.637987 kubelet[2593]: I0514 05:10:51.637956 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cni-path" (OuterVolumeSpecName: "cni-path") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.638154 kubelet[2593]: I0514 05:10:51.638136 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.638385 kubelet[2593]: I0514 05:10:51.638243 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.640285 kubelet[2593]: I0514 05:10:51.640243 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.640350 kubelet[2593]: I0514 05:10:51.640306 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.640350 kubelet[2593]: I0514 05:10:51.640322 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.642137 kubelet[2593]: I0514 05:10:51.641892 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 05:10:51.642629 kubelet[2593]: I0514 05:10:51.642601 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/674f3bcf-3155-4a84-b9e2-0081a5851991-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" May 14 05:10:51.642707 kubelet[2593]: I0514 05:10:51.642658 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.642707 kubelet[2593]: I0514 05:10:51.642665 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 14 05:10:51.642707 kubelet[2593]: I0514 05:10:51.642705 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674f3bcf-3155-4a84-b9e2-0081a5851991-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 05:10:51.643590 kubelet[2593]: I0514 05:10:51.643561 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce879c3c-c521-4cd7-95c2-68fdcfc90412-kube-api-access-8lnph" (OuterVolumeSpecName: "kube-api-access-8lnph") pod "ce879c3c-c521-4cd7-95c2-68fdcfc90412" (UID: "ce879c3c-c521-4cd7-95c2-68fdcfc90412"). InnerVolumeSpecName "kube-api-access-8lnph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 05:10:51.644238 kubelet[2593]: I0514 05:10:51.644215 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce879c3c-c521-4cd7-95c2-68fdcfc90412-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "ce879c3c-c521-4cd7-95c2-68fdcfc90412" (UID: "ce879c3c-c521-4cd7-95c2-68fdcfc90412"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 14 05:10:51.644484 kubelet[2593]: I0514 05:10:51.644441 2593 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/674f3bcf-3155-4a84-b9e2-0081a5851991-kube-api-access-898nr" (OuterVolumeSpecName: "kube-api-access-898nr") pod "674f3bcf-3155-4a84-b9e2-0081a5851991" (UID: "674f3bcf-3155-4a84-b9e2-0081a5851991"). InnerVolumeSpecName "kube-api-access-898nr". PluginName "kubernetes.io/projected", VolumeGidValue "" May 14 05:10:51.733634 kubelet[2593]: I0514 05:10:51.733601 2593 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733728 kubelet[2593]: I0514 05:10:51.733642 2593 reconciler_common.go:288] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733728 kubelet[2593]: I0514 05:10:51.733653 2593 reconciler_common.go:288] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733728 kubelet[2593]: I0514 05:10:51.733661 2593 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733728 kubelet[2593]: I0514 05:10:51.733669 2593 reconciler_common.go:288] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/674f3bcf-3155-4a84-b9e2-0081a5851991-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733728 kubelet[2593]: I0514 05:10:51.733676 2593 reconciler_common.go:288] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-hostproc\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733728 kubelet[2593]: I0514 05:10:51.733683 2593 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-lib-modules\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733728 kubelet[2593]: I0514 05:10:51.733690 2593 reconciler_common.go:288] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733728 kubelet[2593]: I0514 05:10:51.733698 2593 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-898nr\" (UniqueName: \"kubernetes.io/projected/674f3bcf-3155-4a84-b9e2-0081a5851991-kube-api-access-898nr\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733892 kubelet[2593]: I0514 05:10:51.733705 2593 reconciler_common.go:288] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733892 kubelet[2593]: I0514 05:10:51.733719 2593 reconciler_common.go:288] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cni-path\") on node \"localhost\" DevicePath 
\"\"" May 14 05:10:51.733892 kubelet[2593]: I0514 05:10:51.733726 2593 reconciler_common.go:288] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-cilium-run\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733892 kubelet[2593]: I0514 05:10:51.733734 2593 reconciler_common.go:288] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/674f3bcf-3155-4a84-b9e2-0081a5851991-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733892 kubelet[2593]: I0514 05:10:51.733740 2593 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/674f3bcf-3155-4a84-b9e2-0081a5851991-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733892 kubelet[2593]: I0514 05:10:51.733747 2593 reconciler_common.go:288] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ce879c3c-c521-4cd7-95c2-68fdcfc90412-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 14 05:10:51.733892 kubelet[2593]: I0514 05:10:51.733755 2593 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-8lnph\" (UniqueName: \"kubernetes.io/projected/ce879c3c-c521-4cd7-95c2-68fdcfc90412-kube-api-access-8lnph\") on node \"localhost\" DevicePath \"\"" May 14 05:10:52.174290 kubelet[2593]: E0514 05:10:52.174234 2593 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 14 05:10:52.336337 kubelet[2593]: I0514 05:10:52.336304 2593 scope.go:117] "RemoveContainer" containerID="71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82" May 14 05:10:52.339665 containerd[1494]: time="2025-05-14T05:10:52.339608813Z" level=info msg="RemoveContainer for \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\"" May 14 
05:10:52.342927 systemd[1]: var-lib-kubelet-pods-ce879c3c\x2dc521\x2d4cd7\x2d95c2\x2d68fdcfc90412-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d8lnph.mount: Deactivated successfully. May 14 05:10:52.343027 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75c1b30a32e56f7e52575eb9e3966792e4605a09a37a3acc2accf3730e7a193e-shm.mount: Deactivated successfully. May 14 05:10:52.343086 systemd[1]: var-lib-kubelet-pods-674f3bcf\x2d3155\x2d4a84\x2db9e2\x2d0081a5851991-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d898nr.mount: Deactivated successfully. May 14 05:10:52.343146 systemd[1]: var-lib-kubelet-pods-674f3bcf\x2d3155\x2d4a84\x2db9e2\x2d0081a5851991-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 14 05:10:52.343211 systemd[1]: var-lib-kubelet-pods-674f3bcf\x2d3155\x2d4a84\x2db9e2\x2d0081a5851991-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 14 05:10:52.347363 systemd[1]: Removed slice kubepods-burstable-pod674f3bcf_3155_4a84_b9e2_0081a5851991.slice - libcontainer container kubepods-burstable-pod674f3bcf_3155_4a84_b9e2_0081a5851991.slice. May 14 05:10:52.347458 systemd[1]: kubepods-burstable-pod674f3bcf_3155_4a84_b9e2_0081a5851991.slice: Consumed 6.679s CPU time, 121.6M memory peak, 160K read from disk, 16.1M written to disk. May 14 05:10:52.349364 systemd[1]: Removed slice kubepods-besteffort-podce879c3c_c521_4cd7_95c2_68fdcfc90412.slice - libcontainer container kubepods-besteffort-podce879c3c_c521_4cd7_95c2_68fdcfc90412.slice. 
May 14 05:10:52.359765 containerd[1494]: time="2025-05-14T05:10:52.359701556Z" level=info msg="RemoveContainer for \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" returns successfully" May 14 05:10:52.364713 kubelet[2593]: I0514 05:10:52.364669 2593 scope.go:117] "RemoveContainer" containerID="c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473" May 14 05:10:52.367393 containerd[1494]: time="2025-05-14T05:10:52.367337271Z" level=info msg="RemoveContainer for \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\"" May 14 05:10:52.372923 containerd[1494]: time="2025-05-14T05:10:52.372875115Z" level=info msg="RemoveContainer for \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\" returns successfully" May 14 05:10:52.373136 kubelet[2593]: I0514 05:10:52.373104 2593 scope.go:117] "RemoveContainer" containerID="031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294" May 14 05:10:52.375442 containerd[1494]: time="2025-05-14T05:10:52.375412793Z" level=info msg="RemoveContainer for \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\"" May 14 05:10:52.378961 containerd[1494]: time="2025-05-14T05:10:52.378924686Z" level=info msg="RemoveContainer for \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\" returns successfully" May 14 05:10:52.379173 kubelet[2593]: I0514 05:10:52.379129 2593 scope.go:117] "RemoveContainer" containerID="a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7" May 14 05:10:52.380693 containerd[1494]: time="2025-05-14T05:10:52.380663832Z" level=info msg="RemoveContainer for \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\"" May 14 05:10:52.383698 containerd[1494]: time="2025-05-14T05:10:52.383665437Z" level=info msg="RemoveContainer for \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\" returns successfully" May 14 05:10:52.383910 kubelet[2593]: I0514 05:10:52.383890 2593 scope.go:117] 
"RemoveContainer" containerID="3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1" May 14 05:10:52.385655 containerd[1494]: time="2025-05-14T05:10:52.385629027Z" level=info msg="RemoveContainer for \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\"" May 14 05:10:52.388651 containerd[1494]: time="2025-05-14T05:10:52.388565591Z" level=info msg="RemoveContainer for \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\" returns successfully" May 14 05:10:52.388779 kubelet[2593]: I0514 05:10:52.388747 2593 scope.go:117] "RemoveContainer" containerID="71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82" May 14 05:10:52.389128 containerd[1494]: time="2025-05-14T05:10:52.389042279Z" level=error msg="ContainerStatus for \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\": not found" May 14 05:10:52.392020 kubelet[2593]: E0514 05:10:52.391973 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\": not found" containerID="71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82" May 14 05:10:52.392130 kubelet[2593]: I0514 05:10:52.392028 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82"} err="failed to get container status \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\": rpc error: code = NotFound desc = an error occurred when try to find container \"71a3e38cc4a20e9156636b2703665c56863554838dee631c212742a7a7593d82\": not found" May 14 05:10:52.392130 kubelet[2593]: I0514 05:10:52.392124 2593 scope.go:117] "RemoveContainer" 
containerID="c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473" May 14 05:10:52.392405 containerd[1494]: time="2025-05-14T05:10:52.392363409Z" level=error msg="ContainerStatus for \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\": not found" May 14 05:10:52.392545 kubelet[2593]: E0514 05:10:52.392524 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\": not found" containerID="c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473" May 14 05:10:52.392592 kubelet[2593]: I0514 05:10:52.392551 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473"} err="failed to get container status \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\": rpc error: code = NotFound desc = an error occurred when try to find container \"c8aa52a8c455e108e3f6cfd4b9c7e4d7399dc1023e5549d0befe52732fb13473\": not found" May 14 05:10:52.392592 kubelet[2593]: I0514 05:10:52.392570 2593 scope.go:117] "RemoveContainer" containerID="031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294" May 14 05:10:52.392794 containerd[1494]: time="2025-05-14T05:10:52.392760895Z" level=error msg="ContainerStatus for \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\": not found" May 14 05:10:52.393034 kubelet[2593]: E0514 05:10:52.393013 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= an error occurred when try to find container \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\": not found" containerID="031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294" May 14 05:10:52.393082 kubelet[2593]: I0514 05:10:52.393041 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294"} err="failed to get container status \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\": rpc error: code = NotFound desc = an error occurred when try to find container \"031d6c4ebbc7de2c9b5786c3350122816723949ad522d9e0479be56cd9ebb294\": not found" May 14 05:10:52.393082 kubelet[2593]: I0514 05:10:52.393059 2593 scope.go:117] "RemoveContainer" containerID="a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7" May 14 05:10:52.393323 containerd[1494]: time="2025-05-14T05:10:52.393283383Z" level=error msg="ContainerStatus for \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\": not found" May 14 05:10:52.393466 kubelet[2593]: E0514 05:10:52.393411 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\": not found" containerID="a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7" May 14 05:10:52.393524 kubelet[2593]: I0514 05:10:52.393472 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7"} err="failed to get container status \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\": rpc error: code = NotFound desc = an error occurred 
when try to find container \"a67613b2482ddf2894f039b7e4cdea7e73854bbf67c6f4dde0f608f9c36ccbc7\": not found" May 14 05:10:52.393524 kubelet[2593]: I0514 05:10:52.393489 2593 scope.go:117] "RemoveContainer" containerID="3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1" May 14 05:10:52.393715 containerd[1494]: time="2025-05-14T05:10:52.393679949Z" level=error msg="ContainerStatus for \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\": not found" May 14 05:10:52.393984 kubelet[2593]: E0514 05:10:52.393799 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\": not found" containerID="3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1" May 14 05:10:52.393984 kubelet[2593]: I0514 05:10:52.393825 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1"} err="failed to get container status \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\": rpc error: code = NotFound desc = an error occurred when try to find container \"3651fd06c96c01acdcfe834ffae8095730e1541331d6684fbc172939190c7db1\": not found" May 14 05:10:52.393984 kubelet[2593]: I0514 05:10:52.393840 2593 scope.go:117] "RemoveContainer" containerID="a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688" May 14 05:10:52.395539 containerd[1494]: time="2025-05-14T05:10:52.395506496Z" level=info msg="RemoveContainer for \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\"" May 14 05:10:52.398240 containerd[1494]: time="2025-05-14T05:10:52.398211817Z" level=info msg="RemoveContainer for 
\"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" returns successfully" May 14 05:10:52.398464 kubelet[2593]: I0514 05:10:52.398428 2593 scope.go:117] "RemoveContainer" containerID="a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688" May 14 05:10:52.398788 containerd[1494]: time="2025-05-14T05:10:52.398755465Z" level=error msg="ContainerStatus for \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\": not found" May 14 05:10:52.399026 kubelet[2593]: E0514 05:10:52.398999 2593 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\": not found" containerID="a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688" May 14 05:10:52.399068 kubelet[2593]: I0514 05:10:52.399034 2593 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688"} err="failed to get container status \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6ee3b8ad39c2fa2f75f13c215fd510e29b7e002fc336040533f5dc8d066d688\": not found" May 14 05:10:53.107342 kubelet[2593]: I0514 05:10:53.106588 2593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="674f3bcf-3155-4a84-b9e2-0081a5851991" path="/var/lib/kubelet/pods/674f3bcf-3155-4a84-b9e2-0081a5851991/volumes" May 14 05:10:53.107342 kubelet[2593]: I0514 05:10:53.107122 2593 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce879c3c-c521-4cd7-95c2-68fdcfc90412" path="/var/lib/kubelet/pods/ce879c3c-c521-4cd7-95c2-68fdcfc90412/volumes" May 14 
05:10:53.261867 sshd[4202]: Connection closed by 10.0.0.1 port 54850 May 14 05:10:53.262226 sshd-session[4200]: pam_unix(sshd:session): session closed for user core May 14 05:10:53.275429 systemd[1]: sshd@22-10.0.0.132:22-10.0.0.1:54850.service: Deactivated successfully. May 14 05:10:53.279487 systemd[1]: session-23.scope: Deactivated successfully. May 14 05:10:53.282543 systemd[1]: session-23.scope: Consumed 1.351s CPU time, 24.5M memory peak. May 14 05:10:53.283160 systemd-logind[1475]: Session 23 logged out. Waiting for processes to exit. May 14 05:10:53.286718 systemd[1]: Started sshd@23-10.0.0.132:22-10.0.0.1:44916.service - OpenSSH per-connection server daemon (10.0.0.1:44916). May 14 05:10:53.288117 systemd-logind[1475]: Removed session 23. May 14 05:10:53.337771 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 44916 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:53.339033 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:53.345001 systemd-logind[1475]: New session 24 of user core. May 14 05:10:53.354696 systemd[1]: Started session-24.scope - Session 24 of User core. May 14 05:10:55.053575 sshd[4355]: Connection closed by 10.0.0.1 port 44916 May 14 05:10:55.053935 sshd-session[4353]: pam_unix(sshd:session): session closed for user core May 14 05:10:55.071279 systemd[1]: sshd@23-10.0.0.132:22-10.0.0.1:44916.service: Deactivated successfully. May 14 05:10:55.075943 systemd[1]: session-24.scope: Deactivated successfully. May 14 05:10:55.077715 systemd[1]: session-24.scope: Consumed 1.620s CPU time, 23.7M memory peak. 
May 14 05:10:55.078138 kubelet[2593]: E0514 05:10:55.078096 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="674f3bcf-3155-4a84-b9e2-0081a5851991" containerName="clean-cilium-state" May 14 05:10:55.078138 kubelet[2593]: E0514 05:10:55.078131 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="674f3bcf-3155-4a84-b9e2-0081a5851991" containerName="mount-cgroup" May 14 05:10:55.078138 kubelet[2593]: E0514 05:10:55.078140 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="674f3bcf-3155-4a84-b9e2-0081a5851991" containerName="apply-sysctl-overwrites" May 14 05:10:55.078508 kubelet[2593]: E0514 05:10:55.078146 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="674f3bcf-3155-4a84-b9e2-0081a5851991" containerName="mount-bpf-fs" May 14 05:10:55.078508 kubelet[2593]: E0514 05:10:55.078154 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="ce879c3c-c521-4cd7-95c2-68fdcfc90412" containerName="cilium-operator" May 14 05:10:55.078508 kubelet[2593]: E0514 05:10:55.078160 2593 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="674f3bcf-3155-4a84-b9e2-0081a5851991" containerName="cilium-agent" May 14 05:10:55.078508 kubelet[2593]: I0514 05:10:55.078186 2593 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce879c3c-c521-4cd7-95c2-68fdcfc90412" containerName="cilium-operator" May 14 05:10:55.078508 kubelet[2593]: I0514 05:10:55.078192 2593 memory_manager.go:354] "RemoveStaleState removing state" podUID="674f3bcf-3155-4a84-b9e2-0081a5851991" containerName="cilium-agent" May 14 05:10:55.078362 systemd-logind[1475]: Session 24 logged out. Waiting for processes to exit. May 14 05:10:55.084328 systemd-logind[1475]: Removed session 24. May 14 05:10:55.087286 systemd[1]: Started sshd@24-10.0.0.132:22-10.0.0.1:44924.service - OpenSSH per-connection server daemon (10.0.0.1:44924). 
May 14 05:10:55.096509 systemd[1]: Created slice kubepods-burstable-poda09af460_74aa_441e_ad0c_b007cd396486.slice - libcontainer container kubepods-burstable-poda09af460_74aa_441e_ad0c_b007cd396486.slice. May 14 05:10:55.149598 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 44924 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ May 14 05:10:55.150925 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 05:10:55.155724 systemd-logind[1475]: New session 25 of user core. May 14 05:10:55.170742 systemd[1]: Started session-25.scope - Session 25 of User core. May 14 05:10:55.222572 sshd[4369]: Connection closed by 10.0.0.1 port 44924 May 14 05:10:55.222904 sshd-session[4367]: pam_unix(sshd:session): session closed for user core May 14 05:10:55.233771 systemd[1]: sshd@24-10.0.0.132:22-10.0.0.1:44924.service: Deactivated successfully. May 14 05:10:55.235456 systemd[1]: session-25.scope: Deactivated successfully. May 14 05:10:55.236166 systemd-logind[1475]: Session 25 logged out. Waiting for processes to exit. May 14 05:10:55.238887 systemd[1]: Started sshd@25-10.0.0.132:22-10.0.0.1:44936.service - OpenSSH per-connection server daemon (10.0.0.1:44936). May 14 05:10:55.239372 systemd-logind[1475]: Removed session 25. 
May 14 05:10:55.252235 kubelet[2593]: I0514 05:10:55.252118 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-bpf-maps\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252235 kubelet[2593]: I0514 05:10:55.252156 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-hostproc\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252235 kubelet[2593]: I0514 05:10:55.252177 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-host-proc-sys-net\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252235 kubelet[2593]: I0514 05:10:55.252196 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-host-proc-sys-kernel\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252511 kubelet[2593]: I0514 05:10:55.252213 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-cni-path\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252511 kubelet[2593]: I0514 05:10:55.252477 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-etc-cni-netd\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252663 kubelet[2593]: I0514 05:10:55.252588 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/a09af460-74aa-441e-ad0c-b007cd396486-cilium-ipsec-secrets\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252663 kubelet[2593]: I0514 05:10:55.252625 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-lib-modules\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252663 kubelet[2593]: I0514 05:10:55.252644 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-cilium-run\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252828 kubelet[2593]: I0514 05:10:55.252762 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a09af460-74aa-441e-ad0c-b007cd396486-clustermesh-secrets\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252828 kubelet[2593]: I0514 05:10:55.252783 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a09af460-74aa-441e-ad0c-b007cd396486-hubble-tls\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252828 kubelet[2593]: I0514 05:10:55.252798 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9f95\" (UniqueName: \"kubernetes.io/projected/a09af460-74aa-441e-ad0c-b007cd396486-kube-api-access-b9f95\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252998 kubelet[2593]: I0514 05:10:55.252816 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-xtables-lock\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252998 kubelet[2593]: I0514 05:10:55.252951 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a09af460-74aa-441e-ad0c-b007cd396486-cilium-cgroup\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.252998 kubelet[2593]: I0514 05:10:55.252972 2593 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a09af460-74aa-441e-ad0c-b007cd396486-cilium-config-path\") pod \"cilium-j224g\" (UID: \"a09af460-74aa-441e-ad0c-b007cd396486\") " pod="kube-system/cilium-j224g"
May 14 05:10:55.295173 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 44936 ssh2: RSA SHA256:smyBmIa3wdfW9qC8bkPmwJMNCzTtNvEfnmjMEHeX+hQ
May 14 05:10:55.296483 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 05:10:55.301111 systemd-logind[1475]: New session 26 of user core.
May 14 05:10:55.310733 systemd[1]: Started session-26.scope - Session 26 of User core.
May 14 05:10:55.413433 kubelet[2593]: E0514 05:10:55.413382 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:55.415572 containerd[1494]: time="2025-05-14T05:10:55.415520793Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j224g,Uid:a09af460-74aa-441e-ad0c-b007cd396486,Namespace:kube-system,Attempt:0,}"
May 14 05:10:55.433485 containerd[1494]: time="2025-05-14T05:10:55.433430246Z" level=info msg="connecting to shim 14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc" address="unix:///run/containerd/s/7f6d6979a92d56cf7edc9e260718af3ca5e05fe5ee285cc4b9035800f21deeef" namespace=k8s.io protocol=ttrpc version=3
May 14 05:10:55.459732 systemd[1]: Started cri-containerd-14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc.scope - libcontainer container 14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc.
May 14 05:10:55.482756 containerd[1494]: time="2025-05-14T05:10:55.482710544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j224g,Uid:a09af460-74aa-441e-ad0c-b007cd396486,Namespace:kube-system,Attempt:0,} returns sandbox id \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\""
May 14 05:10:55.483521 kubelet[2593]: E0514 05:10:55.483450 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:55.487432 containerd[1494]: time="2025-05-14T05:10:55.487299849Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 14 05:10:55.493204 containerd[1494]: time="2025-05-14T05:10:55.493152732Z" level=info msg="Container 48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:55.498819 containerd[1494]: time="2025-05-14T05:10:55.498768851Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358\""
May 14 05:10:55.499518 containerd[1494]: time="2025-05-14T05:10:55.499284419Z" level=info msg="StartContainer for \"48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358\""
May 14 05:10:55.500334 containerd[1494]: time="2025-05-14T05:10:55.500299433Z" level=info msg="connecting to shim 48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358" address="unix:///run/containerd/s/7f6d6979a92d56cf7edc9e260718af3ca5e05fe5ee285cc4b9035800f21deeef" protocol=ttrpc version=3
May 14 05:10:55.520699 systemd[1]: Started cri-containerd-48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358.scope - libcontainer container 48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358.
May 14 05:10:55.549088 containerd[1494]: time="2025-05-14T05:10:55.549048483Z" level=info msg="StartContainer for \"48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358\" returns successfully"
May 14 05:10:55.573480 systemd[1]: cri-containerd-48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358.scope: Deactivated successfully.
May 14 05:10:55.579190 containerd[1494]: time="2025-05-14T05:10:55.579135949Z" level=info msg="received exit event container_id:\"48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358\" id:\"48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358\" pid:4450 exited_at:{seconds:1747199455 nanos:578862985}"
May 14 05:10:55.579538 containerd[1494]: time="2025-05-14T05:10:55.579477914Z" level=info msg="TaskExit event in podsandbox handler container_id:\"48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358\" id:\"48160e3918d52990f7c55305a65aa0d17986815a30c743e67983898c47369358\" pid:4450 exited_at:{seconds:1747199455 nanos:578862985}"
May 14 05:10:56.356848 kubelet[2593]: E0514 05:10:56.356732 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:56.360225 containerd[1494]: time="2025-05-14T05:10:56.360186829Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 14 05:10:56.370458 containerd[1494]: time="2025-05-14T05:10:56.369909804Z" level=info msg="Container b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:56.376730 containerd[1494]: time="2025-05-14T05:10:56.376658338Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec\""
May 14 05:10:56.377640 containerd[1494]: time="2025-05-14T05:10:56.377598831Z" level=info msg="StartContainer for \"b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec\""
May 14 05:10:56.378562 containerd[1494]: time="2025-05-14T05:10:56.378532164Z" level=info msg="connecting to shim b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec" address="unix:///run/containerd/s/7f6d6979a92d56cf7edc9e260718af3ca5e05fe5ee285cc4b9035800f21deeef" protocol=ttrpc version=3
May 14 05:10:56.397695 systemd[1]: Started cri-containerd-b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec.scope - libcontainer container b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec.
May 14 05:10:56.424392 containerd[1494]: time="2025-05-14T05:10:56.424338239Z" level=info msg="StartContainer for \"b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec\" returns successfully"
May 14 05:10:56.431299 systemd[1]: cri-containerd-b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec.scope: Deactivated successfully.
May 14 05:10:56.432852 containerd[1494]: time="2025-05-14T05:10:56.432801637Z" level=info msg="received exit event container_id:\"b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec\" id:\"b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec\" pid:4495 exited_at:{seconds:1747199456 nanos:432578833}"
May 14 05:10:56.433007 containerd[1494]: time="2025-05-14T05:10:56.432970479Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec\" id:\"b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec\" pid:4495 exited_at:{seconds:1747199456 nanos:432578833}"
May 14 05:10:57.175058 kubelet[2593]: E0514 05:10:57.175017 2593 kubelet.go:2901] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 14 05:10:57.361133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b2e84bd48418173132b94512a9c0fecc4cb2463a84228aaaaf11c4f9ea1736ec-rootfs.mount: Deactivated successfully.
May 14 05:10:57.361917 kubelet[2593]: E0514 05:10:57.361848 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:57.371963 containerd[1494]: time="2025-05-14T05:10:57.371914678Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 14 05:10:57.382530 containerd[1494]: time="2025-05-14T05:10:57.381416647Z" level=info msg="Container 345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:57.399879 containerd[1494]: time="2025-05-14T05:10:57.399819978Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089\""
May 14 05:10:57.400441 containerd[1494]: time="2025-05-14T05:10:57.400401945Z" level=info msg="StartContainer for \"345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089\""
May 14 05:10:57.401906 containerd[1494]: time="2025-05-14T05:10:57.401878886Z" level=info msg="connecting to shim 345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089" address="unix:///run/containerd/s/7f6d6979a92d56cf7edc9e260718af3ca5e05fe5ee285cc4b9035800f21deeef" protocol=ttrpc version=3
May 14 05:10:57.423727 systemd[1]: Started cri-containerd-345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089.scope - libcontainer container 345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089.
May 14 05:10:57.461339 systemd[1]: cri-containerd-345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089.scope: Deactivated successfully.
May 14 05:10:57.461539 containerd[1494]: time="2025-05-14T05:10:57.461388974Z" level=info msg="StartContainer for \"345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089\" returns successfully"
May 14 05:10:57.462896 containerd[1494]: time="2025-05-14T05:10:57.462852034Z" level=info msg="received exit event container_id:\"345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089\" id:\"345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089\" pid:4540 exited_at:{seconds:1747199457 nanos:462667192}"
May 14 05:10:57.463227 containerd[1494]: time="2025-05-14T05:10:57.463132438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089\" id:\"345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089\" pid:4540 exited_at:{seconds:1747199457 nanos:462667192}"
May 14 05:10:57.483215 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-345f5ad0bf26502a1a5d53559db902a0659cfc4e0af550afc69759beee415089-rootfs.mount: Deactivated successfully.
May 14 05:10:58.366764 kubelet[2593]: E0514 05:10:58.366734 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:58.368977 containerd[1494]: time="2025-05-14T05:10:58.368937372Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 14 05:10:58.381772 containerd[1494]: time="2025-05-14T05:10:58.381715342Z" level=info msg="Container dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:58.388588 containerd[1494]: time="2025-05-14T05:10:58.388489952Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581\""
May 14 05:10:58.389125 containerd[1494]: time="2025-05-14T05:10:58.389075720Z" level=info msg="StartContainer for \"dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581\""
May 14 05:10:58.391603 containerd[1494]: time="2025-05-14T05:10:58.391565393Z" level=info msg="connecting to shim dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581" address="unix:///run/containerd/s/7f6d6979a92d56cf7edc9e260718af3ca5e05fe5ee285cc4b9035800f21deeef" protocol=ttrpc version=3
May 14 05:10:58.419658 systemd[1]: Started cri-containerd-dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581.scope - libcontainer container dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581.
May 14 05:10:58.443995 systemd[1]: cri-containerd-dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581.scope: Deactivated successfully.
May 14 05:10:58.445588 containerd[1494]: time="2025-05-14T05:10:58.444867983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581\" id:\"dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581\" pid:4579 exited_at:{seconds:1747199458 nanos:444661941}"
May 14 05:10:58.445685 containerd[1494]: time="2025-05-14T05:10:58.445647914Z" level=info msg="received exit event container_id:\"dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581\" id:\"dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581\" pid:4579 exited_at:{seconds:1747199458 nanos:444661941}"
May 14 05:10:58.446911 containerd[1494]: time="2025-05-14T05:10:58.446887450Z" level=info msg="StartContainer for \"dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581\" returns successfully"
May 14 05:10:59.343751 kubelet[2593]: I0514 05:10:59.343705 2593 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-14T05:10:59Z","lastTransitionTime":"2025-05-14T05:10:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 14 05:10:59.372291 kubelet[2593]: E0514 05:10:59.372250 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:10:59.375576 containerd[1494]: time="2025-05-14T05:10:59.374840872Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 14 05:10:59.377639 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcedf48baa7ffdd23d9fa63f288b7e7cde33f2621e2689487d42bd20b4c60581-rootfs.mount: Deactivated successfully.
May 14 05:10:59.386524 containerd[1494]: time="2025-05-14T05:10:59.386016178Z" level=info msg="Container 5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f: CDI devices from CRI Config.CDIDevices: []"
May 14 05:10:59.394747 containerd[1494]: time="2025-05-14T05:10:59.394709331Z" level=info msg="CreateContainer within sandbox \"14b630469f3a17c6b3862a66633d3a8bf0e18765ace4b3867e1d76e260403cfc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f\""
May 14 05:10:59.395398 containerd[1494]: time="2025-05-14T05:10:59.395374900Z" level=info msg="StartContainer for \"5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f\""
May 14 05:10:59.396332 containerd[1494]: time="2025-05-14T05:10:59.396306912Z" level=info msg="connecting to shim 5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f" address="unix:///run/containerd/s/7f6d6979a92d56cf7edc9e260718af3ca5e05fe5ee285cc4b9035800f21deeef" protocol=ttrpc version=3
May 14 05:10:59.421720 systemd[1]: Started cri-containerd-5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f.scope - libcontainer container 5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f.
May 14 05:10:59.452198 containerd[1494]: time="2025-05-14T05:10:59.452161561Z" level=info msg="StartContainer for \"5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f\" returns successfully"
May 14 05:10:59.503058 containerd[1494]: time="2025-05-14T05:10:59.502995745Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f\" id:\"0f846ae93e5a768a5c6c4d85f4d67e1c44b3e981e915ce4f2340e3e1064d837f\" pid:4647 exited_at:{seconds:1747199459 nanos:502551619}"
May 14 05:10:59.742545 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 14 05:11:00.378509 kubelet[2593]: E0514 05:11:00.378457 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:11:00.394789 kubelet[2593]: I0514 05:11:00.394678 2593 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j224g" podStartSLOduration=5.394663448 podStartE2EDuration="5.394663448s" podCreationTimestamp="2025-05-14 05:10:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 05:11:00.393470672 +0000 UTC m=+83.367234887" watchObservedRunningTime="2025-05-14 05:11:00.394663448 +0000 UTC m=+83.368427663"
May 14 05:11:01.415055 kubelet[2593]: E0514 05:11:01.415011 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:11:01.774806 containerd[1494]: time="2025-05-14T05:11:01.774746807Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f\" id:\"24d3683256619d9a372ba04962ecd65ba4588dcabef45609fcd01263740d1d75\" pid:4919 exit_status:1 exited_at:{seconds:1747199461 nanos:774408642}"
May 14 05:11:02.104050 kubelet[2593]: E0514 05:11:02.103864 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:11:02.649726 systemd-networkd[1401]: lxc_health: Link UP
May 14 05:11:02.661864 systemd-networkd[1401]: lxc_health: Gained carrier
May 14 05:11:03.416276 kubelet[2593]: E0514 05:11:03.416235 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:11:03.840672 systemd-networkd[1401]: lxc_health: Gained IPv6LL
May 14 05:11:03.918255 containerd[1494]: time="2025-05-14T05:11:03.918214654Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f\" id:\"a19d8771bbedc40fefa7df50cf132f803d7516e9d4e9a4e34e288ebebe84adc6\" pid:5187 exited_at:{seconds:1747199463 nanos:917801289}"
May 14 05:11:04.385216 kubelet[2593]: E0514 05:11:04.385187 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:11:05.386434 kubelet[2593]: E0514 05:11:05.386399 2593 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 14 05:11:06.027708 containerd[1494]: time="2025-05-14T05:11:06.027667551Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f\" id:\"5c9d91730574c69cdfedb16d7835fd63ee72e8fb8958ba89b467706e40074415\" pid:5216 exited_at:{seconds:1747199466 nanos:27318507}"
May 14 05:11:08.120651 containerd[1494]: time="2025-05-14T05:11:08.120550679Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f7d9dcc23d263da3622acad38214b4ecfe6dbaa14e8e949225117c2bc4edf6f\" id:\"9e7830e80ca8e66e0be222b229d9bebfcd3e30680bafc131948cbeee25a43dd1\" pid:5246 exited_at:{seconds:1747199468 nanos:120255796}"
May 14 05:11:08.126515 sshd[4378]: Connection closed by 10.0.0.1 port 44936
May 14 05:11:08.126915 sshd-session[4376]: pam_unix(sshd:session): session closed for user core
May 14 05:11:08.130177 systemd[1]: sshd@25-10.0.0.132:22-10.0.0.1:44936.service: Deactivated successfully.
May 14 05:11:08.131857 systemd[1]: session-26.scope: Deactivated successfully.
May 14 05:11:08.133243 systemd-logind[1475]: Session 26 logged out. Waiting for processes to exit.
May 14 05:11:08.134592 systemd-logind[1475]: Removed session 26.