May 27 02:55:43.811397 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 27 02:55:43.811417 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 01:20:04 -00 2025
May 27 02:55:43.811426 kernel: KASLR enabled
May 27 02:55:43.811432 kernel: efi: EFI v2.7 by EDK II
May 27 02:55:43.811437 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
May 27 02:55:43.811443 kernel: random: crng init done
May 27 02:55:43.811449 kernel: secureboot: Secure boot disabled
May 27 02:55:43.811455 kernel: ACPI: Early table checksum verification disabled
May 27 02:55:43.811461 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
May 27 02:55:43.811468 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 27 02:55:43.811474 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:55:43.811479 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:55:43.811490 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:55:43.811496 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:55:43.811503 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:55:43.811515 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:55:43.811521 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:55:43.811536 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:55:43.811543 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 02:55:43.811549 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 27 02:55:43.811555 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 27 02:55:43.811561 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 27 02:55:43.811567 kernel: NODE_DATA(0) allocated [mem 0xdc964dc0-0xdc96bfff]
May 27 02:55:43.811573 kernel: Zone ranges:
May 27 02:55:43.811579 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 27 02:55:43.811587 kernel: DMA32 empty
May 27 02:55:43.811593 kernel: Normal empty
May 27 02:55:43.811598 kernel: Device empty
May 27 02:55:43.811604 kernel: Movable zone start for each node
May 27 02:55:43.811610 kernel: Early memory node ranges
May 27 02:55:43.811616 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
May 27 02:55:43.811622 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
May 27 02:55:43.811628 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
May 27 02:55:43.811634 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
May 27 02:55:43.811640 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
May 27 02:55:43.811646 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
May 27 02:55:43.811652 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
May 27 02:55:43.811659 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
May 27 02:55:43.811666 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
May 27 02:55:43.811682 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
May 27 02:55:43.811692 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
May 27 02:55:43.811698 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
May 27 02:55:43.811705 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 27 02:55:43.811712 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 27 02:55:43.811718 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 27 02:55:43.811725 kernel: psci: probing for conduit method from ACPI.
May 27 02:55:43.811731 kernel: psci: PSCIv1.1 detected in firmware.
May 27 02:55:43.811738 kernel: psci: Using standard PSCI v0.2 function IDs
May 27 02:55:43.811744 kernel: psci: Trusted OS migration not required
May 27 02:55:43.811750 kernel: psci: SMC Calling Convention v1.1
May 27 02:55:43.811757 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 27 02:55:43.811763 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 27 02:55:43.811770 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 27 02:55:43.811778 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 27 02:55:43.811784 kernel: Detected PIPT I-cache on CPU0
May 27 02:55:43.811790 kernel: CPU features: detected: GIC system register CPU interface
May 27 02:55:43.811797 kernel: CPU features: detected: Spectre-v4
May 27 02:55:43.811803 kernel: CPU features: detected: Spectre-BHB
May 27 02:55:43.811809 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 27 02:55:43.811816 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 27 02:55:43.811822 kernel: CPU features: detected: ARM erratum 1418040
May 27 02:55:43.811833 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 27 02:55:43.811840 kernel: alternatives: applying boot alternatives
May 27 02:55:43.811848 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a
May 27 02:55:43.811857 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 02:55:43.811864 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 02:55:43.811870 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 02:55:43.811877 kernel: Fallback order for Node 0: 0
May 27 02:55:43.811883 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 27 02:55:43.811889 kernel: Policy zone: DMA
May 27 02:55:43.811896 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 02:55:43.811902 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 27 02:55:43.811908 kernel: software IO TLB: area num 4.
May 27 02:55:43.811915 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 27 02:55:43.811921 kernel: software IO TLB: mapped [mem 0x00000000d8c00000-0x00000000d9000000] (4MB)
May 27 02:55:43.811928 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 02:55:43.811935 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 02:55:43.811942 kernel: rcu: RCU event tracing is enabled.
May 27 02:55:43.811949 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 02:55:43.811956 kernel: Trampoline variant of Tasks RCU enabled.
May 27 02:55:43.811962 kernel: Tracing variant of Tasks RCU enabled.
May 27 02:55:43.811969 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 02:55:43.811975 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 02:55:43.811982 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 02:55:43.811988 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 02:55:43.811995 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 27 02:55:43.812001 kernel: GICv3: 256 SPIs implemented
May 27 02:55:43.812009 kernel: GICv3: 0 Extended SPIs implemented
May 27 02:55:43.812015 kernel: Root IRQ handler: gic_handle_irq
May 27 02:55:43.812024 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 27 02:55:43.812030 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 27 02:55:43.812036 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 27 02:55:43.812043 kernel: ITS [mem 0x08080000-0x0809ffff]
May 27 02:55:43.812050 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
May 27 02:55:43.812056 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
May 27 02:55:43.812063 kernel: GICv3: using LPI property table @0x00000000400f0000
May 27 02:55:43.812076 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 27 02:55:43.812083 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 02:55:43.812089 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 02:55:43.812097 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 27 02:55:43.812104 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 27 02:55:43.812111 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 27 02:55:43.812117 kernel: arm-pv: using stolen time PV
May 27 02:55:43.812124 kernel: Console: colour dummy device 80x25
May 27 02:55:43.812131 kernel: ACPI: Core revision 20240827
May 27 02:55:43.812137 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 27 02:55:43.812144 kernel: pid_max: default: 32768 minimum: 301
May 27 02:55:43.812151 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 02:55:43.812177 kernel: landlock: Up and running.
May 27 02:55:43.812183 kernel: SELinux: Initializing.
May 27 02:55:43.812190 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 02:55:43.812196 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 02:55:43.812203 kernel: rcu: Hierarchical SRCU implementation.
May 27 02:55:43.812210 kernel: rcu: Max phase no-delay instances is 400.
May 27 02:55:43.812216 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 02:55:43.812223 kernel: Remapping and enabling EFI services.
May 27 02:55:43.812229 kernel: smp: Bringing up secondary CPUs ...
May 27 02:55:43.812236 kernel: Detected PIPT I-cache on CPU1
May 27 02:55:43.812248 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 27 02:55:43.812255 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 27 02:55:43.812263 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 02:55:43.812270 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 27 02:55:43.812277 kernel: Detected PIPT I-cache on CPU2
May 27 02:55:43.812284 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 27 02:55:43.812291 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 27 02:55:43.812300 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 02:55:43.812307 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 27 02:55:43.812313 kernel: Detected PIPT I-cache on CPU3
May 27 02:55:43.812320 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 27 02:55:43.812327 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 27 02:55:43.812334 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 02:55:43.812341 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 27 02:55:43.812348 kernel: smp: Brought up 1 node, 4 CPUs
May 27 02:55:43.812355 kernel: SMP: Total of 4 processors activated.
May 27 02:55:43.812362 kernel: CPU: All CPU(s) started at EL1
May 27 02:55:43.812370 kernel: CPU features: detected: 32-bit EL0 Support
May 27 02:55:43.812377 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 27 02:55:43.812384 kernel: CPU features: detected: Common not Private translations
May 27 02:55:43.812391 kernel: CPU features: detected: CRC32 instructions
May 27 02:55:43.812398 kernel: CPU features: detected: Enhanced Virtualization Traps
May 27 02:55:43.812405 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 27 02:55:43.812412 kernel: CPU features: detected: LSE atomic instructions
May 27 02:55:43.812419 kernel: CPU features: detected: Privileged Access Never
May 27 02:55:43.812425 kernel: CPU features: detected: RAS Extension Support
May 27 02:55:43.812434 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 27 02:55:43.812441 kernel: alternatives: applying system-wide alternatives
May 27 02:55:43.812447 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 27 02:55:43.812455 kernel: Memory: 2440980K/2572288K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 125540K reserved, 0K cma-reserved)
May 27 02:55:43.812462 kernel: devtmpfs: initialized
May 27 02:55:43.812469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 02:55:43.812476 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 02:55:43.812484 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 27 02:55:43.812491 kernel: 0 pages in range for non-PLT usage
May 27 02:55:43.812499 kernel: 508544 pages in range for PLT usage
May 27 02:55:43.812506 kernel: pinctrl core: initialized pinctrl subsystem
May 27 02:55:43.812513 kernel: SMBIOS 3.0.0 present.
May 27 02:55:43.812520 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 27 02:55:43.812527 kernel: DMI: Memory slots populated: 1/1
May 27 02:55:43.812534 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 02:55:43.812541 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 27 02:55:43.812549 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 27 02:55:43.812556 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 27 02:55:43.812564 kernel: audit: initializing netlink subsys (disabled)
May 27 02:55:43.812571 kernel: audit: type=2000 audit(0.031:1): state=initialized audit_enabled=0 res=1
May 27 02:55:43.812578 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 02:55:43.812585 kernel: cpuidle: using governor menu
May 27 02:55:43.812592 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 27 02:55:43.812599 kernel: ASID allocator initialised with 32768 entries
May 27 02:55:43.812606 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 02:55:43.812613 kernel: Serial: AMBA PL011 UART driver
May 27 02:55:43.812620 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 02:55:43.812628 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 27 02:55:43.812635 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 27 02:55:43.812642 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 27 02:55:43.812649 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 02:55:43.812656 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 27 02:55:43.812662 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 27 02:55:43.812669 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 27 02:55:43.812684 kernel: ACPI: Added _OSI(Module Device)
May 27 02:55:43.812691 kernel: ACPI: Added _OSI(Processor Device)
May 27 02:55:43.812700 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 02:55:43.812707 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 02:55:43.812714 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 02:55:43.812721 kernel: ACPI: Interpreter enabled
May 27 02:55:43.812728 kernel: ACPI: Using GIC for interrupt routing
May 27 02:55:43.812735 kernel: ACPI: MCFG table detected, 1 entries
May 27 02:55:43.812742 kernel: ACPI: CPU0 has been hot-added
May 27 02:55:43.812749 kernel: ACPI: CPU1 has been hot-added
May 27 02:55:43.812756 kernel: ACPI: CPU2 has been hot-added
May 27 02:55:43.812764 kernel: ACPI: CPU3 has been hot-added
May 27 02:55:43.812771 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 27 02:55:43.812778 kernel: printk: legacy console [ttyAMA0] enabled
May 27 02:55:43.812785 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 02:55:43.812915 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 02:55:43.812984 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 27 02:55:43.813046 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 27 02:55:43.813120 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 27 02:55:43.813183 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 27 02:55:43.813193 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 27 02:55:43.813200 kernel: PCI host bridge to bus 0000:00
May 27 02:55:43.813267 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 27 02:55:43.813325 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 27 02:55:43.813379 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 27 02:55:43.813434 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 02:55:43.813527 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 27 02:55:43.813600 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 02:55:43.813662 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 27 02:55:43.813736 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 27 02:55:43.813798 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 27 02:55:43.813858 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 27 02:55:43.813918 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 27 02:55:43.813983 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 27 02:55:43.814042 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 27 02:55:43.814105 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 27 02:55:43.814161 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 27 02:55:43.814170 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 27 02:55:43.814177 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 27 02:55:43.814184 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 27 02:55:43.814193 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 27 02:55:43.814201 kernel: iommu: Default domain type: Translated
May 27 02:55:43.814208 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 27 02:55:43.814220 kernel: efivars: Registered efivars operations
May 27 02:55:43.814227 kernel: vgaarb: loaded
May 27 02:55:43.814234 kernel: clocksource: Switched to clocksource arch_sys_counter
May 27 02:55:43.814241 kernel: VFS: Disk quotas dquot_6.6.0
May 27 02:55:43.814248 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 02:55:43.814255 kernel: pnp: PnP ACPI init
May 27 02:55:43.814330 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 27 02:55:43.814340 kernel: pnp: PnP ACPI: found 1 devices
May 27 02:55:43.814347 kernel: NET: Registered PF_INET protocol family
May 27 02:55:43.814354 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 02:55:43.814361 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 02:55:43.814368 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 02:55:43.814376 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 02:55:43.814383 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 02:55:43.814392 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 02:55:43.814399 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 02:55:43.814406 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 02:55:43.814413 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 02:55:43.814420 kernel: PCI: CLS 0 bytes, default 64
May 27 02:55:43.814427 kernel: kvm [1]: HYP mode not available
May 27 02:55:43.814434 kernel: Initialise system trusted keyrings
May 27 02:55:43.814441 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 02:55:43.814448 kernel: Key type asymmetric registered
May 27 02:55:43.814456 kernel: Asymmetric key parser 'x509' registered
May 27 02:55:43.814463 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 27 02:55:43.814470 kernel: io scheduler mq-deadline registered
May 27 02:55:43.814477 kernel: io scheduler kyber registered
May 27 02:55:43.814484 kernel: io scheduler bfq registered
May 27 02:55:43.814491 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 27 02:55:43.814498 kernel: ACPI: button: Power Button [PWRB]
May 27 02:55:43.814505 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 27 02:55:43.814567 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 27 02:55:43.814578 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 02:55:43.814585 kernel: thunder_xcv, ver 1.0
May 27 02:55:43.814592 kernel: thunder_bgx, ver 1.0
May 27 02:55:43.814599 kernel: nicpf, ver 1.0
May 27 02:55:43.814606 kernel: nicvf, ver 1.0
May 27 02:55:43.814699 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 27 02:55:43.814760 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T02:55:43 UTC (1748314543)
May 27 02:55:43.814769 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 02:55:43.814778 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 27 02:55:43.814785 kernel: watchdog: NMI not fully supported
May 27 02:55:43.814792 kernel: watchdog: Hard watchdog permanently disabled
May 27 02:55:43.814799 kernel: NET: Registered PF_INET6 protocol family
May 27 02:55:43.814806 kernel: Segment Routing with IPv6
May 27 02:55:43.814813 kernel: In-situ OAM (IOAM) with IPv6
May 27 02:55:43.814820 kernel: NET: Registered PF_PACKET protocol family
May 27 02:55:43.814827 kernel: Key type dns_resolver registered
May 27 02:55:43.814834 kernel: registered taskstats version 1
May 27 02:55:43.814842 kernel: Loading compiled-in X.509 certificates
May 27 02:55:43.814850 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 6bbf5412ef1f8a32378a640b6d048f74e6d74df0'
May 27 02:55:43.814857 kernel: Demotion targets for Node 0: null
May 27 02:55:43.814864 kernel: Key type .fscrypt registered
May 27 02:55:43.814871 kernel: Key type fscrypt-provisioning registered
May 27 02:55:43.814878 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 02:55:43.814885 kernel: ima: Allocated hash algorithm: sha1
May 27 02:55:43.814892 kernel: ima: No architecture policies found
May 27 02:55:43.814900 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 27 02:55:43.814908 kernel: clk: Disabling unused clocks
May 27 02:55:43.814915 kernel: PM: genpd: Disabling unused power domains
May 27 02:55:43.814922 kernel: Warning: unable to open an initial console.
May 27 02:55:43.814929 kernel: Freeing unused kernel memory: 39424K
May 27 02:55:43.814936 kernel: Run /init as init process
May 27 02:55:43.814943 kernel: with arguments:
May 27 02:55:43.814950 kernel: /init
May 27 02:55:43.814957 kernel: with environment:
May 27 02:55:43.814963 kernel: HOME=/
May 27 02:55:43.814972 kernel: TERM=linux
May 27 02:55:43.814979 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 02:55:43.814987 systemd[1]: Successfully made /usr/ read-only.
May 27 02:55:43.814997 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 02:55:43.815005 systemd[1]: Detected virtualization kvm.
May 27 02:55:43.815012 systemd[1]: Detected architecture arm64.
May 27 02:55:43.815020 systemd[1]: Running in initrd.
May 27 02:55:43.815027 systemd[1]: No hostname configured, using default hostname.
May 27 02:55:43.815037 systemd[1]: Hostname set to .
May 27 02:55:43.815045 systemd[1]: Initializing machine ID from VM UUID.
May 27 02:55:43.815052 systemd[1]: Queued start job for default target initrd.target.
May 27 02:55:43.815060 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 02:55:43.815074 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 02:55:43.815082 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 02:55:43.815090 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 02:55:43.815097 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 02:55:43.815107 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 02:55:43.815116 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 02:55:43.815124 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 02:55:43.815132 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 02:55:43.815140 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 02:55:43.815147 systemd[1]: Reached target paths.target - Path Units.
May 27 02:55:43.815156 systemd[1]: Reached target slices.target - Slice Units.
May 27 02:55:43.815164 systemd[1]: Reached target swap.target - Swaps.
May 27 02:55:43.815171 systemd[1]: Reached target timers.target - Timer Units.
May 27 02:55:43.815179 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 02:55:43.815187 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 02:55:43.815195 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 02:55:43.815203 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 02:55:43.815211 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:55:43.815219 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 02:55:43.815228 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 02:55:43.815236 systemd[1]: Reached target sockets.target - Socket Units.
May 27 02:55:43.815244 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 02:55:43.815252 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 02:55:43.815260 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 02:55:43.815268 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 02:55:43.815276 systemd[1]: Starting systemd-fsck-usr.service...
May 27 02:55:43.815284 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 02:55:43.815292 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 02:55:43.815300 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:55:43.815308 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 02:55:43.815316 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 02:55:43.815324 systemd[1]: Finished systemd-fsck-usr.service.
May 27 02:55:43.815333 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 02:55:43.815356 systemd-journald[242]: Collecting audit messages is disabled.
May 27 02:55:43.815374 systemd-journald[242]: Journal started
May 27 02:55:43.815393 systemd-journald[242]: Runtime Journal (/run/log/journal/df31c976001a442fae655e25bfcff839) is 6M, max 48.5M, 42.4M free.
May 27 02:55:43.806316 systemd-modules-load[245]: Inserted module 'overlay'
May 27 02:55:43.818141 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 02:55:43.818760 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:55:43.820238 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 02:55:43.824430 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 02:55:43.826230 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 02:55:43.831094 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 02:55:43.831744 systemd-modules-load[245]: Inserted module 'br_netfilter'
May 27 02:55:43.832720 kernel: Bridge firewalling registered
May 27 02:55:43.832796 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 02:55:43.839831 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 02:55:43.841967 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 02:55:43.851003 systemd-tmpfiles[263]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 02:55:43.852876 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 02:55:43.854384 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 02:55:43.858300 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 02:55:43.860892 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 02:55:43.872974 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 02:55:43.876960 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 02:55:43.883936 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4c3f98aae7a61b3dcbab6391ba922461adab29dbcb79fd6e18169f93c5a4ab5a
May 27 02:55:43.913074 systemd-resolved[294]: Positive Trust Anchors:
May 27 02:55:43.913092 systemd-resolved[294]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 02:55:43.913123 systemd-resolved[294]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 02:55:43.917836 systemd-resolved[294]: Defaulting to hostname 'linux'.
May 27 02:55:43.918782 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 02:55:43.922060 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 02:55:43.959707 kernel: SCSI subsystem initialized
May 27 02:55:43.964691 kernel: Loading iSCSI transport class v2.0-870.
May 27 02:55:43.972715 kernel: iscsi: registered transport (tcp)
May 27 02:55:43.984701 kernel: iscsi: registered transport (qla4xxx)
May 27 02:55:43.984726 kernel: QLogic iSCSI HBA Driver
May 27 02:55:44.003425 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 02:55:44.018496 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 02:55:44.021346 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 02:55:44.068123 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 02:55:44.070412 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 02:55:44.136715 kernel: raid6: neonx8 gen() 15760 MB/s
May 27 02:55:44.153710 kernel: raid6: neonx4 gen() 15660 MB/s
May 27 02:55:44.170699 kernel: raid6: neonx2 gen() 13082 MB/s
May 27 02:55:44.187705 kernel: raid6: neonx1 gen() 9675 MB/s
May 27 02:55:44.204703 kernel: raid6: int64x8 gen() 6707 MB/s
May 27 02:55:44.221699 kernel: raid6: int64x4 gen() 7272 MB/s
May 27 02:55:44.238704 kernel: raid6: int64x2 gen() 6093 MB/s
May 27 02:55:44.255805 kernel: raid6: int64x1 gen() 5049 MB/s
May 27 02:55:44.255824 kernel: raid6: using algorithm neonx8 gen() 15760 MB/s
May 27 02:55:44.273790 kernel: raid6: .... xor() 12066 MB/s, rmw enabled
May 27 02:55:44.273815 kernel: raid6: using neon recovery algorithm
May 27 02:55:44.282704 kernel: xor: measuring software checksum speed
May 27 02:55:44.282731 kernel: 8regs : 20697 MB/sec
May 27 02:55:44.283914 kernel: 32regs : 18166 MB/sec
May 27 02:55:44.283928 kernel: arm64_neon : 27889 MB/sec
May 27 02:55:44.283937 kernel: xor: using function: arm64_neon (27889 MB/sec)
May 27 02:55:44.342703 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 02:55:44.350174 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 02:55:44.355777 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 02:55:44.390801 systemd-udevd[500]: Using default interface naming scheme 'v255'.
May 27 02:55:44.395090 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 02:55:44.397555 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 02:55:44.430825 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
May 27 02:55:44.455199 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 02:55:44.457580 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 02:55:44.517993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 02:55:44.522784 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 02:55:44.567806 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 27 02:55:44.568071 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 27 02:55:44.577861 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 02:55:44.577909 kernel: GPT:9289727 != 19775487
May 27 02:55:44.577919 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 02:55:44.578932 kernel: GPT:9289727 != 19775487
May 27 02:55:44.579171 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 02:55:44.579293 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:55:44.583826 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 02:55:44.583845 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 02:55:44.583852 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:55:44.585863 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 02:55:44.611349 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 02:55:44.630257 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 02:55:44.631778 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 02:55:44.640259 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 02:55:44.651530 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 02:55:44.652782 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 02:55:44.661068 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 02:55:44.662281 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 02:55:44.664371 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 02:55:44.666478 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 02:55:44.669236 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 02:55:44.670999 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 02:55:44.694584 disk-uuid[592]: Primary Header is updated.
May 27 02:55:44.694584 disk-uuid[592]: Secondary Entries is updated.
May 27 02:55:44.694584 disk-uuid[592]: Secondary Header is updated.
May 27 02:55:44.699426 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 02:55:44.702688 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 02:55:45.708707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 02:55:45.708762 disk-uuid[598]: The operation has completed successfully.
May 27 02:55:45.743169 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 02:55:45.743275 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 02:55:45.767013 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 02:55:45.795744 sh[612]: Success
May 27 02:55:45.809566 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 02:55:45.809632 kernel: device-mapper: uevent: version 1.0.3
May 27 02:55:45.809653 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 02:55:45.827698 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 27 02:55:45.852229 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 02:55:45.854985 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 02:55:45.876086 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 02:55:45.886070 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 02:55:45.886112 kernel: BTRFS: device fsid 5c6341ea-4eb5-44b6-ac57-c4d29847e384 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (624)
May 27 02:55:45.887441 kernel: BTRFS info (device dm-0): first mount of filesystem 5c6341ea-4eb5-44b6-ac57-c4d29847e384
May 27 02:55:45.888407 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 27 02:55:45.888426 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 02:55:45.893215 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 02:55:45.894556 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 02:55:45.895870 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 02:55:45.896708 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 02:55:45.899150 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 02:55:45.925818 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (653)
May 27 02:55:45.925860 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:55:45.925870 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 27 02:55:45.927213 kernel: BTRFS info (device vda6): using free-space-tree
May 27 02:55:45.934004 kernel: BTRFS info (device vda6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:55:45.934938 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 02:55:45.938513 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 02:55:46.014502 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 02:55:46.017492 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 02:55:46.061082 systemd-networkd[800]: lo: Link UP
May 27 02:55:46.061093 systemd-networkd[800]: lo: Gained carrier
May 27 02:55:46.061832 systemd-networkd[800]: Enumeration completed
May 27 02:55:46.061910 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 02:55:46.062228 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 02:55:46.062231 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 02:55:46.063022 systemd-networkd[800]: eth0: Link UP
May 27 02:55:46.063025 systemd-networkd[800]: eth0: Gained carrier
May 27 02:55:46.063035 systemd-networkd[800]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 02:55:46.063499 systemd[1]: Reached target network.target - Network.
May 27 02:55:46.082433 ignition[706]: Ignition 2.21.0
May 27 02:55:46.082448 ignition[706]: Stage: fetch-offline
May 27 02:55:46.082486 ignition[706]: no configs at "/usr/lib/ignition/base.d"
May 27 02:55:46.082494 ignition[706]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 02:55:46.082759 ignition[706]: parsed url from cmdline: ""
May 27 02:55:46.082762 ignition[706]: no config URL provided
May 27 02:55:46.082767 ignition[706]: reading system config file "/usr/lib/ignition/user.ign"
May 27 02:55:46.082774 ignition[706]: no config at "/usr/lib/ignition/user.ign"
May 27 02:55:46.082794 ignition[706]: op(1): [started] loading QEMU firmware config module
May 27 02:55:46.082798 ignition[706]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 27 02:55:46.091737 systemd-networkd[800]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 02:55:46.093505 ignition[706]: op(1): [finished] loading QEMU firmware config module
May 27 02:55:46.130668 ignition[706]: parsing config with SHA512: 15e66a56289d00772849e478b1fe54bd568d846a3ffcd80e180d8f12212fa67cdd00950bbd9034ce2173dc52349ab2d263b0d8e5d51642ead2b6f37b3d57fea7
May 27 02:55:46.134648 unknown[706]: fetched base config from "system"
May 27 02:55:46.134658 unknown[706]: fetched user config from "qemu"
May 27 02:55:46.135068 ignition[706]: fetch-offline: fetch-offline passed
May 27 02:55:46.137408 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 02:55:46.135124 ignition[706]: Ignition finished successfully
May 27 02:55:46.138639 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 27 02:55:46.139421 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 02:55:46.174389 ignition[815]: Ignition 2.21.0
May 27 02:55:46.174406 ignition[815]: Stage: kargs
May 27 02:55:46.174551 ignition[815]: no configs at "/usr/lib/ignition/base.d"
May 27 02:55:46.174560 ignition[815]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 02:55:46.176248 ignition[815]: kargs: kargs passed
May 27 02:55:46.179025 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 02:55:46.176313 ignition[815]: Ignition finished successfully
May 27 02:55:46.180965 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 02:55:46.208101 ignition[823]: Ignition 2.21.0
May 27 02:55:46.208115 ignition[823]: Stage: disks
May 27 02:55:46.208323 ignition[823]: no configs at "/usr/lib/ignition/base.d"
May 27 02:55:46.208332 ignition[823]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 02:55:46.209653 ignition[823]: disks: disks passed
May 27 02:55:46.212079 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 02:55:46.209718 ignition[823]: Ignition finished successfully
May 27 02:55:46.213305 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 02:55:46.214894 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 02:55:46.216656 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 02:55:46.218454 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 02:55:46.220387 systemd[1]: Reached target basic.target - Basic System.
May 27 02:55:46.222808 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 02:55:46.252739 systemd-fsck[834]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 02:55:46.256844 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 02:55:46.258807 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 02:55:46.324695 kernel: EXT4-fs (vda9): mounted filesystem 5656cec4-efbd-4a2d-be98-2263e6ae16bd r/w with ordered data mode. Quota mode: none.
May 27 02:55:46.324962 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 02:55:46.326163 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 02:55:46.330960 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 02:55:46.333244 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 02:55:46.334215 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 02:55:46.334252 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 02:55:46.334274 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 02:55:46.341896 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 02:55:46.343818 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 02:55:46.348693 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (842)
May 27 02:55:46.348725 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:55:46.350935 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 27 02:55:46.350968 kernel: BTRFS info (device vda6): using free-space-tree
May 27 02:55:46.354336 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 02:55:46.388775 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory
May 27 02:55:46.391771 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory
May 27 02:55:46.394647 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory
May 27 02:55:46.398286 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 02:55:46.468397 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 02:55:46.470387 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 02:55:46.471863 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 02:55:46.494704 kernel: BTRFS info (device vda6): last unmount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:55:46.506406 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 02:55:46.516645 ignition[956]: INFO : Ignition 2.21.0
May 27 02:55:46.516645 ignition[956]: INFO : Stage: mount
May 27 02:55:46.519480 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 02:55:46.519480 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 02:55:46.519480 ignition[956]: INFO : mount: mount passed
May 27 02:55:46.519480 ignition[956]: INFO : Ignition finished successfully
May 27 02:55:46.520032 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 02:55:46.522118 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 02:55:46.884907 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 02:55:46.886348 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 02:55:46.910398 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (969)
May 27 02:55:46.910435 kernel: BTRFS info (device vda6): first mount of filesystem eabe2c18-04ac-4289-8962-26387aada3f9
May 27 02:55:46.910445 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 27 02:55:46.911962 kernel: BTRFS info (device vda6): using free-space-tree
May 27 02:55:46.914342 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 02:55:46.946686 ignition[986]: INFO : Ignition 2.21.0
May 27 02:55:46.946686 ignition[986]: INFO : Stage: files
May 27 02:55:46.948209 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 02:55:46.948209 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 02:55:46.950269 ignition[986]: DEBUG : files: compiled without relabeling support, skipping
May 27 02:55:46.950269 ignition[986]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 02:55:46.950269 ignition[986]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 02:55:46.953965 ignition[986]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 02:55:46.953965 ignition[986]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 02:55:46.953965 ignition[986]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 02:55:46.953965 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 27 02:55:46.953965 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1
May 27 02:55:46.951796 unknown[986]: wrote ssh authorized keys file for user: core
May 27 02:55:47.029354 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 02:55:47.189623 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz"
May 27 02:55:47.189623 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 02:55:47.193330 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 27 02:55:47.494662 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 27 02:55:47.672655 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 02:55:47.674387 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 27 02:55:47.674387 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 27 02:55:47.674387 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 02:55:47.674387 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 02:55:47.674387 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 02:55:47.674387 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 02:55:47.674387 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 02:55:47.674387 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 02:55:47.687173 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 02:55:47.687173 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 02:55:47.687173 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 27 02:55:47.687173 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 27 02:55:47.687173 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 27 02:55:47.687173 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1
May 27 02:55:47.804921 systemd-networkd[800]: eth0: Gained IPv6LL
May 27 02:55:48.095730 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 27 02:55:48.511590 ignition[986]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw"
May 27 02:55:48.511590 ignition[986]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 27 02:55:48.515186 ignition[986]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 02:55:48.515186 ignition[986]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 02:55:48.515186 ignition[986]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 27 02:55:48.515186 ignition[986]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 27 02:55:48.515186 ignition[986]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 02:55:48.515186 ignition[986]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 02:55:48.515186 ignition[986]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 27 02:55:48.515186 ignition[986]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 27 02:55:48.529470 ignition[986]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 27 02:55:48.532265 ignition[986]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 27 02:55:48.534819 ignition[986]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 27 02:55:48.534819 ignition[986]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 27 02:55:48.534819 ignition[986]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 27 02:55:48.534819 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 02:55:48.534819 ignition[986]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 02:55:48.534819 ignition[986]: INFO : files: files passed
May 27 02:55:48.534819 ignition[986]: INFO : Ignition finished successfully
May 27 02:55:48.535481 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 02:55:48.538027 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 02:55:48.540801 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 02:55:48.563891 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 02:55:48.564654 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 02:55:48.567441 initrd-setup-root-after-ignition[1015]: grep: /sysroot/oem/oem-release: No such file or directory
May 27 02:55:48.568758 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 02:55:48.568758 initrd-setup-root-after-ignition[1017]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 02:55:48.571867 initrd-setup-root-after-ignition[1021]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 02:55:48.571948 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 02:55:48.574472 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 02:55:48.576827 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 02:55:48.607571 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 02:55:48.607725 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 02:55:48.609851 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 02:55:48.611614 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 02:55:48.613389 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 02:55:48.614203 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 02:55:48.628236 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 02:55:48.630572 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 02:55:48.660833 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 02:55:48.662029 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 02:55:48.664005 systemd[1]: Stopped target timers.target - Timer Units.
May 27 02:55:48.665631 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 02:55:48.665778 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 02:55:48.668180 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 02:55:48.670089 systemd[1]: Stopped target basic.target - Basic System.
May 27 02:55:48.671607 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 02:55:48.673267 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 02:55:48.675126 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 02:55:48.676988 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 02:55:48.678810 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 02:55:48.680583 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 02:55:48.682740 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 02:55:48.684686 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 02:55:48.686457 systemd[1]: Stopped target swap.target - Swaps.
May 27 02:55:48.687923 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 02:55:48.688058 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 02:55:48.690334 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 02:55:48.692273 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 02:55:48.694133 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 02:55:48.698714 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 02:55:48.699984 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 02:55:48.700105 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 02:55:48.702788 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 02:55:48.702905 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 02:55:48.704796 systemd[1]: Stopped target paths.target - Path Units.
May 27 02:55:48.706335 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 02:55:48.711702 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 02:55:48.712897 systemd[1]: Stopped target slices.target - Slice Units.
May 27 02:55:48.714837 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 02:55:48.716320 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 02:55:48.716403 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 02:55:48.717836 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 02:55:48.717911 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 02:55:48.719370 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 02:55:48.719478 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 02:55:48.721184 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 02:55:48.721281 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 02:55:48.723487 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 02:55:48.725937 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 02:55:48.727061 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 02:55:48.727177 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 02:55:48.728843 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 02:55:48.728939 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 02:55:48.733887 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 02:55:48.743709 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 02:55:48.749981 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 02:55:48.755500 ignition[1041]: INFO : Ignition 2.21.0
May 27 02:55:48.755500 ignition[1041]: INFO : Stage: umount
May 27 02:55:48.757410 ignition[1041]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 02:55:48.757410 ignition[1041]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 02:55:48.757410 ignition[1041]: INFO : umount: umount passed
May 27 02:55:48.757410 ignition[1041]: INFO : Ignition finished successfully
May 27 02:55:48.760019 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 02:55:48.762227 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 02:55:48.764438 systemd[1]: Stopped target network.target - Network.
May 27 02:55:48.765795 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 02:55:48.765858 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 02:55:48.767371 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 02:55:48.767419 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 02:55:48.770722 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 02:55:48.770777 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 02:55:48.772462 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 02:55:48.772503 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 02:55:48.774267 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 02:55:48.775867 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 02:55:48.783586 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 02:55:48.783750 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 02:55:48.787610 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 02:55:48.787878 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 02:55:48.787916 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 02:55:48.791341 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 02:55:48.792461 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 02:55:48.792558 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 02:55:48.795487 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 02:55:48.795608 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 02:55:48.796781 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 02:55:48.796815 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 02:55:48.799409 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 02:55:48.800448 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 02:55:48.800520 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 02:55:48.802510 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 02:55:48.802554 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 02:55:48.805322 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 27 02:55:48.805362 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 27 02:55:48.807293 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 02:55:48.810379 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 27 02:55:48.818270 systemd[1]: systemd-udevd.service: Deactivated successfully. May 27 02:55:48.819848 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 02:55:48.821367 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 27 02:55:48.821407 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 27 02:55:48.823106 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 27 02:55:48.823135 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 27 02:55:48.824905 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 27 02:55:48.824960 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 27 02:55:48.827988 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 27 02:55:48.828042 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 27 02:55:48.830770 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 27 02:55:48.830821 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 27 02:55:48.835001 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 27 02:55:48.836200 systemd[1]: systemd-network-generator.service: Deactivated successfully. May 27 02:55:48.836255 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. May 27 02:55:48.839167 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
May 27 02:55:48.839208 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 02:55:48.842214 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. May 27 02:55:48.842256 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 02:55:48.845527 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 27 02:55:48.845568 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 27 02:55:48.847820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 27 02:55:48.847861 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 27 02:55:48.851488 systemd[1]: sysroot-boot.service: Deactivated successfully. May 27 02:55:48.851562 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 27 02:55:48.852908 systemd[1]: network-cleanup.service: Deactivated successfully. May 27 02:55:48.852987 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 27 02:55:48.854747 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 27 02:55:48.854816 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 27 02:55:48.857476 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 27 02:55:48.859059 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 27 02:55:48.859119 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 27 02:55:48.861342 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 27 02:55:48.869882 systemd[1]: Switching root. May 27 02:55:48.895746 systemd-journald[242]: Journal stopped May 27 02:55:49.679571 systemd-journald[242]: Received SIGTERM from PID 1 (systemd). 
May 27 02:55:49.679623 kernel: SELinux: policy capability network_peer_controls=1 May 27 02:55:49.679634 kernel: SELinux: policy capability open_perms=1 May 27 02:55:49.679648 kernel: SELinux: policy capability extended_socket_class=1 May 27 02:55:49.679661 kernel: SELinux: policy capability always_check_network=0 May 27 02:55:49.679705 kernel: SELinux: policy capability cgroup_seclabel=1 May 27 02:55:49.679721 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 27 02:55:49.679730 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 27 02:55:49.679743 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 27 02:55:49.679752 kernel: SELinux: policy capability userspace_initial_context=0 May 27 02:55:49.679762 kernel: audit: type=1403 audit(1748314549.077:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 27 02:55:49.679776 systemd[1]: Successfully loaded SELinux policy in 50.760ms. May 27 02:55:49.679792 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.596ms. May 27 02:55:49.679805 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 27 02:55:49.679818 systemd[1]: Detected virtualization kvm. May 27 02:55:49.679827 systemd[1]: Detected architecture arm64. May 27 02:55:49.679837 systemd[1]: Detected first boot. May 27 02:55:49.679847 systemd[1]: Initializing machine ID from VM UUID. May 27 02:55:49.679857 zram_generator::config[1090]: No configuration found. May 27 02:55:49.679868 kernel: NET: Registered PF_VSOCK protocol family May 27 02:55:49.679877 systemd[1]: Populated /etc with preset unit settings. May 27 02:55:49.679887 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. 
May 27 02:55:49.679898 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 27 02:55:49.679908 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 27 02:55:49.679918 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 27 02:55:49.679928 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 27 02:55:49.679938 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 27 02:55:49.679948 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 27 02:55:49.679958 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 27 02:55:49.679968 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 27 02:55:49.679980 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 27 02:55:49.679991 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 27 02:55:49.680000 systemd[1]: Created slice user.slice - User and Session Slice. May 27 02:55:49.680010 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 27 02:55:49.680021 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 27 02:55:49.680031 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 27 02:55:49.680041 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 27 02:55:49.680058 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 27 02:55:49.680070 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 27 02:55:49.680081 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... 
May 27 02:55:49.680102 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 27 02:55:49.680111 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 27 02:55:49.680125 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 27 02:55:49.680135 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 27 02:55:49.680145 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 27 02:55:49.680155 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 27 02:55:49.680166 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 27 02:55:49.680177 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 27 02:55:49.680188 systemd[1]: Reached target slices.target - Slice Units. May 27 02:55:49.680198 systemd[1]: Reached target swap.target - Swaps. May 27 02:55:49.680208 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 27 02:55:49.680217 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 27 02:55:49.680227 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 27 02:55:49.680237 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 27 02:55:49.680247 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 27 02:55:49.680257 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 27 02:55:49.680267 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 27 02:55:49.680279 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 27 02:55:49.680289 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 27 02:55:49.680298 systemd[1]: Mounting media.mount - External Media Directory... 
May 27 02:55:49.680308 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 27 02:55:49.680318 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 27 02:55:49.680328 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 27 02:55:49.680339 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 27 02:55:49.680349 systemd[1]: Reached target machines.target - Containers. May 27 02:55:49.680360 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 27 02:55:49.680370 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 02:55:49.680381 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 27 02:55:49.680391 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 27 02:55:49.680402 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 02:55:49.680412 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 02:55:49.680423 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 02:55:49.680433 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 27 02:55:49.680444 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 02:55:49.680454 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 27 02:55:49.680464 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 27 02:55:49.680474 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 27 02:55:49.680484 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. 
May 27 02:55:49.680494 systemd[1]: Stopped systemd-fsck-usr.service. May 27 02:55:49.680504 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 02:55:49.680514 systemd[1]: Starting systemd-journald.service - Journal Service... May 27 02:55:49.680524 kernel: loop: module loaded May 27 02:55:49.680534 kernel: ACPI: bus type drm_connector registered May 27 02:55:49.680544 kernel: fuse: init (API version 7.41) May 27 02:55:49.680553 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 27 02:55:49.680563 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 27 02:55:49.680573 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 27 02:55:49.680584 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 27 02:55:49.680593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 27 02:55:49.680605 systemd[1]: verity-setup.service: Deactivated successfully. May 27 02:55:49.680614 systemd[1]: Stopped verity-setup.service. May 27 02:55:49.680626 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 27 02:55:49.680636 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 27 02:55:49.680646 systemd[1]: Mounted media.mount - External Media Directory. May 27 02:55:49.680683 systemd-journald[1162]: Collecting audit messages is disabled. May 27 02:55:49.680712 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 27 02:55:49.680724 systemd-journald[1162]: Journal started May 27 02:55:49.680745 systemd-journald[1162]: Runtime Journal (/run/log/journal/df31c976001a442fae655e25bfcff839) is 6M, max 48.5M, 42.4M free. 
May 27 02:55:49.446763 systemd[1]: Queued start job for default target multi-user.target. May 27 02:55:49.471590 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. May 27 02:55:49.471982 systemd[1]: systemd-journald.service: Deactivated successfully. May 27 02:55:49.682699 systemd[1]: Started systemd-journald.service - Journal Service. May 27 02:55:49.683302 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 27 02:55:49.684512 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 27 02:55:49.687713 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 27 02:55:49.689106 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 27 02:55:49.690560 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 27 02:55:49.690760 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. May 27 02:55:49.692168 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 02:55:49.692329 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 02:55:49.695088 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 02:55:49.695259 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 02:55:49.696512 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 02:55:49.696689 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 02:55:49.698132 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 27 02:55:49.698296 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 27 02:55:49.699551 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 02:55:49.699829 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 02:55:49.701265 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 27 02:55:49.702599 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 27 02:55:49.704209 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 27 02:55:49.705644 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 27 02:55:49.717251 systemd[1]: Reached target network-pre.target - Preparation for Network. May 27 02:55:49.719652 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 27 02:55:49.721626 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 27 02:55:49.722758 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 27 02:55:49.722794 systemd[1]: Reached target local-fs.target - Local File Systems. May 27 02:55:49.724585 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 27 02:55:49.735466 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 27 02:55:49.736612 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 02:55:49.737592 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 27 02:55:49.739556 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 27 02:55:49.740753 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 02:55:49.744789 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 27 02:55:49.745861 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
May 27 02:55:49.746836 systemd-journald[1162]: Time spent on flushing to /var/log/journal/df31c976001a442fae655e25bfcff839 is 15.810ms for 885 entries. May 27 02:55:49.746836 systemd-journald[1162]: System Journal (/var/log/journal/df31c976001a442fae655e25bfcff839) is 8M, max 195.6M, 187.6M free. May 27 02:55:49.766467 systemd-journald[1162]: Received client request to flush runtime journal. May 27 02:55:49.746922 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 27 02:55:49.750225 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 27 02:55:49.752915 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 27 02:55:49.755543 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 27 02:55:49.759207 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 27 02:55:49.761224 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 27 02:55:49.762804 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 27 02:55:49.766042 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 27 02:55:49.772688 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 27 02:55:49.775192 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 27 02:55:49.776783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 27 02:55:49.779717 kernel: loop0: detected capacity change from 0 to 138376 May 27 02:55:49.796700 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 27 02:55:49.799906 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. May 27 02:55:49.799922 systemd-tmpfiles[1207]: ACLs are not supported, ignoring. 
May 27 02:55:49.803760 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 27 02:55:49.807643 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 27 02:55:49.810261 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 27 02:55:49.815693 kernel: loop1: detected capacity change from 0 to 107312 May 27 02:55:49.831725 kernel: loop2: detected capacity change from 0 to 203944 May 27 02:55:49.869136 kernel: loop3: detected capacity change from 0 to 138376 May 27 02:55:49.867720 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 27 02:55:49.870991 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 27 02:55:49.878742 kernel: loop4: detected capacity change from 0 to 107312 May 27 02:55:49.885657 kernel: loop5: detected capacity change from 0 to 203944 May 27 02:55:49.889490 (sd-merge)[1227]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. May 27 02:55:49.889871 (sd-merge)[1227]: Merged extensions into '/usr'. May 27 02:55:49.894965 systemd[1]: Reload requested from client PID 1206 ('systemd-sysext') (unit systemd-sysext.service)... May 27 02:55:49.894983 systemd[1]: Reloading... May 27 02:55:49.907207 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. May 27 02:55:49.907226 systemd-tmpfiles[1229]: ACLs are not supported, ignoring. May 27 02:55:49.959201 zram_generator::config[1257]: No configuration found. May 27 02:55:50.041099 ldconfig[1201]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 27 02:55:50.045570 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 27 02:55:50.107509 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 27 02:55:50.107821 systemd[1]: Reloading finished in 212 ms. May 27 02:55:50.135393 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 27 02:55:50.138703 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 27 02:55:50.140218 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 27 02:55:50.160154 systemd[1]: Starting ensure-sysext.service... May 27 02:55:50.161966 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 27 02:55:50.177440 systemd[1]: Reload requested from client PID 1292 ('systemctl') (unit ensure-sysext.service)... May 27 02:55:50.177458 systemd[1]: Reloading... May 27 02:55:50.179884 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. May 27 02:55:50.180200 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. May 27 02:55:50.180461 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 27 02:55:50.180697 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 27 02:55:50.181424 systemd-tmpfiles[1293]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 27 02:55:50.181814 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. May 27 02:55:50.181858 systemd-tmpfiles[1293]: ACLs are not supported, ignoring. May 27 02:55:50.184703 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. May 27 02:55:50.184712 systemd-tmpfiles[1293]: Skipping /boot May 27 02:55:50.193350 systemd-tmpfiles[1293]: Detected autofs mount point /boot during canonicalization of boot. 
May 27 02:55:50.193364 systemd-tmpfiles[1293]: Skipping /boot May 27 02:55:50.224713 zram_generator::config[1321]: No configuration found. May 27 02:55:50.286842 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:55:50.347916 systemd[1]: Reloading finished in 170 ms. May 27 02:55:50.370163 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 27 02:55:50.376708 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 27 02:55:50.384863 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 02:55:50.387318 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 27 02:55:50.398485 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 27 02:55:50.401794 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 27 02:55:50.407821 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 27 02:55:50.411789 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 27 02:55:50.429974 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 27 02:55:50.434348 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 02:55:50.435336 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 02:55:50.437279 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 02:55:50.439568 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 02:55:50.440873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
May 27 02:55:50.440983 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 02:55:50.443714 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 27 02:55:50.446083 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 27 02:55:50.451090 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 02:55:50.451252 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 02:55:50.453031 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 02:55:50.453202 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 02:55:50.454765 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 02:55:50.454898 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 27 02:55:50.462281 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 02:55:50.462479 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 02:55:50.464610 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 27 02:55:50.466837 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 02:55:50.468118 systemd-udevd[1361]: Using default interface naming scheme 'v255'. May 27 02:55:50.470177 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
May 27 02:55:50.474892 augenrules[1391]: No rules May 27 02:55:50.475715 systemd[1]: audit-rules.service: Deactivated successfully. May 27 02:55:50.480922 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 02:55:50.483292 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 27 02:55:50.486084 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 02:55:50.493915 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 27 02:55:50.495858 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 02:55:50.498480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 02:55:50.499835 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 02:55:50.499945 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 02:55:50.500024 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 02:55:50.502711 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 27 02:55:50.505392 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 27 02:55:50.509300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 02:55:50.509802 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 02:55:50.513181 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 02:55:50.513321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 27 02:55:50.525523 systemd[1]: Finished ensure-sysext.service. May 27 02:55:50.538313 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 27 02:55:50.539844 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 27 02:55:50.542069 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 27 02:55:50.543802 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 27 02:55:50.547576 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 27 02:55:50.548642 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 27 02:55:50.548697 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 27 02:55:50.550948 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 27 02:55:50.555993 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... May 27 02:55:50.557823 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 27 02:55:50.558395 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 27 02:55:50.559731 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 27 02:55:50.561175 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 27 02:55:50.561317 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 27 02:55:50.563702 systemd[1]: modprobe@loop.service: Deactivated successfully. May 27 02:55:50.563844 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
May 27 02:55:50.576985 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 27 02:55:50.577029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 27 02:55:50.581866 systemd[1]: modprobe@drm.service: Deactivated successfully. May 27 02:55:50.583744 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 27 02:55:50.603432 augenrules[1436]: /sbin/augenrules: No change May 27 02:55:50.619001 augenrules[1470]: No rules May 27 02:55:50.621066 systemd[1]: audit-rules.service: Deactivated successfully. May 27 02:55:50.621782 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 27 02:55:50.630554 systemd-resolved[1360]: Positive Trust Anchors: May 27 02:55:50.630856 systemd-resolved[1360]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 27 02:55:50.630941 systemd-resolved[1360]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 27 02:55:50.636906 systemd-resolved[1360]: Defaulting to hostname 'linux'. May 27 02:55:50.640097 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 27 02:55:50.643357 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. May 27 02:55:50.644760 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 27 02:55:50.649837 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 27 02:55:50.657850 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. May 27 02:55:50.680750 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. May 27 02:55:50.683248 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 27 02:55:50.685220 systemd[1]: Reached target sysinit.target - System Initialization. May 27 02:55:50.686663 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 27 02:55:50.688245 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 27 02:55:50.689644 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 27 02:55:50.690904 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 27 02:55:50.690941 systemd[1]: Reached target paths.target - Path Units. May 27 02:55:50.691184 systemd-networkd[1440]: lo: Link UP May 27 02:55:50.691199 systemd-networkd[1440]: lo: Gained carrier May 27 02:55:50.691779 systemd[1]: Reached target time-set.target - System Time Set. May 27 02:55:50.692085 systemd-networkd[1440]: Enumeration completed May 27 02:55:50.692533 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 02:55:50.692537 systemd-networkd[1440]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 27 02:55:50.693020 systemd-networkd[1440]: eth0: Link UP May 27 02:55:50.693074 systemd[1]: Started logrotate.timer - Daily rotation of log files. 
May 27 02:55:50.693158 systemd-networkd[1440]: eth0: Gained carrier May 27 02:55:50.693173 systemd-networkd[1440]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 27 02:55:50.694629 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 27 02:55:50.695949 systemd[1]: Reached target timers.target - Timer Units. May 27 02:55:50.698181 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 27 02:55:50.700503 systemd[1]: Starting docker.socket - Docker Socket for the API... May 27 02:55:50.705014 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 27 02:55:50.706804 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 27 02:55:50.707731 systemd-networkd[1440]: eth0: DHCPv4 address 10.0.0.92/16, gateway 10.0.0.1 acquired from 10.0.0.1 May 27 02:55:50.708033 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 27 02:55:50.711630 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 27 02:55:50.714501 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 27 02:55:50.716101 systemd-timesyncd[1441]: Network configuration changed, trying to establish connection. May 27 02:55:50.716656 systemd[1]: Started systemd-networkd.service - Network Configuration. May 27 02:55:50.716775 systemd-timesyncd[1441]: Contacted time server 10.0.0.1:123 (10.0.0.1). May 27 02:55:50.716830 systemd-timesyncd[1441]: Initial clock synchronization to Tue 2025-05-27 02:55:50.752157 UTC. May 27 02:55:50.718253 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 27 02:55:50.720285 systemd[1]: Reached target network.target - Network. May 27 02:55:50.721194 systemd[1]: Reached target sockets.target - Socket Units. 
May 27 02:55:50.722188 systemd[1]: Reached target basic.target - Basic System. May 27 02:55:50.723208 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 27 02:55:50.723249 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 27 02:55:50.724279 systemd[1]: Starting containerd.service - containerd container runtime... May 27 02:55:50.728917 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 27 02:55:50.730856 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 27 02:55:50.735175 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... May 27 02:55:50.753686 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 27 02:55:50.756787 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 27 02:55:50.758433 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 27 02:55:50.761072 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 27 02:55:50.766809 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 27 02:55:50.769582 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 27 02:55:50.773920 systemd[1]: Starting systemd-logind.service - User Login Management... May 27 02:55:50.778809 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 27 02:55:50.786778 jq[1496]: false May 27 02:55:50.784874 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 27 02:55:50.786908 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). 
May 27 02:55:50.787386 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 27 02:55:50.790964 systemd[1]: Starting update-engine.service - Update Engine... May 27 02:55:50.795856 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 27 02:55:50.800791 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 27 02:55:50.802401 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 27 02:55:50.803717 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 27 02:55:50.803991 systemd[1]: motdgen.service: Deactivated successfully. May 27 02:55:50.804170 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 27 02:55:50.805077 extend-filesystems[1497]: Found loop3 May 27 02:55:50.806715 extend-filesystems[1497]: Found loop4 May 27 02:55:50.806715 extend-filesystems[1497]: Found loop5 May 27 02:55:50.806715 extend-filesystems[1497]: Found vda May 27 02:55:50.806715 extend-filesystems[1497]: Found vda1 May 27 02:55:50.806715 extend-filesystems[1497]: Found vda2 May 27 02:55:50.806715 extend-filesystems[1497]: Found vda3 May 27 02:55:50.806715 extend-filesystems[1497]: Found usr May 27 02:55:50.806715 extend-filesystems[1497]: Found vda4 May 27 02:55:50.806715 extend-filesystems[1497]: Found vda6 May 27 02:55:50.806715 extend-filesystems[1497]: Found vda7 May 27 02:55:50.806715 extend-filesystems[1497]: Found vda9 May 27 02:55:50.806715 extend-filesystems[1497]: Checking size of /dev/vda9 May 27 02:55:50.818860 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 27 02:55:50.820757 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 27 02:55:50.827390 (ntainerd)[1524]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 27 02:55:50.838098 jq[1519]: true May 27 02:55:50.859725 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 27 02:55:50.866163 extend-filesystems[1497]: Resized partition /dev/vda9 May 27 02:55:50.868026 jq[1533]: true May 27 02:55:50.866247 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 27 02:55:50.872762 extend-filesystems[1540]: resize2fs 1.47.2 (1-Jan-2025) May 27 02:55:50.876411 tar[1522]: linux-arm64/helm May 27 02:55:50.891696 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks May 27 02:55:50.901153 dbus-daemon[1493]: [system] SELinux support is enabled May 27 02:55:50.901336 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 27 02:55:50.907186 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 27 02:55:50.907218 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 27 02:55:50.909260 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 27 02:55:50.909285 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
May 27 02:55:50.911845 update_engine[1515]: I20250527 02:55:50.911159 1515 main.cc:92] Flatcar Update Engine starting May 27 02:55:50.915026 kernel: EXT4-fs (vda9): resized filesystem to 1864699 May 27 02:55:50.926053 update_engine[1515]: I20250527 02:55:50.921916 1515 update_check_scheduler.cc:74] Next update check in 6m26s May 27 02:55:50.922208 systemd[1]: Started update-engine.service - Update Engine. May 27 02:55:50.926437 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required May 27 02:55:50.926437 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1 May 27 02:55:50.926437 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. May 27 02:55:50.936126 extend-filesystems[1497]: Resized filesystem in /dev/vda9 May 27 02:55:50.926802 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 27 02:55:50.933654 systemd[1]: extend-filesystems.service: Deactivated successfully. May 27 02:55:50.935748 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 27 02:55:50.942738 bash[1557]: Updated "/home/core/.ssh/authorized_keys" May 27 02:55:50.944681 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 27 02:55:50.947235 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 27 02:55:50.977370 systemd-logind[1508]: Watching system buttons on /dev/input/event0 (Power Button) May 27 02:55:50.979139 systemd-logind[1508]: New seat seat0. May 27 02:55:50.979915 systemd[1]: Started systemd-logind.service - User Login Management. May 27 02:55:51.008751 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
May 27 02:55:51.038670 locksmithd[1558]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 27 02:55:51.139593 containerd[1524]: time="2025-05-27T02:55:51Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 27 02:55:51.142308 containerd[1524]: time="2025-05-27T02:55:51.142278522Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 May 27 02:55:51.150722 containerd[1524]: time="2025-05-27T02:55:51.150667526Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.611µs" May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.150797230Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.150820976Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.150966537Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.150982915Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.151006662Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.151052512Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.151062564Z" 
level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.151300187Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.151315283Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.151326015Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.151333704Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 27 02:55:51.151715 containerd[1524]: time="2025-05-27T02:55:51.151404342Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 27 02:55:51.151946 containerd[1524]: time="2025-05-27T02:55:51.151643647Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 02:55:51.151998 containerd[1524]: time="2025-05-27T02:55:51.151979058Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 27 02:55:51.152040 containerd[1524]: time="2025-05-27T02:55:51.152029233Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 27 02:55:51.152115 containerd[1524]: time="2025-05-27T02:55:51.152100072Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 27 02:55:51.152639 containerd[1524]: time="2025-05-27T02:55:51.152588813Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 27 02:55:51.152726 containerd[1524]: time="2025-05-27T02:55:51.152705542Z" level=info msg="metadata content store policy set" policy=shared May 27 02:55:51.155922 containerd[1524]: time="2025-05-27T02:55:51.155888345Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 27 02:55:51.155988 containerd[1524]: time="2025-05-27T02:55:51.155941083Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 27 02:55:51.155988 containerd[1524]: time="2025-05-27T02:55:51.155957181Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 27 02:55:51.155988 containerd[1524]: time="2025-05-27T02:55:51.155979085Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 27 02:55:51.156044 containerd[1524]: time="2025-05-27T02:55:51.155991939Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 27 02:55:51.156044 containerd[1524]: time="2025-05-27T02:55:51.156002471Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 27 02:55:51.156044 containerd[1524]: time="2025-05-27T02:55:51.156013283Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 27 02:55:51.156044 containerd[1524]: time="2025-05-27T02:55:51.156025697Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 27 02:55:51.156044 containerd[1524]: time="2025-05-27T02:55:51.156037510Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 27 02:55:51.156166 containerd[1524]: time="2025-05-27T02:55:51.156047641Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 27 02:55:51.156166 containerd[1524]: time="2025-05-27T02:55:51.156057612Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 27 02:55:51.156166 containerd[1524]: time="2025-05-27T02:55:51.156070026Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 27 02:55:51.156239 containerd[1524]: time="2025-05-27T02:55:51.156182550Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 27 02:55:51.156239 containerd[1524]: time="2025-05-27T02:55:51.156202532Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 27 02:55:51.156239 containerd[1524]: time="2025-05-27T02:55:51.156226759Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 27 02:55:51.156288 containerd[1524]: time="2025-05-27T02:55:51.156239934Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 May 27 02:55:51.156288 containerd[1524]: time="2025-05-27T02:55:51.156251667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 27 02:55:51.156288 containerd[1524]: time="2025-05-27T02:55:51.156261678Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 27 02:55:51.156288 containerd[1524]: time="2025-05-27T02:55:51.156272610Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 27 02:55:51.156288 containerd[1524]: time="2025-05-27T02:55:51.156283262Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 27 
02:55:51.156378 containerd[1524]: time="2025-05-27T02:55:51.156294354Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 27 02:55:51.156378 containerd[1524]: time="2025-05-27T02:55:51.156306007Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 27 02:55:51.156378 containerd[1524]: time="2025-05-27T02:55:51.156316218Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 27 02:55:51.156621 containerd[1524]: time="2025-05-27T02:55:51.156603056Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 27 02:55:51.156663 containerd[1524]: time="2025-05-27T02:55:51.156623999Z" level=info msg="Start snapshots syncer" May 27 02:55:51.156663 containerd[1524]: time="2025-05-27T02:55:51.156648866Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 27 02:55:51.156919 containerd[1524]: time="2025-05-27T02:55:51.156878240Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 27 02:55:51.157026 containerd[1524]: time="2025-05-27T02:55:51.156933942Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 27 02:55:51.157026 containerd[1524]: time="2025-05-27T02:55:51.156998533Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 27 02:55:51.157189 containerd[1524]: time="2025-05-27T02:55:51.157149741Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 27 02:55:51.157189 containerd[1524]: time="2025-05-27T02:55:51.157182096Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 27 02:55:51.157245 containerd[1524]: time="2025-05-27T02:55:51.157202079Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 27 02:55:51.157245 containerd[1524]: time="2025-05-27T02:55:51.157212650Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 27 02:55:51.157245 containerd[1524]: time="2025-05-27T02:55:51.157232232Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 27 02:55:51.157245 containerd[1524]: time="2025-05-27T02:55:51.157242563Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 27 02:55:51.157319 containerd[1524]: time="2025-05-27T02:55:51.157258381Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 27 02:55:51.157319 containerd[1524]: time="2025-05-27T02:55:51.157283969Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 27 02:55:51.157319 containerd[1524]: time="2025-05-27T02:55:51.157295462Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 27 02:55:51.157319 containerd[1524]: time="2025-05-27T02:55:51.157306474Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 27 02:55:51.157383 containerd[1524]: time="2025-05-27T02:55:51.157349282Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 02:55:51.157383 containerd[1524]: time="2025-05-27T02:55:51.157362416Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 27 02:55:51.157383 containerd[1524]: time="2025-05-27T02:55:51.157370945Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 02:55:51.157383 containerd[1524]: time="2025-05-27T02:55:51.157380156Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 27 02:55:51.157468 containerd[1524]: time="2025-05-27T02:55:51.157388605Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 27 02:55:51.157468 containerd[1524]: time="2025-05-27T02:55:51.157462767Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 27 02:55:51.157502 containerd[1524]: time="2025-05-27T02:55:51.157477143Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 27 02:55:51.157618 containerd[1524]: time="2025-05-27T02:55:51.157605125Z" level=info msg="runtime interface created" May 27 02:55:51.157618 containerd[1524]: time="2025-05-27T02:55:51.157613454Z" level=info msg="created NRI interface" May 27 02:55:51.157655 containerd[1524]: time="2025-05-27T02:55:51.157623345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 27 02:55:51.157655 containerd[1524]: time="2025-05-27T02:55:51.157634717Z" level=info msg="Connect containerd service" May 27 02:55:51.157699 containerd[1524]: time="2025-05-27T02:55:51.157659985Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 27 02:55:51.159380 
containerd[1524]: time="2025-05-27T02:55:51.159330674Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 02:55:51.260302 sshd_keygen[1518]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 27 02:55:51.266156 containerd[1524]: time="2025-05-27T02:55:51.266119384Z" level=info msg="Start subscribing containerd event" May 27 02:55:51.266520 containerd[1524]: time="2025-05-27T02:55:51.266276118Z" level=info msg="Start recovering state" May 27 02:55:51.266520 containerd[1524]: time="2025-05-27T02:55:51.266379512Z" level=info msg="Start event monitor" May 27 02:55:51.266520 containerd[1524]: time="2025-05-27T02:55:51.266400856Z" level=info msg="Start cni network conf syncer for default" May 27 02:55:51.266520 containerd[1524]: time="2025-05-27T02:55:51.266409586Z" level=info msg="Start streaming server" May 27 02:55:51.266520 containerd[1524]: time="2025-05-27T02:55:51.266418355Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 27 02:55:51.266520 containerd[1524]: time="2025-05-27T02:55:51.266417554Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 27 02:55:51.266520 containerd[1524]: time="2025-05-27T02:55:51.266470093Z" level=info msg=serving... address=/run/containerd/containerd.sock May 27 02:55:51.266520 containerd[1524]: time="2025-05-27T02:55:51.266427005Z" level=info msg="runtime interface starting up..." May 27 02:55:51.267845 containerd[1524]: time="2025-05-27T02:55:51.266532081Z" level=info msg="starting plugins..." 
May 27 02:55:51.267845 containerd[1524]: time="2025-05-27T02:55:51.266552344Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 27 02:55:51.267845 containerd[1524]: time="2025-05-27T02:55:51.266705033Z" level=info msg="containerd successfully booted in 0.127463s" May 27 02:55:51.266789 systemd[1]: Started containerd.service - containerd container runtime. May 27 02:55:51.281005 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 27 02:55:51.283838 systemd[1]: Starting issuegen.service - Generate /run/issue... May 27 02:55:51.299729 tar[1522]: linux-arm64/LICENSE May 27 02:55:51.299825 tar[1522]: linux-arm64/README.md May 27 02:55:51.312467 systemd[1]: issuegen.service: Deactivated successfully. May 27 02:55:51.312658 systemd[1]: Finished issuegen.service - Generate /run/issue. May 27 02:55:51.314987 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. May 27 02:55:51.318218 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 27 02:55:51.351605 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 27 02:55:51.354636 systemd[1]: Started getty@tty1.service - Getty on tty1. May 27 02:55:51.356942 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. May 27 02:55:51.358458 systemd[1]: Reached target getty.target - Login Prompts. May 27 02:55:52.668924 systemd-networkd[1440]: eth0: Gained IPv6LL May 27 02:55:52.671296 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 27 02:55:52.673252 systemd[1]: Reached target network-online.target - Network is Online. May 27 02:55:52.677354 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... May 27 02:55:52.679901 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:55:52.705638 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
May 27 02:55:52.719886 systemd[1]: coreos-metadata.service: Deactivated successfully. May 27 02:55:52.722727 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. May 27 02:55:52.725410 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 27 02:55:52.733147 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 27 02:55:53.248036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:55:53.249570 systemd[1]: Reached target multi-user.target - Multi-User System. May 27 02:55:53.251553 (kubelet)[1637]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 27 02:55:53.251566 systemd[1]: Startup finished in 2.141s (kernel) + 5.440s (initrd) + 4.231s (userspace) = 11.813s. May 27 02:55:53.704001 kubelet[1637]: E0527 02:55:53.703930 1637 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 27 02:55:53.706569 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 27 02:55:53.706760 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 27 02:55:53.707081 systemd[1]: kubelet.service: Consumed 863ms CPU time, 255.9M memory peak. May 27 02:55:56.926203 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 27 02:55:56.927298 systemd[1]: Started sshd@0-10.0.0.92:22-10.0.0.1:38722.service - OpenSSH per-connection server daemon (10.0.0.1:38722). 
May 27 02:55:57.005399 sshd[1650]: Accepted publickey for core from 10.0.0.1 port 38722 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:55:57.006971 sshd-session[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:55:57.012918 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 27 02:55:57.013856 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 27 02:55:57.019175 systemd-logind[1508]: New session 1 of user core. May 27 02:55:57.035525 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 27 02:55:57.039333 systemd[1]: Starting user@500.service - User Manager for UID 500... May 27 02:55:57.054641 (systemd)[1654]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 27 02:55:57.056655 systemd-logind[1508]: New session c1 of user core. May 27 02:55:57.167049 systemd[1654]: Queued start job for default target default.target. May 27 02:55:57.185754 systemd[1654]: Created slice app.slice - User Application Slice. May 27 02:55:57.185783 systemd[1654]: Reached target paths.target - Paths. May 27 02:55:57.185823 systemd[1654]: Reached target timers.target - Timers. May 27 02:55:57.187071 systemd[1654]: Starting dbus.socket - D-Bus User Message Bus Socket... May 27 02:55:57.196212 systemd[1654]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 27 02:55:57.196276 systemd[1654]: Reached target sockets.target - Sockets. May 27 02:55:57.196317 systemd[1654]: Reached target basic.target - Basic System. May 27 02:55:57.196366 systemd[1654]: Reached target default.target - Main User Target. May 27 02:55:57.196395 systemd[1654]: Startup finished in 134ms. May 27 02:55:57.196615 systemd[1]: Started user@500.service - User Manager for UID 500. May 27 02:55:57.197964 systemd[1]: Started session-1.scope - Session 1 of User core. 
May 27 02:55:57.254246 systemd[1]: Started sshd@1-10.0.0.92:22-10.0.0.1:38738.service - OpenSSH per-connection server daemon (10.0.0.1:38738).
May 27 02:55:57.304193 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 38738 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8
May 27 02:55:57.305422 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:55:57.309750 systemd-logind[1508]: New session 2 of user core.
May 27 02:55:57.322887 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 02:55:57.373335 sshd[1667]: Connection closed by 10.0.0.1 port 38738
May 27 02:55:57.373627 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
May 27 02:55:57.383698 systemd[1]: sshd@1-10.0.0.92:22-10.0.0.1:38738.service: Deactivated successfully.
May 27 02:55:57.385796 systemd[1]: session-2.scope: Deactivated successfully.
May 27 02:55:57.386376 systemd-logind[1508]: Session 2 logged out. Waiting for processes to exit.
May 27 02:55:57.388391 systemd[1]: Started sshd@2-10.0.0.92:22-10.0.0.1:38750.service - OpenSSH per-connection server daemon (10.0.0.1:38750).
May 27 02:55:57.389141 systemd-logind[1508]: Removed session 2.
May 27 02:55:57.445712 sshd[1673]: Accepted publickey for core from 10.0.0.1 port 38750 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8
May 27 02:55:57.447392 sshd-session[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:55:57.451747 systemd-logind[1508]: New session 3 of user core.
May 27 02:55:57.462860 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 02:55:57.510476 sshd[1676]: Connection closed by 10.0.0.1 port 38750
May 27 02:55:57.510336 sshd-session[1673]: pam_unix(sshd:session): session closed for user core
May 27 02:55:57.520620 systemd[1]: sshd@2-10.0.0.92:22-10.0.0.1:38750.service: Deactivated successfully.
May 27 02:55:57.522900 systemd[1]: session-3.scope: Deactivated successfully.
May 27 02:55:57.524516 systemd-logind[1508]: Session 3 logged out. Waiting for processes to exit.
May 27 02:55:57.525810 systemd[1]: Started sshd@3-10.0.0.92:22-10.0.0.1:38752.service - OpenSSH per-connection server daemon (10.0.0.1:38752).
May 27 02:55:57.526807 systemd-logind[1508]: Removed session 3.
May 27 02:55:57.563642 sshd[1682]: Accepted publickey for core from 10.0.0.1 port 38752 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8
May 27 02:55:57.564816 sshd-session[1682]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:55:57.568558 systemd-logind[1508]: New session 4 of user core.
May 27 02:55:57.578848 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 02:55:57.640956 sshd[1684]: Connection closed by 10.0.0.1 port 38752
May 27 02:55:57.641994 sshd-session[1682]: pam_unix(sshd:session): session closed for user core
May 27 02:55:57.656850 systemd[1]: sshd@3-10.0.0.92:22-10.0.0.1:38752.service: Deactivated successfully.
May 27 02:55:57.660824 systemd[1]: session-4.scope: Deactivated successfully.
May 27 02:55:57.665618 systemd-logind[1508]: Session 4 logged out. Waiting for processes to exit.
May 27 02:55:57.668548 systemd[1]: Started sshd@4-10.0.0.92:22-10.0.0.1:38758.service - OpenSSH per-connection server daemon (10.0.0.1:38758).
May 27 02:55:57.670499 systemd-logind[1508]: Removed session 4.
May 27 02:55:57.727224 sshd[1690]: Accepted publickey for core from 10.0.0.1 port 38758 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8
May 27 02:55:57.731451 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:55:57.738527 systemd-logind[1508]: New session 5 of user core.
May 27 02:55:57.752078 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 02:55:57.812609 sudo[1693]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 02:55:57.815590 sudo[1693]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:55:57.830418 sudo[1693]: pam_unix(sudo:session): session closed for user root
May 27 02:55:57.833449 sshd[1692]: Connection closed by 10.0.0.1 port 38758
May 27 02:55:57.834570 sshd-session[1690]: pam_unix(sshd:session): session closed for user core
May 27 02:55:57.846927 systemd[1]: sshd@4-10.0.0.92:22-10.0.0.1:38758.service: Deactivated successfully.
May 27 02:55:57.849001 systemd[1]: session-5.scope: Deactivated successfully.
May 27 02:55:57.849732 systemd-logind[1508]: Session 5 logged out. Waiting for processes to exit.
May 27 02:55:57.851998 systemd[1]: Started sshd@5-10.0.0.92:22-10.0.0.1:38774.service - OpenSSH per-connection server daemon (10.0.0.1:38774).
May 27 02:55:57.853550 systemd-logind[1508]: Removed session 5.
May 27 02:55:57.908879 sshd[1699]: Accepted publickey for core from 10.0.0.1 port 38774 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8
May 27 02:55:57.910278 sshd-session[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:55:57.915154 systemd-logind[1508]: New session 6 of user core.
May 27 02:55:57.930861 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 02:55:57.983203 sudo[1703]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 02:55:57.983887 sudo[1703]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:55:58.057593 sudo[1703]: pam_unix(sudo:session): session closed for user root
May 27 02:55:58.063098 sudo[1702]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 02:55:58.063370 sudo[1702]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:55:58.076136 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 02:55:58.119258 augenrules[1725]: No rules
May 27 02:55:58.120364 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 02:55:58.120574 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 02:55:58.121821 sudo[1702]: pam_unix(sudo:session): session closed for user root
May 27 02:55:58.122948 sshd[1701]: Connection closed by 10.0.0.1 port 38774
May 27 02:55:58.123314 sshd-session[1699]: pam_unix(sshd:session): session closed for user core
May 27 02:55:58.133420 systemd[1]: sshd@5-10.0.0.92:22-10.0.0.1:38774.service: Deactivated successfully.
May 27 02:55:58.135942 systemd[1]: session-6.scope: Deactivated successfully.
May 27 02:55:58.138658 systemd-logind[1508]: Session 6 logged out. Waiting for processes to exit.
May 27 02:55:58.143981 systemd[1]: Started sshd@6-10.0.0.92:22-10.0.0.1:38790.service - OpenSSH per-connection server daemon (10.0.0.1:38790).
May 27 02:55:58.145069 systemd-logind[1508]: Removed session 6.
May 27 02:55:58.206878 sshd[1734]: Accepted publickey for core from 10.0.0.1 port 38790 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8
May 27 02:55:58.207706 sshd-session[1734]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:55:58.212738 systemd-logind[1508]: New session 7 of user core.
May 27 02:55:58.220878 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 02:55:58.271339 sudo[1737]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 02:55:58.271606 sudo[1737]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 02:55:58.655733 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 02:55:58.666044 (dockerd)[1758]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 02:55:58.962855 dockerd[1758]: time="2025-05-27T02:55:58.962728041Z" level=info msg="Starting up"
May 27 02:55:58.964243 dockerd[1758]: time="2025-05-27T02:55:58.964214959Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 02:55:59.006897 dockerd[1758]: time="2025-05-27T02:55:59.006850689Z" level=info msg="Loading containers: start."
May 27 02:55:59.014691 kernel: Initializing XFRM netlink socket
May 27 02:55:59.209080 systemd-networkd[1440]: docker0: Link UP
May 27 02:55:59.212018 dockerd[1758]: time="2025-05-27T02:55:59.211977758Z" level=info msg="Loading containers: done."
May 27 02:55:59.227803 dockerd[1758]: time="2025-05-27T02:55:59.227547368Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 02:55:59.227803 dockerd[1758]: time="2025-05-27T02:55:59.227635243Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 02:55:59.227803 dockerd[1758]: time="2025-05-27T02:55:59.227761792Z" level=info msg="Initializing buildkit"
May 27 02:55:59.248050 dockerd[1758]: time="2025-05-27T02:55:59.248012262Z" level=info msg="Completed buildkit initialization"
May 27 02:55:59.253927 dockerd[1758]: time="2025-05-27T02:55:59.253843509Z" level=info msg="Daemon has completed initialization"
May 27 02:55:59.254352 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 02:55:59.255277 dockerd[1758]: time="2025-05-27T02:55:59.254466604Z" level=info msg="API listen on /run/docker.sock"
May 27 02:55:59.836025 containerd[1524]: time="2025-05-27T02:55:59.835974202Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\""
May 27 02:55:59.986612 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3172047042-merged.mount: Deactivated successfully.
May 27 02:56:00.453835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2362077263.mount: Deactivated successfully.
May 27 02:56:01.296436 containerd[1524]: time="2025-05-27T02:56:01.296345836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:01.297240 containerd[1524]: time="2025-05-27T02:56:01.297203447Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.9: active requests=0, bytes read=25651976"
May 27 02:56:01.297940 containerd[1524]: time="2025-05-27T02:56:01.297877991Z" level=info msg="ImageCreate event name:\"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:01.300502 containerd[1524]: time="2025-05-27T02:56:01.300441457Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:01.301468 containerd[1524]: time="2025-05-27T02:56:01.301412720Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.9\" with image id \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20\", size \"25648774\" in 1.465389676s"
May 27 02:56:01.301468 containerd[1524]: time="2025-05-27T02:56:01.301449029Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:90d52158b7646075e7e560c1bd670904ba3f4f4c8c199106bf96ee0944663d61\""
May 27 02:56:01.304699 containerd[1524]: time="2025-05-27T02:56:01.304654933Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\""
May 27 02:56:02.254041 containerd[1524]: time="2025-05-27T02:56:02.253997792Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:02.254470 containerd[1524]: time="2025-05-27T02:56:02.254436574Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.9: active requests=0, bytes read=22459530"
May 27 02:56:02.255291 containerd[1524]: time="2025-05-27T02:56:02.255239121Z" level=info msg="ImageCreate event name:\"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:02.258148 containerd[1524]: time="2025-05-27T02:56:02.258080139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:02.259075 containerd[1524]: time="2025-05-27T02:56:02.259038367Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.9\" with image id \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248\", size \"23995294\" in 954.226148ms"
May 27 02:56:02.259123 containerd[1524]: time="2025-05-27T02:56:02.259081921Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:2d03fe540daca1d9520c403342787715eab3b05fb6773ea41153572716c82dba\""
May 27 02:56:02.259798 containerd[1524]: time="2025-05-27T02:56:02.259776343Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\""
May 27 02:56:03.198482 containerd[1524]: time="2025-05-27T02:56:03.198416629Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:03.198964 containerd[1524]: time="2025-05-27T02:56:03.198917167Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.9: active requests=0, bytes read=17125281"
May 27 02:56:03.200618 containerd[1524]: time="2025-05-27T02:56:03.200569577Z" level=info msg="ImageCreate event name:\"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:03.202562 containerd[1524]: time="2025-05-27T02:56:03.202531541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:03.204538 containerd[1524]: time="2025-05-27T02:56:03.204498909Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.9\" with image id \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1\", size \"18661063\" in 944.693263ms"
May 27 02:56:03.204568 containerd[1524]: time="2025-05-27T02:56:03.204539860Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:b333fec06af219faaf48f1784baa0b7274945b2e5be5bd2fca2681f7d1baff5f\""
May 27 02:56:03.205893 containerd[1524]: time="2025-05-27T02:56:03.205850412Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\""
May 27 02:56:03.852742 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 02:56:03.854146 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:56:04.001391 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
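Each containerd "Pulled image" record above carries a byte size and a Go-style wall-clock duration, so pull throughput can be estimated directly from the log text. A small sketch (the `entry` string is abridged from the kube-apiserver pull recorded earlier; only the `size` and `in …s` fields matter here):

```python
import re

# Abridged "Pulled image" record for kube-apiserver: size in bytes, duration in seconds.
entry = 'Pulled image "registry.k8s.io/kube-apiserver:v1.31.9" ... size "25648774" in 1.465389676s'

size_bytes = int(re.search(r'size "(\d+)"', entry).group(1))
seconds = float(re.search(r'in ([0-9.]+)s', entry).group(1))

# Decimal megabytes per second for this pull (roughly 17.5 MB/s here).
mb_per_s = size_bytes / seconds / 1e6
```

Note that sub-second durations in these logs use an `ms` suffix (e.g. `944.693263ms`), so a robust parser would need to handle both units.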
May 27 02:56:04.008965 (kubelet)[2044]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 02:56:04.055955 kubelet[2044]: E0527 02:56:04.055898 2044 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 02:56:04.059505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 02:56:04.059631 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 02:56:04.059975 systemd[1]: kubelet.service: Consumed 158ms CPU time, 110.7M memory peak.
May 27 02:56:04.124267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4287147206.mount: Deactivated successfully.
May 27 02:56:04.557639 containerd[1524]: time="2025-05-27T02:56:04.557250720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:04.558283 containerd[1524]: time="2025-05-27T02:56:04.557854683Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=26871377"
May 27 02:56:04.558927 containerd[1524]: time="2025-05-27T02:56:04.558893764Z" level=info msg="ImageCreate event name:\"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:04.560478 containerd[1524]: time="2025-05-27T02:56:04.560447943Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:04.561656 containerd[1524]: time="2025-05-27T02:56:04.561628768Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"26870394\" in 1.355743971s"
May 27 02:56:04.561739 containerd[1524]: time="2025-05-27T02:56:04.561659591Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:cbfba5e6542fe387b24d9e73bf5a054a6b07b95af1392268fd82b6f449ef1c27\""
May 27 02:56:04.562071 containerd[1524]: time="2025-05-27T02:56:04.562049557Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
May 27 02:56:05.038759 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2090268934.mount: Deactivated successfully.
May 27 02:56:05.642089 containerd[1524]: time="2025-05-27T02:56:05.642044135Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:05.643024 containerd[1524]: time="2025-05-27T02:56:05.642998853Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624"
May 27 02:56:05.643719 containerd[1524]: time="2025-05-27T02:56:05.643658561Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:05.646727 containerd[1524]: time="2025-05-27T02:56:05.646690193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:05.648411 containerd[1524]: time="2025-05-27T02:56:05.648280202Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.086201944s"
May 27 02:56:05.648411 containerd[1524]: time="2025-05-27T02:56:05.648311865Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\""
May 27 02:56:05.648897 containerd[1524]: time="2025-05-27T02:56:05.648872743Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 02:56:06.262570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount407482592.mount: Deactivated successfully.
May 27 02:56:06.266337 containerd[1524]: time="2025-05-27T02:56:06.266288964Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 02:56:06.267349 containerd[1524]: time="2025-05-27T02:56:06.267314029Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 27 02:56:06.268365 containerd[1524]: time="2025-05-27T02:56:06.268322082Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 02:56:06.270235 containerd[1524]: time="2025-05-27T02:56:06.270168512Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 02:56:06.272699 containerd[1524]: time="2025-05-27T02:56:06.272637610Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 623.730243ms"
May 27 02:56:06.272699 containerd[1524]: time="2025-05-27T02:56:06.272702174Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 27 02:56:06.273854 containerd[1524]: time="2025-05-27T02:56:06.273749134Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
May 27 02:56:06.736726 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1948238629.mount: Deactivated successfully.
May 27 02:56:08.007343 containerd[1524]: time="2025-05-27T02:56:08.007084017Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:08.008283 containerd[1524]: time="2025-05-27T02:56:08.008254813Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467"
May 27 02:56:08.008991 containerd[1524]: time="2025-05-27T02:56:08.008950222Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:08.012122 containerd[1524]: time="2025-05-27T02:56:08.012091530Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:08.016400 containerd[1524]: time="2025-05-27T02:56:08.016365808Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.742585053s"
May 27 02:56:08.016481 containerd[1524]: time="2025-05-27T02:56:08.016405954Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\""
May 27 02:56:11.693166 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:56:11.693297 systemd[1]: kubelet.service: Consumed 158ms CPU time, 110.7M memory peak.
May 27 02:56:11.695104 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:56:11.714757 systemd[1]: Reload requested from client PID 2196 ('systemctl') (unit session-7.scope)...
May 27 02:56:11.714767 systemd[1]: Reloading...
May 27 02:56:11.778714 zram_generator::config[2242]: No configuration found.
May 27 02:56:11.854955 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 02:56:11.938696 systemd[1]: Reloading finished in 223 ms.
May 27 02:56:11.995096 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 02:56:11.995315 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 02:56:11.997772 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:56:11.997841 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95M memory peak.
May 27 02:56:11.999284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 02:56:12.109934 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 02:56:12.113775 (kubelet)[2284]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 02:56:12.148537 kubelet[2284]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 02:56:12.148537 kubelet[2284]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
May 27 02:56:12.148537 kubelet[2284]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 02:56:12.148871 kubelet[2284]: I0527 02:56:12.148574 2284 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 02:56:13.273225 kubelet[2284]: I0527 02:56:13.272857 2284 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
May 27 02:56:13.273225 kubelet[2284]: I0527 02:56:13.273156 2284 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 02:56:13.273691 kubelet[2284]: I0527 02:56:13.273579 2284 server.go:934] "Client rotation is on, will bootstrap in background"
May 27 02:56:13.307977 kubelet[2284]: E0527 02:56:13.307911 2284 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.92:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
May 27 02:56:13.308549 kubelet[2284]: I0527 02:56:13.308523 2284 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 02:56:13.315767 kubelet[2284]: I0527 02:56:13.315730 2284 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 02:56:13.319140 kubelet[2284]: I0527 02:56:13.319114 2284 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 02:56:13.319422 kubelet[2284]: I0527 02:56:13.319393 2284 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
May 27 02:56:13.319526 kubelet[2284]: I0527 02:56:13.319497 2284 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 02:56:13.319675 kubelet[2284]: I0527 02:56:13.319520 2284 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 02:56:13.319764 kubelet[2284]: I0527 02:56:13.319752 2284 topology_manager.go:138] "Creating topology manager with none policy"
May 27 02:56:13.319764 kubelet[2284]: I0527 02:56:13.319761 2284 container_manager_linux.go:300] "Creating device plugin manager"
May 27 02:56:13.319958 kubelet[2284]: I0527 02:56:13.319932 2284 state_mem.go:36] "Initialized new in-memory state store"
May 27 02:56:13.322078 kubelet[2284]: I0527 02:56:13.321788 2284 kubelet.go:408] "Attempting to sync node with API server"
May 27 02:56:13.322078 kubelet[2284]: I0527 02:56:13.321816 2284 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 02:56:13.322078 kubelet[2284]: I0527 02:56:13.321841 2284 kubelet.go:314] "Adding apiserver pod source"
May 27 02:56:13.322078 kubelet[2284]: I0527 02:56:13.321918 2284 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 02:56:13.327572 kubelet[2284]: W0527 02:56:13.327398 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
May 27 02:56:13.327572 kubelet[2284]: E0527 02:56:13.327541 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.92:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
May 27 02:56:13.327743 kubelet[2284]: W0527 02:56:13.327695 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused
May 27 02:56:13.327790 kubelet[2284]: E0527 02:56:13.327754 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.92:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError"
May 27 02:56:13.329377 kubelet[2284]: I0527 02:56:13.328317 2284 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 02:56:13.329377 kubelet[2284]: I0527 02:56:13.329218 2284 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
May 27 02:56:13.329686 kubelet[2284]: W0527 02:56:13.329652 2284 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 02:56:13.331142 kubelet[2284]: I0527 02:56:13.331120 2284 server.go:1274] "Started kubelet" May 27 02:56:13.332115 kubelet[2284]: I0527 02:56:13.331480 2284 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 02:56:13.332447 kubelet[2284]: I0527 02:56:13.332426 2284 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 02:56:13.333092 kubelet[2284]: I0527 02:56:13.333059 2284 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 02:56:13.336364 kubelet[2284]: I0527 02:56:13.334262 2284 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 02:56:13.336364 kubelet[2284]: I0527 02:56:13.334337 2284 server.go:449] "Adding debug handlers to kubelet server" May 27 02:56:13.336364 kubelet[2284]: I0527 02:56:13.334348 2284 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 02:56:13.336364 kubelet[2284]: I0527 02:56:13.335443 2284 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 02:56:13.336364 kubelet[2284]: I0527 02:56:13.335524 2284 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 02:56:13.336364 kubelet[2284]: I0527 02:56:13.335572 2284 reconciler.go:26] "Reconciler: start to sync state" May 27 02:56:13.336364 kubelet[2284]: W0527 02:56:13.336008 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused May 27 02:56:13.336364 kubelet[2284]: E0527 02:56:13.336037 2284 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 02:56:13.336364 kubelet[2284]: E0527 02:56:13.336050 2284 reflector.go:158] "Unhandled 
Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.92:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" May 27 02:56:13.336364 kubelet[2284]: E0527 02:56:13.336100 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="200ms" May 27 02:56:13.337160 kubelet[2284]: I0527 02:56:13.336994 2284 factory.go:221] Registration of the systemd container factory successfully May 27 02:56:13.337160 kubelet[2284]: I0527 02:56:13.337067 2284 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 02:56:13.338083 kubelet[2284]: E0527 02:56:13.337622 2284 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 02:56:13.338083 kubelet[2284]: I0527 02:56:13.337770 2284 factory.go:221] Registration of the containerd container factory successfully May 27 02:56:13.338794 kubelet[2284]: E0527 02:56:13.337654 2284 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.92:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.92:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184342d76b09e383 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 02:56:13.331096451 +0000 UTC m=+1.214292924,LastTimestamp:2025-05-27 02:56:13.331096451 +0000 UTC m=+1.214292924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 02:56:13.348026 kubelet[2284]: I0527 02:56:13.347986 2284 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 02:56:13.349054 kubelet[2284]: I0527 02:56:13.349029 2284 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 02:56:13.349054 kubelet[2284]: I0527 02:56:13.349050 2284 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 02:56:13.349134 kubelet[2284]: I0527 02:56:13.349065 2284 kubelet.go:2321] "Starting kubelet main sync loop" May 27 02:56:13.349134 kubelet[2284]: E0527 02:56:13.349103 2284 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 02:56:13.350921 kubelet[2284]: I0527 02:56:13.350904 2284 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 02:56:13.351032 kubelet[2284]: I0527 02:56:13.351021 2284 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 02:56:13.351089 kubelet[2284]: I0527 02:56:13.351080 2284 state_mem.go:36] "Initialized new in-memory state store" May 27 02:56:13.353071 kubelet[2284]: W0527 02:56:13.353028 2284 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.92:6443: connect: connection refused May 27 02:56:13.353296 kubelet[2284]: E0527 02:56:13.353122 2284 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.92:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.92:6443: connect: connection refused" logger="UnhandledError" May 27 02:56:13.353296 kubelet[2284]: I0527 02:56:13.353184 2284 policy_none.go:49] "None policy: Start" May 27 02:56:13.353895 kubelet[2284]: I0527 02:56:13.353878 2284 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 02:56:13.354055 kubelet[2284]: I0527 02:56:13.354028 2284 state_mem.go:35] "Initializing new in-memory state store" May 27 02:56:13.361548 systemd[1]: Created slice kubepods.slice - libcontainer container 
kubepods.slice. May 27 02:56:13.382402 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 02:56:13.385269 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 02:56:13.395696 kubelet[2284]: I0527 02:56:13.395425 2284 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 02:56:13.395696 kubelet[2284]: I0527 02:56:13.395632 2284 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 02:56:13.395696 kubelet[2284]: I0527 02:56:13.395646 2284 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 02:56:13.396392 kubelet[2284]: I0527 02:56:13.396369 2284 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 02:56:13.398084 kubelet[2284]: E0527 02:56:13.398049 2284 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 02:56:13.457089 systemd[1]: Created slice kubepods-burstable-pod88b330940328b35e1668526b6cb97047.slice - libcontainer container kubepods-burstable-pod88b330940328b35e1668526b6cb97047.slice. May 27 02:56:13.473985 systemd[1]: Created slice kubepods-burstable-poda3416600bab1918b24583836301c9096.slice - libcontainer container kubepods-burstable-poda3416600bab1918b24583836301c9096.slice. May 27 02:56:13.477662 systemd[1]: Created slice kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice - libcontainer container kubepods-burstable-podea5884ad3481d5218ff4c8f11f2934d5.slice. 
May 27 02:56:13.498201 kubelet[2284]: I0527 02:56:13.498162 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 27 02:56:13.498705 kubelet[2284]: E0527 02:56:13.498653 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" May 27 02:56:13.537715 kubelet[2284]: I0527 02:56:13.536933 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88b330940328b35e1668526b6cb97047-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"88b330940328b35e1668526b6cb97047\") " pod="kube-system/kube-apiserver-localhost" May 27 02:56:13.537715 kubelet[2284]: I0527 02:56:13.536971 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:13.537715 kubelet[2284]: I0527 02:56:13.536990 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:13.537715 kubelet[2284]: I0527 02:56:13.537006 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:13.537715 
kubelet[2284]: I0527 02:56:13.537066 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88b330940328b35e1668526b6cb97047-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"88b330940328b35e1668526b6cb97047\") " pod="kube-system/kube-apiserver-localhost" May 27 02:56:13.537876 kubelet[2284]: I0527 02:56:13.537104 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:13.537876 kubelet[2284]: I0527 02:56:13.537125 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:13.537876 kubelet[2284]: I0527 02:56:13.537145 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 27 02:56:13.537876 kubelet[2284]: I0527 02:56:13.537164 2284 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88b330940328b35e1668526b6cb97047-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"88b330940328b35e1668526b6cb97047\") " pod="kube-system/kube-apiserver-localhost" May 27 
02:56:13.537876 kubelet[2284]: E0527 02:56:13.537546 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="400ms" May 27 02:56:13.700941 kubelet[2284]: I0527 02:56:13.700909 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 27 02:56:13.701332 kubelet[2284]: E0527 02:56:13.701282 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" May 27 02:56:13.771886 kubelet[2284]: E0527 02:56:13.771856 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:13.775429 containerd[1524]: time="2025-05-27T02:56:13.775394469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:88b330940328b35e1668526b6cb97047,Namespace:kube-system,Attempt:0,}" May 27 02:56:13.776828 kubelet[2284]: E0527 02:56:13.776785 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:13.777363 containerd[1524]: time="2025-05-27T02:56:13.777225317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,}" May 27 02:56:13.781705 kubelet[2284]: E0527 02:56:13.781664 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:13.782192 containerd[1524]: time="2025-05-27T02:56:13.782162876Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,}" May 27 02:56:13.801795 containerd[1524]: time="2025-05-27T02:56:13.801639203Z" level=info msg="connecting to shim 236ff3bc7e432cfe653696c0fd3a02a76462b1c553d4f70c436b058f6df917ba" address="unix:///run/containerd/s/d3269605e0cd6da1bc85b7b2a093143e932682d16ea651534307e0e17bb5435c" namespace=k8s.io protocol=ttrpc version=3 May 27 02:56:13.803459 containerd[1524]: time="2025-05-27T02:56:13.803403695Z" level=info msg="connecting to shim 97671f9c2453ba16f6d0ca8f1b234f8436e5bde159ce6ef4a7911fb090992409" address="unix:///run/containerd/s/7011dbd00e13a15997a455be8d1356a2fc4f4d2af120b7085ac3fdfe0550c5c3" namespace=k8s.io protocol=ttrpc version=3 May 27 02:56:13.807179 containerd[1524]: time="2025-05-27T02:56:13.807141513Z" level=info msg="connecting to shim 329438c551b4629c501b349e7187d219eea8e1ffedd865cf89d598b538f98f6f" address="unix:///run/containerd/s/9943832f497983cb550b7f79286a8cbd10e76078fd37e1c789a8c1d87aded027" namespace=k8s.io protocol=ttrpc version=3 May 27 02:56:13.832841 systemd[1]: Started cri-containerd-236ff3bc7e432cfe653696c0fd3a02a76462b1c553d4f70c436b058f6df917ba.scope - libcontainer container 236ff3bc7e432cfe653696c0fd3a02a76462b1c553d4f70c436b058f6df917ba. May 27 02:56:13.836526 systemd[1]: Started cri-containerd-329438c551b4629c501b349e7187d219eea8e1ffedd865cf89d598b538f98f6f.scope - libcontainer container 329438c551b4629c501b349e7187d219eea8e1ffedd865cf89d598b538f98f6f. May 27 02:56:13.837588 systemd[1]: Started cri-containerd-97671f9c2453ba16f6d0ca8f1b234f8436e5bde159ce6ef4a7911fb090992409.scope - libcontainer container 97671f9c2453ba16f6d0ca8f1b234f8436e5bde159ce6ef4a7911fb090992409. 
May 27 02:56:13.876357 containerd[1524]: time="2025-05-27T02:56:13.874515379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:ea5884ad3481d5218ff4c8f11f2934d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"329438c551b4629c501b349e7187d219eea8e1ffedd865cf89d598b538f98f6f\"" May 27 02:56:13.877453 kubelet[2284]: E0527 02:56:13.877168 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:13.880523 containerd[1524]: time="2025-05-27T02:56:13.880486428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:88b330940328b35e1668526b6cb97047,Namespace:kube-system,Attempt:0,} returns sandbox id \"236ff3bc7e432cfe653696c0fd3a02a76462b1c553d4f70c436b058f6df917ba\"" May 27 02:56:13.881378 kubelet[2284]: E0527 02:56:13.881357 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:13.881794 containerd[1524]: time="2025-05-27T02:56:13.881761290Z" level=info msg="CreateContainer within sandbox \"329438c551b4629c501b349e7187d219eea8e1ffedd865cf89d598b538f98f6f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 02:56:13.882800 containerd[1524]: time="2025-05-27T02:56:13.882758920Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:a3416600bab1918b24583836301c9096,Namespace:kube-system,Attempt:0,} returns sandbox id \"97671f9c2453ba16f6d0ca8f1b234f8436e5bde159ce6ef4a7911fb090992409\"" May 27 02:56:13.883350 containerd[1524]: time="2025-05-27T02:56:13.883311024Z" level=info msg="CreateContainer within sandbox \"236ff3bc7e432cfe653696c0fd3a02a76462b1c553d4f70c436b058f6df917ba\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 02:56:13.883492 
kubelet[2284]: E0527 02:56:13.883472 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:13.885736 containerd[1524]: time="2025-05-27T02:56:13.885694576Z" level=info msg="CreateContainer within sandbox \"97671f9c2453ba16f6d0ca8f1b234f8436e5bde159ce6ef4a7911fb090992409\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 02:56:13.892601 containerd[1524]: time="2025-05-27T02:56:13.892549832Z" level=info msg="Container 70aa0f1d07a7fae3a319faea88f5dfbdac4188666e523a032182d3de1138ed4f: CDI devices from CRI Config.CDIDevices: []" May 27 02:56:13.895770 containerd[1524]: time="2025-05-27T02:56:13.895737508Z" level=info msg="Container 56827a6ff9e2512a1160a97b4eed12ee1d1cc274d68d57760e662476b0dab585: CDI devices from CRI Config.CDIDevices: []" May 27 02:56:13.896386 containerd[1524]: time="2025-05-27T02:56:13.896354087Z" level=info msg="Container a512560e1f422fdddc73213ca2fdf16628c5130e99010bc469ef591c48a1dc69: CDI devices from CRI Config.CDIDevices: []" May 27 02:56:13.902053 containerd[1524]: time="2025-05-27T02:56:13.902017366Z" level=info msg="CreateContainer within sandbox \"329438c551b4629c501b349e7187d219eea8e1ffedd865cf89d598b538f98f6f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"70aa0f1d07a7fae3a319faea88f5dfbdac4188666e523a032182d3de1138ed4f\"" May 27 02:56:13.902918 containerd[1524]: time="2025-05-27T02:56:13.902881962Z" level=info msg="StartContainer for \"70aa0f1d07a7fae3a319faea88f5dfbdac4188666e523a032182d3de1138ed4f\"" May 27 02:56:13.904208 containerd[1524]: time="2025-05-27T02:56:13.904035117Z" level=info msg="connecting to shim 70aa0f1d07a7fae3a319faea88f5dfbdac4188666e523a032182d3de1138ed4f" address="unix:///run/containerd/s/9943832f497983cb550b7f79286a8cbd10e76078fd37e1c789a8c1d87aded027" protocol=ttrpc version=3 May 27 02:56:13.904208 containerd[1524]: 
time="2025-05-27T02:56:13.904181878Z" level=info msg="CreateContainer within sandbox \"236ff3bc7e432cfe653696c0fd3a02a76462b1c553d4f70c436b058f6df917ba\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"56827a6ff9e2512a1160a97b4eed12ee1d1cc274d68d57760e662476b0dab585\"" May 27 02:56:13.904806 containerd[1524]: time="2025-05-27T02:56:13.904755714Z" level=info msg="StartContainer for \"56827a6ff9e2512a1160a97b4eed12ee1d1cc274d68d57760e662476b0dab585\"" May 27 02:56:13.906471 containerd[1524]: time="2025-05-27T02:56:13.905738896Z" level=info msg="connecting to shim 56827a6ff9e2512a1160a97b4eed12ee1d1cc274d68d57760e662476b0dab585" address="unix:///run/containerd/s/d3269605e0cd6da1bc85b7b2a093143e932682d16ea651534307e0e17bb5435c" protocol=ttrpc version=3 May 27 02:56:13.906471 containerd[1524]: time="2025-05-27T02:56:13.906381090Z" level=info msg="CreateContainer within sandbox \"97671f9c2453ba16f6d0ca8f1b234f8436e5bde159ce6ef4a7911fb090992409\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a512560e1f422fdddc73213ca2fdf16628c5130e99010bc469ef591c48a1dc69\"" May 27 02:56:13.906839 containerd[1524]: time="2025-05-27T02:56:13.906814568Z" level=info msg="StartContainer for \"a512560e1f422fdddc73213ca2fdf16628c5130e99010bc469ef591c48a1dc69\"" May 27 02:56:13.908639 containerd[1524]: time="2025-05-27T02:56:13.908612318Z" level=info msg="connecting to shim a512560e1f422fdddc73213ca2fdf16628c5130e99010bc469ef591c48a1dc69" address="unix:///run/containerd/s/7011dbd00e13a15997a455be8d1356a2fc4f4d2af120b7085ac3fdfe0550c5c3" protocol=ttrpc version=3 May 27 02:56:13.924864 systemd[1]: Started cri-containerd-70aa0f1d07a7fae3a319faea88f5dfbdac4188666e523a032182d3de1138ed4f.scope - libcontainer container 70aa0f1d07a7fae3a319faea88f5dfbdac4188666e523a032182d3de1138ed4f. 
May 27 02:56:13.928882 systemd[1]: Started cri-containerd-56827a6ff9e2512a1160a97b4eed12ee1d1cc274d68d57760e662476b0dab585.scope - libcontainer container 56827a6ff9e2512a1160a97b4eed12ee1d1cc274d68d57760e662476b0dab585. May 27 02:56:13.929953 systemd[1]: Started cri-containerd-a512560e1f422fdddc73213ca2fdf16628c5130e99010bc469ef591c48a1dc69.scope - libcontainer container a512560e1f422fdddc73213ca2fdf16628c5130e99010bc469ef591c48a1dc69. May 27 02:56:13.938451 kubelet[2284]: E0527 02:56:13.938392 2284 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.92:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.92:6443: connect: connection refused" interval="800ms" May 27 02:56:13.976944 containerd[1524]: time="2025-05-27T02:56:13.976890603Z" level=info msg="StartContainer for \"56827a6ff9e2512a1160a97b4eed12ee1d1cc274d68d57760e662476b0dab585\" returns successfully" May 27 02:56:13.977581 containerd[1524]: time="2025-05-27T02:56:13.977149825Z" level=info msg="StartContainer for \"70aa0f1d07a7fae3a319faea88f5dfbdac4188666e523a032182d3de1138ed4f\" returns successfully" May 27 02:56:13.992250 containerd[1524]: time="2025-05-27T02:56:13.992166016Z" level=info msg="StartContainer for \"a512560e1f422fdddc73213ca2fdf16628c5130e99010bc469ef591c48a1dc69\" returns successfully" May 27 02:56:14.106962 kubelet[2284]: I0527 02:56:14.104137 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 27 02:56:14.106962 kubelet[2284]: E0527 02:56:14.104844 2284 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.92:6443/api/v1/nodes\": dial tcp 10.0.0.92:6443: connect: connection refused" node="localhost" May 27 02:56:14.359268 kubelet[2284]: E0527 02:56:14.359105 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 
8.8.8.8" May 27 02:56:14.360066 kubelet[2284]: E0527 02:56:14.359717 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:14.363059 kubelet[2284]: E0527 02:56:14.363041 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:14.907076 kubelet[2284]: I0527 02:56:14.907046 2284 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 27 02:56:15.365307 kubelet[2284]: E0527 02:56:15.365113 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:16.059551 kubelet[2284]: E0527 02:56:16.059509 2284 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 02:56:16.127705 kubelet[2284]: I0527 02:56:16.126000 2284 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 27 02:56:16.171433 kubelet[2284]: E0527 02:56:16.171312 2284 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.184342d76b09e383 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 02:56:13.331096451 +0000 UTC m=+1.214292924,LastTimestamp:2025-05-27 02:56:13.331096451 +0000 UTC m=+1.214292924,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" May 27 
02:56:16.246854 kubelet[2284]: E0527 02:56:16.246800 2284 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 02:56:16.246995 kubelet[2284]: E0527 02:56:16.246974 2284 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:16.330242 kubelet[2284]: I0527 02:56:16.329374 2284 apiserver.go:52] "Watching apiserver" May 27 02:56:16.336099 kubelet[2284]: I0527 02:56:16.336075 2284 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 02:56:18.068423 systemd[1]: Reload requested from client PID 2558 ('systemctl') (unit session-7.scope)... May 27 02:56:18.068442 systemd[1]: Reloading... May 27 02:56:18.134711 zram_generator::config[2603]: No configuration found. May 27 02:56:18.202117 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 02:56:18.303218 systemd[1]: Reloading finished in 234 ms. May 27 02:56:18.324041 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:56:18.344556 systemd[1]: kubelet.service: Deactivated successfully. May 27 02:56:18.346758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 02:56:18.346821 systemd[1]: kubelet.service: Consumed 1.620s CPU time, 128.6M memory peak. May 27 02:56:18.348503 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 02:56:18.483367 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 27 02:56:18.487798 (kubelet)[2643]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 02:56:18.526315 kubelet[2643]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:56:18.526315 kubelet[2643]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 27 02:56:18.526315 kubelet[2643]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 02:56:18.526658 kubelet[2643]: I0527 02:56:18.526343 2643 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 02:56:18.532701 kubelet[2643]: I0527 02:56:18.532614 2643 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 27 02:56:18.532701 kubelet[2643]: I0527 02:56:18.532643 2643 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 02:56:18.533093 kubelet[2643]: I0527 02:56:18.533031 2643 server.go:934] "Client rotation is on, will bootstrap in background" May 27 02:56:18.534986 kubelet[2643]: I0527 02:56:18.534958 2643 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
May 27 02:56:18.538608 kubelet[2643]: I0527 02:56:18.538525 2643 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 02:56:18.542218 kubelet[2643]: I0527 02:56:18.542200 2643 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" May 27 02:56:18.544550 kubelet[2643]: I0527 02:56:18.544522 2643 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 02:56:18.544782 kubelet[2643]: I0527 02:56:18.544768 2643 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 27 02:56:18.544893 kubelet[2643]: I0527 02:56:18.544868 2643 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 02:56:18.545053 kubelet[2643]: I0527 02:56:18.544896 2643 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagef
s.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 27 02:56:18.545127 kubelet[2643]: I0527 02:56:18.545061 2643 topology_manager.go:138] "Creating topology manager with none policy" May 27 02:56:18.545127 kubelet[2643]: I0527 02:56:18.545070 2643 container_manager_linux.go:300] "Creating device plugin manager" May 27 02:56:18.545127 kubelet[2643]: I0527 02:56:18.545102 2643 state_mem.go:36] "Initialized new in-memory state store" May 27 02:56:18.545212 kubelet[2643]: I0527 02:56:18.545200 2643 kubelet.go:408] "Attempting to sync node with API server" May 27 02:56:18.545237 kubelet[2643]: I0527 02:56:18.545216 2643 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 02:56:18.545237 kubelet[2643]: I0527 02:56:18.545234 2643 kubelet.go:314] "Adding apiserver pod source" May 27 02:56:18.545273 kubelet[2643]: I0527 02:56:18.545246 2643 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 02:56:18.546111 kubelet[2643]: I0527 02:56:18.546042 2643 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 02:56:18.547197 kubelet[2643]: I0527 02:56:18.547180 2643 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 27 02:56:18.548061 kubelet[2643]: I0527 02:56:18.547779 2643 server.go:1274] "Started kubelet" May 27 02:56:18.548061 kubelet[2643]: I0527 02:56:18.547987 2643 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 27 
02:56:18.548786 kubelet[2643]: I0527 02:56:18.548062 2643 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 02:56:18.550619 kubelet[2643]: I0527 02:56:18.550584 2643 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 02:56:18.551771 kubelet[2643]: I0527 02:56:18.551747 2643 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 02:56:18.555683 kubelet[2643]: I0527 02:56:18.555254 2643 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 02:56:18.557939 kubelet[2643]: I0527 02:56:18.557917 2643 volume_manager.go:289] "Starting Kubelet Volume Manager" May 27 02:56:18.558228 kubelet[2643]: E0527 02:56:18.558199 2643 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 02:56:18.558696 kubelet[2643]: I0527 02:56:18.558605 2643 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 27 02:56:18.559457 kubelet[2643]: I0527 02:56:18.559409 2643 server.go:449] "Adding debug handlers to kubelet server" May 27 02:56:18.561722 kubelet[2643]: I0527 02:56:18.561696 2643 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 27 02:56:18.563015 kubelet[2643]: I0527 02:56:18.562735 2643 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 27 02:56:18.563015 kubelet[2643]: I0527 02:56:18.562759 2643 status_manager.go:217] "Starting to sync pod status with apiserver" May 27 02:56:18.563015 kubelet[2643]: I0527 02:56:18.562778 2643 kubelet.go:2321] "Starting kubelet main sync loop" May 27 02:56:18.563015 kubelet[2643]: E0527 02:56:18.562819 2643 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 02:56:18.566214 kubelet[2643]: E0527 02:56:18.566183 2643 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 02:56:18.568463 kubelet[2643]: I0527 02:56:18.568429 2643 reconciler.go:26] "Reconciler: start to sync state" May 27 02:56:18.568697 kubelet[2643]: I0527 02:56:18.568662 2643 factory.go:221] Registration of the systemd container factory successfully May 27 02:56:18.568931 kubelet[2643]: I0527 02:56:18.568904 2643 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 02:56:18.573806 kubelet[2643]: I0527 02:56:18.573780 2643 factory.go:221] Registration of the containerd container factory successfully May 27 02:56:18.601181 kubelet[2643]: I0527 02:56:18.601090 2643 cpu_manager.go:214] "Starting CPU manager" policy="none" May 27 02:56:18.601181 kubelet[2643]: I0527 02:56:18.601108 2643 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 27 02:56:18.601181 kubelet[2643]: I0527 02:56:18.601127 2643 state_mem.go:36] "Initialized new in-memory state store" May 27 02:56:18.601310 kubelet[2643]: I0527 02:56:18.601276 2643 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 02:56:18.601310 kubelet[2643]: I0527 02:56:18.601286 2643 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 02:56:18.601355 
kubelet[2643]: I0527 02:56:18.601311 2643 policy_none.go:49] "None policy: Start" May 27 02:56:18.602665 kubelet[2643]: I0527 02:56:18.602628 2643 memory_manager.go:170] "Starting memorymanager" policy="None" May 27 02:56:18.602665 kubelet[2643]: I0527 02:56:18.602663 2643 state_mem.go:35] "Initializing new in-memory state store" May 27 02:56:18.602835 kubelet[2643]: I0527 02:56:18.602810 2643 state_mem.go:75] "Updated machine memory state" May 27 02:56:18.606984 kubelet[2643]: I0527 02:56:18.606554 2643 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 27 02:56:18.606984 kubelet[2643]: I0527 02:56:18.606733 2643 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 02:56:18.606984 kubelet[2643]: I0527 02:56:18.606745 2643 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 02:56:18.606984 kubelet[2643]: I0527 02:56:18.606915 2643 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 02:56:18.710858 kubelet[2643]: I0527 02:56:18.710819 2643 kubelet_node_status.go:72] "Attempting to register node" node="localhost" May 27 02:56:18.716715 kubelet[2643]: I0527 02:56:18.716666 2643 kubelet_node_status.go:111] "Node was previously registered" node="localhost" May 27 02:56:18.716815 kubelet[2643]: I0527 02:56:18.716763 2643 kubelet_node_status.go:75] "Successfully registered node" node="localhost" May 27 02:56:18.770530 kubelet[2643]: I0527 02:56:18.770482 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:18.770530 kubelet[2643]: I0527 02:56:18.770529 2643 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:18.770694 kubelet[2643]: I0527 02:56:18.770551 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ea5884ad3481d5218ff4c8f11f2934d5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"ea5884ad3481d5218ff4c8f11f2934d5\") " pod="kube-system/kube-scheduler-localhost" May 27 02:56:18.770694 kubelet[2643]: I0527 02:56:18.770568 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/88b330940328b35e1668526b6cb97047-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"88b330940328b35e1668526b6cb97047\") " pod="kube-system/kube-apiserver-localhost" May 27 02:56:18.770694 kubelet[2643]: I0527 02:56:18.770583 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/88b330940328b35e1668526b6cb97047-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"88b330940328b35e1668526b6cb97047\") " pod="kube-system/kube-apiserver-localhost" May 27 02:56:18.770694 kubelet[2643]: I0527 02:56:18.770597 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/88b330940328b35e1668526b6cb97047-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"88b330940328b35e1668526b6cb97047\") " pod="kube-system/kube-apiserver-localhost" May 27 02:56:18.770694 kubelet[2643]: I0527 02:56:18.770614 2643 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:18.770844 kubelet[2643]: I0527 02:56:18.770646 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:18.770844 kubelet[2643]: I0527 02:56:18.770759 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a3416600bab1918b24583836301c9096-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"a3416600bab1918b24583836301c9096\") " pod="kube-system/kube-controller-manager-localhost" May 27 02:56:18.970557 kubelet[2643]: E0527 02:56:18.970523 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:18.970557 kubelet[2643]: E0527 02:56:18.970539 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:18.970725 kubelet[2643]: E0527 02:56:18.970533 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:19.139844 sudo[2681]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 
02:56:19.140111 sudo[2681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 02:56:19.546714 kubelet[2643]: I0527 02:56:19.546686 2643 apiserver.go:52] "Watching apiserver" May 27 02:56:19.559720 kubelet[2643]: I0527 02:56:19.559658 2643 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 27 02:56:19.569935 sudo[2681]: pam_unix(sudo:session): session closed for user root May 27 02:56:19.590583 kubelet[2643]: E0527 02:56:19.590538 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:19.590866 kubelet[2643]: E0527 02:56:19.590841 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:19.599640 kubelet[2643]: E0527 02:56:19.599606 2643 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 02:56:19.599802 kubelet[2643]: E0527 02:56:19.599785 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:19.641319 kubelet[2643]: I0527 02:56:19.641140 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.641122678 podStartE2EDuration="1.641122678s" podCreationTimestamp="2025-05-27 02:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:56:19.628425738 +0000 UTC m=+1.137694432" watchObservedRunningTime="2025-05-27 02:56:19.641122678 +0000 UTC m=+1.150391332" May 27 02:56:19.650432 kubelet[2643]: I0527 02:56:19.650252 
2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.650235227 podStartE2EDuration="1.650235227s" podCreationTimestamp="2025-05-27 02:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:56:19.641267104 +0000 UTC m=+1.150535798" watchObservedRunningTime="2025-05-27 02:56:19.650235227 +0000 UTC m=+1.159503921" May 27 02:56:19.650832 kubelet[2643]: I0527 02:56:19.650771 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.650552531 podStartE2EDuration="1.650552531s" podCreationTimestamp="2025-05-27 02:56:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:56:19.65043976 +0000 UTC m=+1.159708454" watchObservedRunningTime="2025-05-27 02:56:19.650552531 +0000 UTC m=+1.159821225" May 27 02:56:20.592529 kubelet[2643]: E0527 02:56:20.592487 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:21.431501 sudo[1737]: pam_unix(sudo:session): session closed for user root May 27 02:56:21.432738 sshd[1736]: Connection closed by 10.0.0.1 port 38790 May 27 02:56:21.433252 sshd-session[1734]: pam_unix(sshd:session): session closed for user core May 27 02:56:21.436935 systemd[1]: sshd@6-10.0.0.92:22-10.0.0.1:38790.service: Deactivated successfully. May 27 02:56:21.439871 systemd[1]: session-7.scope: Deactivated successfully. May 27 02:56:21.440141 systemd[1]: session-7.scope: Consumed 6.134s CPU time, 264.5M memory peak. May 27 02:56:21.441612 systemd-logind[1508]: Session 7 logged out. Waiting for processes to exit. 
May 27 02:56:21.443207 systemd-logind[1508]: Removed session 7. May 27 02:56:21.593713 kubelet[2643]: E0527 02:56:21.593601 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:22.898027 kubelet[2643]: E0527 02:56:22.897933 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:25.372949 kubelet[2643]: I0527 02:56:25.372907 2643 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 02:56:25.373392 containerd[1524]: time="2025-05-27T02:56:25.373237963Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 27 02:56:25.373575 kubelet[2643]: I0527 02:56:25.373562 2643 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 02:56:26.309909 systemd[1]: Created slice kubepods-besteffort-pod9b957d2e_6736_4e5a_966e_824ed134e8ac.slice - libcontainer container kubepods-besteffort-pod9b957d2e_6736_4e5a_966e_824ed134e8ac.slice. 
May 27 02:56:26.319990 kubelet[2643]: I0527 02:56:26.319951 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b957d2e-6736-4e5a-966e-824ed134e8ac-xtables-lock\") pod \"kube-proxy-7g5gh\" (UID: \"9b957d2e-6736-4e5a-966e-824ed134e8ac\") " pod="kube-system/kube-proxy-7g5gh" May 27 02:56:26.319990 kubelet[2643]: I0527 02:56:26.319986 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b957d2e-6736-4e5a-966e-824ed134e8ac-lib-modules\") pod \"kube-proxy-7g5gh\" (UID: \"9b957d2e-6736-4e5a-966e-824ed134e8ac\") " pod="kube-system/kube-proxy-7g5gh" May 27 02:56:26.320126 kubelet[2643]: I0527 02:56:26.320028 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lkjkc\" (UniqueName: \"kubernetes.io/projected/9b957d2e-6736-4e5a-966e-824ed134e8ac-kube-api-access-lkjkc\") pod \"kube-proxy-7g5gh\" (UID: \"9b957d2e-6736-4e5a-966e-824ed134e8ac\") " pod="kube-system/kube-proxy-7g5gh" May 27 02:56:26.320126 kubelet[2643]: I0527 02:56:26.320053 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9b957d2e-6736-4e5a-966e-824ed134e8ac-kube-proxy\") pod \"kube-proxy-7g5gh\" (UID: \"9b957d2e-6736-4e5a-966e-824ed134e8ac\") " pod="kube-system/kube-proxy-7g5gh" May 27 02:56:26.331884 systemd[1]: Created slice kubepods-burstable-pod200aa7fb_f846_4325_800a_789721f45ac0.slice - libcontainer container kubepods-burstable-pod200aa7fb_f846_4325_800a_789721f45ac0.slice. May 27 02:56:26.379695 systemd[1]: Created slice kubepods-besteffort-pod37a10d60_069a_4922_a689_bcf347b5c771.slice - libcontainer container kubepods-besteffort-pod37a10d60_069a_4922_a689_bcf347b5c771.slice. 
May 27 02:56:26.421324 kubelet[2643]: I0527 02:56:26.421255 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-xtables-lock\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421324 kubelet[2643]: I0527 02:56:26.421309 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/200aa7fb-f846-4325-800a-789721f45ac0-clustermesh-secrets\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421697 kubelet[2643]: I0527 02:56:26.421368 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-host-proc-sys-kernel\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421697 kubelet[2643]: I0527 02:56:26.421388 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37a10d60-069a-4922-a689-bcf347b5c771-cilium-config-path\") pod \"cilium-operator-5d85765b45-4rsh4\" (UID: \"37a10d60-069a-4922-a689-bcf347b5c771\") " pod="kube-system/cilium-operator-5d85765b45-4rsh4" May 27 02:56:26.421697 kubelet[2643]: I0527 02:56:26.421424 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cilium-cgroup\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421697 kubelet[2643]: I0527 02:56:26.421439 
2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cni-path\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421697 kubelet[2643]: I0527 02:56:26.421454 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/200aa7fb-f846-4325-800a-789721f45ac0-cilium-config-path\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421809 kubelet[2643]: I0527 02:56:26.421469 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-bpf-maps\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421809 kubelet[2643]: I0527 02:56:26.421483 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-etc-cni-netd\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421809 kubelet[2643]: I0527 02:56:26.421498 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/200aa7fb-f846-4325-800a-789721f45ac0-hubble-tls\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421809 kubelet[2643]: I0527 02:56:26.421514 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cilium-run\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421809 kubelet[2643]: I0527 02:56:26.421528 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lp62\" (UniqueName: \"kubernetes.io/projected/37a10d60-069a-4922-a689-bcf347b5c771-kube-api-access-2lp62\") pod \"cilium-operator-5d85765b45-4rsh4\" (UID: \"37a10d60-069a-4922-a689-bcf347b5c771\") " pod="kube-system/cilium-operator-5d85765b45-4rsh4" May 27 02:56:26.421940 kubelet[2643]: I0527 02:56:26.421542 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-host-proc-sys-net\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421940 kubelet[2643]: I0527 02:56:26.421556 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f9gnj\" (UniqueName: \"kubernetes.io/projected/200aa7fb-f846-4325-800a-789721f45ac0-kube-api-access-f9gnj\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421940 kubelet[2643]: I0527 02:56:26.421571 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-lib-modules\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.421940 kubelet[2643]: I0527 02:56:26.421592 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-hostproc\") pod \"cilium-snlxc\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " pod="kube-system/cilium-snlxc" May 27 02:56:26.623055 kubelet[2643]: E0527 02:56:26.623009 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:26.624164 containerd[1524]: time="2025-05-27T02:56:26.624128755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7g5gh,Uid:9b957d2e-6736-4e5a-966e-824ed134e8ac,Namespace:kube-system,Attempt:0,}" May 27 02:56:26.639790 containerd[1524]: time="2025-05-27T02:56:26.639245226Z" level=info msg="connecting to shim 53e06c71a2086dca702a620e6b2cbaebdab3ce3ee807590286c70d7eaaa1b3dd" address="unix:///run/containerd/s/6384658adb3e66e68fd37a657c2447f440577193c7d242bde9ad72e66b69fcbe" namespace=k8s.io protocol=ttrpc version=3 May 27 02:56:26.640642 kubelet[2643]: E0527 02:56:26.640611 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:26.641032 containerd[1524]: time="2025-05-27T02:56:26.641002067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-snlxc,Uid:200aa7fb-f846-4325-800a-789721f45ac0,Namespace:kube-system,Attempt:0,}" May 27 02:56:26.657344 containerd[1524]: time="2025-05-27T02:56:26.656905585Z" level=info msg="connecting to shim 9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2" address="unix:///run/containerd/s/d1c09a9713cf193cc2433dc3fe345c93cd040b9d55f40c53aea7fdd94343ae37" namespace=k8s.io protocol=ttrpc version=3 May 27 02:56:26.666843 systemd[1]: Started cri-containerd-53e06c71a2086dca702a620e6b2cbaebdab3ce3ee807590286c70d7eaaa1b3dd.scope - libcontainer container 53e06c71a2086dca702a620e6b2cbaebdab3ce3ee807590286c70d7eaaa1b3dd. 
May 27 02:56:26.674238 systemd[1]: Started cri-containerd-9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2.scope - libcontainer container 9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2. May 27 02:56:26.684322 kubelet[2643]: E0527 02:56:26.684299 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:26.684814 containerd[1524]: time="2025-05-27T02:56:26.684661744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4rsh4,Uid:37a10d60-069a-4922-a689-bcf347b5c771,Namespace:kube-system,Attempt:0,}" May 27 02:56:26.698366 containerd[1524]: time="2025-05-27T02:56:26.698313841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7g5gh,Uid:9b957d2e-6736-4e5a-966e-824ed134e8ac,Namespace:kube-system,Attempt:0,} returns sandbox id \"53e06c71a2086dca702a620e6b2cbaebdab3ce3ee807590286c70d7eaaa1b3dd\"" May 27 02:56:26.699300 kubelet[2643]: E0527 02:56:26.699275 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:26.704387 containerd[1524]: time="2025-05-27T02:56:26.704233880Z" level=info msg="CreateContainer within sandbox \"53e06c71a2086dca702a620e6b2cbaebdab3ce3ee807590286c70d7eaaa1b3dd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 27 02:56:26.709594 containerd[1524]: time="2025-05-27T02:56:26.709228341Z" level=info msg="connecting to shim c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1" address="unix:///run/containerd/s/440ccf452ba066205757ec6a63fe44724bc04981ff016fd7dff1a80c028db2b9" namespace=k8s.io protocol=ttrpc version=3 May 27 02:56:26.710194 containerd[1524]: time="2025-05-27T02:56:26.710163081Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-snlxc,Uid:200aa7fb-f846-4325-800a-789721f45ac0,Namespace:kube-system,Attempt:0,} returns sandbox id \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\"" May 27 02:56:26.710791 kubelet[2643]: E0527 02:56:26.710625 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:26.712185 containerd[1524]: time="2025-05-27T02:56:26.712131319Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" May 27 02:56:26.717544 containerd[1524]: time="2025-05-27T02:56:26.716036703Z" level=info msg="Container ac41c88c73f24f16f202b721a0a8518aebbd851f341e0a4866523b5606d91d30: CDI devices from CRI Config.CDIDevices: []" May 27 02:56:26.724977 containerd[1524]: time="2025-05-27T02:56:26.724934947Z" level=info msg="CreateContainer within sandbox \"53e06c71a2086dca702a620e6b2cbaebdab3ce3ee807590286c70d7eaaa1b3dd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ac41c88c73f24f16f202b721a0a8518aebbd851f341e0a4866523b5606d91d30\"" May 27 02:56:26.727733 containerd[1524]: time="2025-05-27T02:56:26.727657619Z" level=info msg="StartContainer for \"ac41c88c73f24f16f202b721a0a8518aebbd851f341e0a4866523b5606d91d30\"" May 27 02:56:26.729830 containerd[1524]: time="2025-05-27T02:56:26.729793478Z" level=info msg="connecting to shim ac41c88c73f24f16f202b721a0a8518aebbd851f341e0a4866523b5606d91d30" address="unix:///run/containerd/s/6384658adb3e66e68fd37a657c2447f440577193c7d242bde9ad72e66b69fcbe" protocol=ttrpc version=3 May 27 02:56:26.737889 systemd[1]: Started cri-containerd-c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1.scope - libcontainer container c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1. 
May 27 02:56:26.744829 systemd[1]: Started cri-containerd-ac41c88c73f24f16f202b721a0a8518aebbd851f341e0a4866523b5606d91d30.scope - libcontainer container ac41c88c73f24f16f202b721a0a8518aebbd851f341e0a4866523b5606d91d30. May 27 02:56:26.783057 containerd[1524]: time="2025-05-27T02:56:26.782930651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-4rsh4,Uid:37a10d60-069a-4922-a689-bcf347b5c771,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1\"" May 27 02:56:26.783660 kubelet[2643]: E0527 02:56:26.783625 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:26.786774 containerd[1524]: time="2025-05-27T02:56:26.786357660Z" level=info msg="StartContainer for \"ac41c88c73f24f16f202b721a0a8518aebbd851f341e0a4866523b5606d91d30\" returns successfully" May 27 02:56:26.810232 kubelet[2643]: E0527 02:56:26.810193 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:27.609320 kubelet[2643]: E0527 02:56:27.609284 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:27.609320 kubelet[2643]: E0527 02:56:27.609289 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:27.627345 kubelet[2643]: I0527 02:56:27.627290 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7g5gh" podStartSLOduration=1.627253374 podStartE2EDuration="1.627253374s" podCreationTimestamp="2025-05-27 02:56:26 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:56:27.626830825 +0000 UTC m=+9.136099559" watchObservedRunningTime="2025-05-27 02:56:27.627253374 +0000 UTC m=+9.136522068" May 27 02:56:30.183581 kubelet[2643]: E0527 02:56:30.183535 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:30.613436 kubelet[2643]: E0527 02:56:30.613251 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:32.905058 kubelet[2643]: E0527 02:56:32.905026 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:36.447754 update_engine[1515]: I20250527 02:56:36.447378 1515 update_attempter.cc:509] Updating boot flags... May 27 02:56:38.006740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1357269699.mount: Deactivated successfully. 
May 27 02:56:39.367953 containerd[1524]: time="2025-05-27T02:56:39.367895393Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:39.368896 containerd[1524]: time="2025-05-27T02:56:39.368485095Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 27 02:56:39.369207 containerd[1524]: time="2025-05-27T02:56:39.369176142Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:39.371094 containerd[1524]: time="2025-05-27T02:56:39.371061677Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 12.658890265s"
May 27 02:56:39.371207 containerd[1524]: time="2025-05-27T02:56:39.371191228Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 27 02:56:39.380065 containerd[1524]: time="2025-05-27T02:56:39.380032041Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 27 02:56:39.385003 containerd[1524]: time="2025-05-27T02:56:39.384966712Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 02:56:39.393664 containerd[1524]: time="2025-05-27T02:56:39.393623961Z" level=info msg="Container 20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c: CDI devices from CRI Config.CDIDevices: []"
May 27 02:56:39.396490 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3647254456.mount: Deactivated successfully.
May 27 02:56:39.399714 containerd[1524]: time="2025-05-27T02:56:39.399652136Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\""
May 27 02:56:39.400138 containerd[1524]: time="2025-05-27T02:56:39.400104565Z" level=info msg="StartContainer for \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\""
May 27 02:56:39.401024 containerd[1524]: time="2025-05-27T02:56:39.400992140Z" level=info msg="connecting to shim 20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c" address="unix:///run/containerd/s/d1c09a9713cf193cc2433dc3fe345c93cd040b9d55f40c53aea7fdd94343ae37" protocol=ttrpc version=3
May 27 02:56:39.445874 systemd[1]: Started cri-containerd-20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c.scope - libcontainer container 20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c.
May 27 02:56:39.510405 containerd[1524]: time="2025-05-27T02:56:39.507387335Z" level=info msg="StartContainer for \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\" returns successfully"
May 27 02:56:39.537669 systemd[1]: cri-containerd-20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c.scope: Deactivated successfully.
May 27 02:56:39.589246 containerd[1524]: time="2025-05-27T02:56:39.589193796Z" level=info msg="TaskExit event in podsandbox handler container_id:\"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\" id:\"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\" pid:3085 exited_at:{seconds:1748314599 nanos:564116225}"
May 27 02:56:39.589246 containerd[1524]: time="2025-05-27T02:56:39.589197277Z" level=info msg="received exit event container_id:\"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\" id:\"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\" pid:3085 exited_at:{seconds:1748314599 nanos:564116225}"
May 27 02:56:39.641037 kubelet[2643]: E0527 02:56:39.640877 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:40.392133 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c-rootfs.mount: Deactivated successfully.
May 27 02:56:40.577908 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount385705375.mount: Deactivated successfully.
May 27 02:56:40.648752 kubelet[2643]: E0527 02:56:40.648637 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:40.653513 containerd[1524]: time="2025-05-27T02:56:40.653473797Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 02:56:40.664332 containerd[1524]: time="2025-05-27T02:56:40.662964656Z" level=info msg="Container 5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01: CDI devices from CRI Config.CDIDevices: []"
May 27 02:56:40.668587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2795209394.mount: Deactivated successfully.
May 27 02:56:40.671268 containerd[1524]: time="2025-05-27T02:56:40.671226307Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\""
May 27 02:56:40.672378 containerd[1524]: time="2025-05-27T02:56:40.672345929Z" level=info msg="StartContainer for \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\""
May 27 02:56:40.674290 containerd[1524]: time="2025-05-27T02:56:40.674254295Z" level=info msg="connecting to shim 5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01" address="unix:///run/containerd/s/d1c09a9713cf193cc2433dc3fe345c93cd040b9d55f40c53aea7fdd94343ae37" protocol=ttrpc version=3
May 27 02:56:40.697865 systemd[1]: Started cri-containerd-5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01.scope - libcontainer container 5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01.
May 27 02:56:40.738808 containerd[1524]: time="2025-05-27T02:56:40.738712364Z" level=info msg="StartContainer for \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\" returns successfully"
May 27 02:56:40.752507 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 02:56:40.753137 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 02:56:40.753452 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 27 02:56:40.754890 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 02:56:40.756961 containerd[1524]: time="2025-05-27T02:56:40.756639635Z" level=info msg="received exit event container_id:\"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\" id:\"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\" pid:3137 exited_at:{seconds:1748314600 nanos:756473116}"
May 27 02:56:40.756750 systemd[1]: cri-containerd-5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01.scope: Deactivated successfully.
May 27 02:56:40.757598 containerd[1524]: time="2025-05-27T02:56:40.757576894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\" id:\"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\" pid:3137 exited_at:{seconds:1748314600 nanos:756473116}"
May 27 02:56:40.797739 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 02:56:40.978445 containerd[1524]: time="2025-05-27T02:56:40.978347586Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:40.979310 containerd[1524]: time="2025-05-27T02:56:40.979271682Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 27 02:56:40.981864 containerd[1524]: time="2025-05-27T02:56:40.981816597Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 02:56:40.983082 containerd[1524]: time="2025-05-27T02:56:40.982950862Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.602769305s"
May 27 02:56:40.983082 containerd[1524]: time="2025-05-27T02:56:40.983003435Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 27 02:56:40.984852 containerd[1524]: time="2025-05-27T02:56:40.984826461Z" level=info msg="CreateContainer within sandbox \"c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 27 02:56:40.991734 containerd[1524]: time="2025-05-27T02:56:40.991398597Z" level=info msg="Container c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231: CDI devices from CRI Config.CDIDevices: []"
May 27 02:56:40.995850 containerd[1524]: time="2025-05-27T02:56:40.995818071Z" level=info msg="CreateContainer within sandbox \"c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\""
May 27 02:56:40.996200 containerd[1524]: time="2025-05-27T02:56:40.996181596Z" level=info msg="StartContainer for \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\""
May 27 02:56:40.997114 containerd[1524]: time="2025-05-27T02:56:40.997079725Z" level=info msg="connecting to shim c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231" address="unix:///run/containerd/s/440ccf452ba066205757ec6a63fe44724bc04981ff016fd7dff1a80c028db2b9" protocol=ttrpc version=3
May 27 02:56:41.022886 systemd[1]: Started cri-containerd-c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231.scope - libcontainer container c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231.
May 27 02:56:41.051860 containerd[1524]: time="2025-05-27T02:56:41.051808066Z" level=info msg="StartContainer for \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" returns successfully"
May 27 02:56:41.653817 kubelet[2643]: E0527 02:56:41.653086 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:41.657096 kubelet[2643]: E0527 02:56:41.657062 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:41.660054 containerd[1524]: time="2025-05-27T02:56:41.659909107Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 02:56:41.667876 kubelet[2643]: I0527 02:56:41.667820 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-4rsh4" podStartSLOduration=1.468291846 podStartE2EDuration="15.667802975s" podCreationTimestamp="2025-05-27 02:56:26 +0000 UTC" firstStartedPulling="2025-05-27 02:56:26.784189029 +0000 UTC m=+8.293457723" lastFinishedPulling="2025-05-27 02:56:40.983700158 +0000 UTC m=+22.492968852" observedRunningTime="2025-05-27 02:56:41.66663511 +0000 UTC m=+23.175903764" watchObservedRunningTime="2025-05-27 02:56:41.667802975 +0000 UTC m=+23.177071669"
May 27 02:56:41.701688 containerd[1524]: time="2025-05-27T02:56:41.701277516Z" level=info msg="Container eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f: CDI devices from CRI Config.CDIDevices: []"
May 27 02:56:41.703885 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount77635491.mount: Deactivated successfully.
May 27 02:56:41.713579 containerd[1524]: time="2025-05-27T02:56:41.713535692Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\""
May 27 02:56:41.714315 containerd[1524]: time="2025-05-27T02:56:41.714283342Z" level=info msg="StartContainer for \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\""
May 27 02:56:41.715976 containerd[1524]: time="2025-05-27T02:56:41.715938837Z" level=info msg="connecting to shim eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f" address="unix:///run/containerd/s/d1c09a9713cf193cc2433dc3fe345c93cd040b9d55f40c53aea7fdd94343ae37" protocol=ttrpc version=3
May 27 02:56:41.745463 systemd[1]: Started cri-containerd-eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f.scope - libcontainer container eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f.
May 27 02:56:41.805520 containerd[1524]: time="2025-05-27T02:56:41.805417541Z" level=info msg="StartContainer for \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\" returns successfully"
May 27 02:56:41.830866 systemd[1]: cri-containerd-eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f.scope: Deactivated successfully.
May 27 02:56:41.845959 containerd[1524]: time="2025-05-27T02:56:41.845897669Z" level=info msg="received exit event container_id:\"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\" id:\"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\" pid:3226 exited_at:{seconds:1748314601 nanos:845655134}"
May 27 02:56:41.846328 containerd[1524]: time="2025-05-27T02:56:41.846292079Z" level=info msg="TaskExit event in podsandbox handler container_id:\"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\" id:\"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\" pid:3226 exited_at:{seconds:1748314601 nanos:845655134}"
May 27 02:56:41.863232 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f-rootfs.mount: Deactivated successfully.
May 27 02:56:42.661596 kubelet[2643]: E0527 02:56:42.661299 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:42.661596 kubelet[2643]: E0527 02:56:42.661412 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:42.663795 containerd[1524]: time="2025-05-27T02:56:42.663753003Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 02:56:42.676535 containerd[1524]: time="2025-05-27T02:56:42.675729550Z" level=info msg="Container 8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641: CDI devices from CRI Config.CDIDevices: []"
May 27 02:56:42.680035 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3789219054.mount: Deactivated successfully.
May 27 02:56:42.684426 containerd[1524]: time="2025-05-27T02:56:42.684363084Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\""
May 27 02:56:42.684938 containerd[1524]: time="2025-05-27T02:56:42.684907084Z" level=info msg="StartContainer for \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\""
May 27 02:56:42.686372 containerd[1524]: time="2025-05-27T02:56:42.686268463Z" level=info msg="connecting to shim 8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641" address="unix:///run/containerd/s/d1c09a9713cf193cc2433dc3fe345c93cd040b9d55f40c53aea7fdd94343ae37" protocol=ttrpc version=3
May 27 02:56:42.704810 systemd[1]: Started cri-containerd-8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641.scope - libcontainer container 8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641.
May 27 02:56:42.724420 systemd[1]: cri-containerd-8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641.scope: Deactivated successfully.
May 27 02:56:42.725509 containerd[1524]: time="2025-05-27T02:56:42.725388525Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\" id:\"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\" pid:3266 exited_at:{seconds:1748314602 nanos:725197924}"
May 27 02:56:42.726495 containerd[1524]: time="2025-05-27T02:56:42.726212546Z" level=info msg="received exit event container_id:\"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\" id:\"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\" pid:3266 exited_at:{seconds:1748314602 nanos:725197924}"
May 27 02:56:42.734689 containerd[1524]: time="2025-05-27T02:56:42.734643996Z" level=info msg="StartContainer for \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\" returns successfully"
May 27 02:56:42.754089 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641-rootfs.mount: Deactivated successfully.
May 27 02:56:43.666622 kubelet[2643]: E0527 02:56:43.666595 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:43.670789 containerd[1524]: time="2025-05-27T02:56:43.670739668Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 02:56:43.689292 containerd[1524]: time="2025-05-27T02:56:43.689248402Z" level=info msg="Container 30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251: CDI devices from CRI Config.CDIDevices: []"
May 27 02:56:43.695473 containerd[1524]: time="2025-05-27T02:56:43.695380066Z" level=info msg="CreateContainer within sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\""
May 27 02:56:43.695952 containerd[1524]: time="2025-05-27T02:56:43.695928502Z" level=info msg="StartContainer for \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\""
May 27 02:56:43.697005 containerd[1524]: time="2025-05-27T02:56:43.696864941Z" level=info msg="connecting to shim 30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251" address="unix:///run/containerd/s/d1c09a9713cf193cc2433dc3fe345c93cd040b9d55f40c53aea7fdd94343ae37" protocol=ttrpc version=3
May 27 02:56:43.718832 systemd[1]: Started cri-containerd-30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251.scope - libcontainer container 30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251.
May 27 02:56:43.744855 containerd[1524]: time="2025-05-27T02:56:43.744822054Z" level=info msg="StartContainer for \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" returns successfully"
May 27 02:56:43.854435 containerd[1524]: time="2025-05-27T02:56:43.854394343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" id:\"81bbc31c1b6ca136f1c3aa8c82dbf9c73fc38f844856ff73a416a53f7924187c\" pid:3335 exited_at:{seconds:1748314603 nanos:854051351}"
May 27 02:56:43.933669 kubelet[2643]: I0527 02:56:43.932706 2643 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
May 27 02:56:44.000863 systemd[1]: Created slice kubepods-burstable-podb2e59669_4ad3_4e42_9204_1b4b57b214f5.slice - libcontainer container kubepods-burstable-podb2e59669_4ad3_4e42_9204_1b4b57b214f5.slice.
May 27 02:56:44.013796 systemd[1]: Created slice kubepods-burstable-pod05037d27_8c5d_4a52_9c2e_2ffd28d92b52.slice - libcontainer container kubepods-burstable-pod05037d27_8c5d_4a52_9c2e_2ffd28d92b52.slice.
May 27 02:56:44.044022 kubelet[2643]: I0527 02:56:44.043977 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9tc2\" (UniqueName: \"kubernetes.io/projected/b2e59669-4ad3-4e42-9204-1b4b57b214f5-kube-api-access-q9tc2\") pod \"coredns-7c65d6cfc9-f7nns\" (UID: \"b2e59669-4ad3-4e42-9204-1b4b57b214f5\") " pod="kube-system/coredns-7c65d6cfc9-f7nns"
May 27 02:56:44.044022 kubelet[2643]: I0527 02:56:44.044022 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/05037d27-8c5d-4a52-9c2e-2ffd28d92b52-config-volume\") pod \"coredns-7c65d6cfc9-cnxjx\" (UID: \"05037d27-8c5d-4a52-9c2e-2ffd28d92b52\") " pod="kube-system/coredns-7c65d6cfc9-cnxjx"
May 27 02:56:44.044180 kubelet[2643]: I0527 02:56:44.044046 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hdhbz\" (UniqueName: \"kubernetes.io/projected/05037d27-8c5d-4a52-9c2e-2ffd28d92b52-kube-api-access-hdhbz\") pod \"coredns-7c65d6cfc9-cnxjx\" (UID: \"05037d27-8c5d-4a52-9c2e-2ffd28d92b52\") " pod="kube-system/coredns-7c65d6cfc9-cnxjx"
May 27 02:56:44.044180 kubelet[2643]: I0527 02:56:44.044063 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b2e59669-4ad3-4e42-9204-1b4b57b214f5-config-volume\") pod \"coredns-7c65d6cfc9-f7nns\" (UID: \"b2e59669-4ad3-4e42-9204-1b4b57b214f5\") " pod="kube-system/coredns-7c65d6cfc9-f7nns"
May 27 02:56:44.316653 kubelet[2643]: E0527 02:56:44.316537 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:44.316653 kubelet[2643]: E0527 02:56:44.316753 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:44.318421 containerd[1524]: time="2025-05-27T02:56:44.318097771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f7nns,Uid:b2e59669-4ad3-4e42-9204-1b4b57b214f5,Namespace:kube-system,Attempt:0,}"
May 27 02:56:44.318421 containerd[1524]: time="2025-05-27T02:56:44.318100371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cnxjx,Uid:05037d27-8c5d-4a52-9c2e-2ffd28d92b52,Namespace:kube-system,Attempt:0,}"
May 27 02:56:44.675190 kubelet[2643]: E0527 02:56:44.675142 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:44.691702 kubelet[2643]: I0527 02:56:44.691451 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-snlxc" podStartSLOduration=6.028814859 podStartE2EDuration="18.691433442s" podCreationTimestamp="2025-05-27 02:56:26 +0000 UTC" firstStartedPulling="2025-05-27 02:56:26.711033919 +0000 UTC m=+8.220302613" lastFinishedPulling="2025-05-27 02:56:39.373652182 +0000 UTC m=+20.882921196" observedRunningTime="2025-05-27 02:56:44.691230081 +0000 UTC m=+26.200498775" watchObservedRunningTime="2025-05-27 02:56:44.691433442 +0000 UTC m=+26.200702136"
May 27 02:56:45.676904 kubelet[2643]: E0527 02:56:45.676876 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:46.009499 systemd-networkd[1440]: cilium_host: Link UP
May 27 02:56:46.010165 systemd-networkd[1440]: cilium_net: Link UP
May 27 02:56:46.010773 systemd-networkd[1440]: cilium_net: Gained carrier
May 27 02:56:46.010912 systemd-networkd[1440]: cilium_host: Gained carrier
May 27 02:56:46.102008 systemd-networkd[1440]: cilium_vxlan: Link UP
May 27 02:56:46.102017 systemd-networkd[1440]: cilium_vxlan: Gained carrier
May 27 02:56:46.383695 kernel: NET: Registered PF_ALG protocol family
May 27 02:56:46.678621 kubelet[2643]: E0527 02:56:46.678509 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:46.685840 systemd-networkd[1440]: cilium_net: Gained IPv6LL
May 27 02:56:46.813931 systemd-networkd[1440]: cilium_host: Gained IPv6LL
May 27 02:56:46.942356 systemd-networkd[1440]: lxc_health: Link UP
May 27 02:56:46.942590 systemd-networkd[1440]: lxc_health: Gained carrier
May 27 02:56:47.324925 systemd-networkd[1440]: cilium_vxlan: Gained IPv6LL
May 27 02:56:47.437471 systemd-networkd[1440]: lxc8289bea696d0: Link UP
May 27 02:56:47.446694 kernel: eth0: renamed from tmp5b3cb
May 27 02:56:47.447256 systemd-networkd[1440]: lxc19eaf164fca1: Link UP
May 27 02:56:47.456752 kernel: eth0: renamed from tmp5ce16
May 27 02:56:47.459122 systemd-networkd[1440]: lxc8289bea696d0: Gained carrier
May 27 02:56:47.459809 systemd-networkd[1440]: lxc19eaf164fca1: Gained carrier
May 27 02:56:47.587432 systemd[1]: Started sshd@7-10.0.0.92:22-10.0.0.1:55022.service - OpenSSH per-connection server daemon (10.0.0.1:55022).
May 27 02:56:47.643432 sshd[3807]: Accepted publickey for core from 10.0.0.1 port 55022 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8
May 27 02:56:47.644491 sshd-session[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:56:47.650136 systemd-logind[1508]: New session 8 of user core.
May 27 02:56:47.659813 systemd[1]: Started session-8.scope - Session 8 of User core.
May 27 02:56:47.796654 sshd[3809]: Connection closed by 10.0.0.1 port 55022
May 27 02:56:47.797338 sshd-session[3807]: pam_unix(sshd:session): session closed for user core
May 27 02:56:47.801907 systemd-logind[1508]: Session 8 logged out. Waiting for processes to exit.
May 27 02:56:47.802475 systemd[1]: sshd@7-10.0.0.92:22-10.0.0.1:55022.service: Deactivated successfully.
May 27 02:56:47.806439 systemd[1]: session-8.scope: Deactivated successfully.
May 27 02:56:47.808331 systemd-logind[1508]: Removed session 8.
May 27 02:56:48.157938 systemd-networkd[1440]: lxc_health: Gained IPv6LL
May 27 02:56:48.648558 kubelet[2643]: E0527 02:56:48.648519 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:48.683151 kubelet[2643]: E0527 02:56:48.683035 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:49.118061 systemd-networkd[1440]: lxc19eaf164fca1: Gained IPv6LL
May 27 02:56:49.436904 systemd-networkd[1440]: lxc8289bea696d0: Gained IPv6LL
May 27 02:56:50.997625 containerd[1524]: time="2025-05-27T02:56:50.997549031Z" level=info msg="connecting to shim 5b3cbfc6000c4133bb44f4ea0aff3858ae5585b9c4b61904f44568987ef903ef" address="unix:///run/containerd/s/07f30b02440807f0e677385ba72613d0347aad7ad88e2832ca63cd889e010394" namespace=k8s.io protocol=ttrpc version=3
May 27 02:56:50.997963 containerd[1524]: time="2025-05-27T02:56:50.997559073Z" level=info msg="connecting to shim 5ce1629dd3173ccd709b0e714f2bc9b49feb481e9a12d6cbd6cf20e073e0c266" address="unix:///run/containerd/s/770cd8fcf7b462eb0d9bb63f6470d43db8de62ace2c0a6ceaaf203bdf7fde81d" namespace=k8s.io protocol=ttrpc version=3
May 27 02:56:51.027830 systemd[1]: Started cri-containerd-5ce1629dd3173ccd709b0e714f2bc9b49feb481e9a12d6cbd6cf20e073e0c266.scope - libcontainer container 5ce1629dd3173ccd709b0e714f2bc9b49feb481e9a12d6cbd6cf20e073e0c266.
May 27 02:56:51.030855 systemd[1]: Started cri-containerd-5b3cbfc6000c4133bb44f4ea0aff3858ae5585b9c4b61904f44568987ef903ef.scope - libcontainer container 5b3cbfc6000c4133bb44f4ea0aff3858ae5585b9c4b61904f44568987ef903ef.
May 27 02:56:51.039542 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 27 02:56:51.042666 systemd-resolved[1360]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
May 27 02:56:51.061868 containerd[1524]: time="2025-05-27T02:56:51.061829366Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f7nns,Uid:b2e59669-4ad3-4e42-9204-1b4b57b214f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5ce1629dd3173ccd709b0e714f2bc9b49feb481e9a12d6cbd6cf20e073e0c266\""
May 27 02:56:51.062983 kubelet[2643]: E0527 02:56:51.062626 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:51.066707 containerd[1524]: time="2025-05-27T02:56:51.066662723Z" level=info msg="CreateContainer within sandbox \"5ce1629dd3173ccd709b0e714f2bc9b49feb481e9a12d6cbd6cf20e073e0c266\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 02:56:51.067170 containerd[1524]: time="2025-05-27T02:56:51.067133761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-cnxjx,Uid:05037d27-8c5d-4a52-9c2e-2ffd28d92b52,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b3cbfc6000c4133bb44f4ea0aff3858ae5585b9c4b61904f44568987ef903ef\""
May 27 02:56:51.068498 kubelet[2643]: E0527 02:56:51.068471 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:51.070129 containerd[1524]: time="2025-05-27T02:56:51.070076286Z" level=info msg="CreateContainer within sandbox \"5b3cbfc6000c4133bb44f4ea0aff3858ae5585b9c4b61904f44568987ef903ef\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
May 27 02:56:51.075277 containerd[1524]: time="2025-05-27T02:56:51.075242818Z" level=info msg="Container 3a871fa641c88f72d14239f747995e9315f1cde1e90531255a08a1e92ccedb62: CDI devices from CRI Config.CDIDevices: []"
May 27 02:56:51.083417 containerd[1524]: time="2025-05-27T02:56:51.082772899Z" level=info msg="Container fd94214a116b33eac36dc92c56908bc4828f54bb1efd91a4ce78d55e86515e46: CDI devices from CRI Config.CDIDevices: []"
May 27 02:56:51.087387 containerd[1524]: time="2025-05-27T02:56:51.087345373Z" level=info msg="CreateContainer within sandbox \"5ce1629dd3173ccd709b0e714f2bc9b49feb481e9a12d6cbd6cf20e073e0c266\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3a871fa641c88f72d14239f747995e9315f1cde1e90531255a08a1e92ccedb62\""
May 27 02:56:51.087847 containerd[1524]: time="2025-05-27T02:56:51.087817051Z" level=info msg="StartContainer for \"3a871fa641c88f72d14239f747995e9315f1cde1e90531255a08a1e92ccedb62\""
May 27 02:56:51.088827 containerd[1524]: time="2025-05-27T02:56:51.088794252Z" level=info msg="connecting to shim 3a871fa641c88f72d14239f747995e9315f1cde1e90531255a08a1e92ccedb62" address="unix:///run/containerd/s/770cd8fcf7b462eb0d9bb63f6470d43db8de62ace2c0a6ceaaf203bdf7fde81d" protocol=ttrpc version=3
May 27 02:56:51.090831 containerd[1524]: time="2025-05-27T02:56:51.090801503Z" level=info msg="CreateContainer within sandbox \"5b3cbfc6000c4133bb44f4ea0aff3858ae5585b9c4b61904f44568987ef903ef\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fd94214a116b33eac36dc92c56908bc4828f54bb1efd91a4ce78d55e86515e46\""
May 27 02:56:51.091201 containerd[1524]: time="2025-05-27T02:56:51.091166203Z" level=info msg="StartContainer for \"fd94214a116b33eac36dc92c56908bc4828f54bb1efd91a4ce78d55e86515e46\""
May 27 02:56:51.092998 containerd[1524]: time="2025-05-27T02:56:51.092969821Z" level=info msg="connecting to shim fd94214a116b33eac36dc92c56908bc4828f54bb1efd91a4ce78d55e86515e46" address="unix:///run/containerd/s/07f30b02440807f0e677385ba72613d0347aad7ad88e2832ca63cd889e010394" protocol=ttrpc version=3
May 27 02:56:51.116830 systemd[1]: Started cri-containerd-fd94214a116b33eac36dc92c56908bc4828f54bb1efd91a4ce78d55e86515e46.scope - libcontainer container fd94214a116b33eac36dc92c56908bc4828f54bb1efd91a4ce78d55e86515e46.
May 27 02:56:51.121385 systemd[1]: Started cri-containerd-3a871fa641c88f72d14239f747995e9315f1cde1e90531255a08a1e92ccedb62.scope - libcontainer container 3a871fa641c88f72d14239f747995e9315f1cde1e90531255a08a1e92ccedb62.
May 27 02:56:51.153091 containerd[1524]: time="2025-05-27T02:56:51.152940109Z" level=info msg="StartContainer for \"fd94214a116b33eac36dc92c56908bc4828f54bb1efd91a4ce78d55e86515e46\" returns successfully"
May 27 02:56:51.154887 containerd[1524]: time="2025-05-27T02:56:51.154865946Z" level=info msg="StartContainer for \"3a871fa641c88f72d14239f747995e9315f1cde1e90531255a08a1e92ccedb62\" returns successfully"
May 27 02:56:51.689023 kubelet[2643]: E0527 02:56:51.688937 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:51.694367 kubelet[2643]: E0527 02:56:51.694335 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:51.702888 kubelet[2643]: I0527 02:56:51.702541 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f7nns" podStartSLOduration=25.702523124 podStartE2EDuration="25.702523124s" podCreationTimestamp="2025-05-27 02:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:56:51.702514602 +0000 UTC m=+33.211783296" watchObservedRunningTime="2025-05-27 02:56:51.702523124 +0000 UTC m=+33.211791818"
May 27 02:56:51.741136 kubelet[2643]: I0527 02:56:51.741058 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-cnxjx" podStartSLOduration=25.741039794 podStartE2EDuration="25.741039794s" podCreationTimestamp="2025-05-27 02:56:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:56:51.740781832 +0000 UTC m=+33.250050526" watchObservedRunningTime="2025-05-27 02:56:51.741039794 +0000 UTC m=+33.250308488"
May 27 02:56:51.982991 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3862707968.mount: Deactivated successfully.
May 27 02:56:52.701067 kubelet[2643]: E0527 02:56:52.701030 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:52.701417 kubelet[2643]: E0527 02:56:52.701104 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:56:52.810961 systemd[1]: Started sshd@8-10.0.0.92:22-10.0.0.1:34198.service - OpenSSH per-connection server daemon (10.0.0.1:34198).
May 27 02:56:52.870313 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 34198 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8
May 27 02:56:52.871687 sshd-session[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 02:56:52.875731 systemd-logind[1508]: New session 9 of user core.
May 27 02:56:52.884838 systemd[1]: Started session-9.scope - Session 9 of User core.
May 27 02:56:52.995095 sshd[4011]: Connection closed by 10.0.0.1 port 34198 May 27 02:56:52.995939 sshd-session[4009]: pam_unix(sshd:session): session closed for user core May 27 02:56:53.000451 systemd[1]: sshd@8-10.0.0.92:22-10.0.0.1:34198.service: Deactivated successfully. May 27 02:56:53.002443 systemd[1]: session-9.scope: Deactivated successfully. May 27 02:56:53.003216 systemd-logind[1508]: Session 9 logged out. Waiting for processes to exit. May 27 02:56:53.004459 systemd-logind[1508]: Removed session 9. May 27 02:56:53.702439 kubelet[2643]: E0527 02:56:53.702386 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:53.703463 kubelet[2643]: E0527 02:56:53.702873 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 02:56:58.011998 systemd[1]: Started sshd@9-10.0.0.92:22-10.0.0.1:34210.service - OpenSSH per-connection server daemon (10.0.0.1:34210). May 27 02:56:58.058380 sshd[4027]: Accepted publickey for core from 10.0.0.1 port 34210 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:56:58.059449 sshd-session[4027]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:56:58.063180 systemd-logind[1508]: New session 10 of user core. May 27 02:56:58.073798 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 02:56:58.184923 sshd[4029]: Connection closed by 10.0.0.1 port 34210 May 27 02:56:58.185484 sshd-session[4027]: pam_unix(sshd:session): session closed for user core May 27 02:56:58.188946 systemd[1]: sshd@9-10.0.0.92:22-10.0.0.1:34210.service: Deactivated successfully. May 27 02:56:58.190627 systemd[1]: session-10.scope: Deactivated successfully. May 27 02:56:58.192940 systemd-logind[1508]: Session 10 logged out. 
Waiting for processes to exit. May 27 02:56:58.194444 systemd-logind[1508]: Removed session 10. May 27 02:57:03.201078 systemd[1]: Started sshd@10-10.0.0.92:22-10.0.0.1:40440.service - OpenSSH per-connection server daemon (10.0.0.1:40440). May 27 02:57:03.239809 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 40440 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:03.241799 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:03.247394 systemd-logind[1508]: New session 11 of user core. May 27 02:57:03.254836 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 02:57:03.368368 sshd[4047]: Connection closed by 10.0.0.1 port 40440 May 27 02:57:03.369518 sshd-session[4045]: pam_unix(sshd:session): session closed for user core May 27 02:57:03.378721 systemd[1]: sshd@10-10.0.0.92:22-10.0.0.1:40440.service: Deactivated successfully. May 27 02:57:03.380946 systemd[1]: session-11.scope: Deactivated successfully. May 27 02:57:03.381606 systemd-logind[1508]: Session 11 logged out. Waiting for processes to exit. May 27 02:57:03.384630 systemd[1]: Started sshd@11-10.0.0.92:22-10.0.0.1:40444.service - OpenSSH per-connection server daemon (10.0.0.1:40444). May 27 02:57:03.385399 systemd-logind[1508]: Removed session 11. May 27 02:57:03.435861 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 40444 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:03.437076 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:03.441771 systemd-logind[1508]: New session 12 of user core. May 27 02:57:03.452836 systemd[1]: Started session-12.scope - Session 12 of User core. 
May 27 02:57:03.595346 sshd[4064]: Connection closed by 10.0.0.1 port 40444 May 27 02:57:03.595863 sshd-session[4062]: pam_unix(sshd:session): session closed for user core May 27 02:57:03.610502 systemd[1]: sshd@11-10.0.0.92:22-10.0.0.1:40444.service: Deactivated successfully. May 27 02:57:03.612455 systemd[1]: session-12.scope: Deactivated successfully. May 27 02:57:03.613517 systemd-logind[1508]: Session 12 logged out. Waiting for processes to exit. May 27 02:57:03.616938 systemd[1]: Started sshd@12-10.0.0.92:22-10.0.0.1:40456.service - OpenSSH per-connection server daemon (10.0.0.1:40456). May 27 02:57:03.619807 systemd-logind[1508]: Removed session 12. May 27 02:57:03.665151 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 40456 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:03.666247 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:03.670643 systemd-logind[1508]: New session 13 of user core. May 27 02:57:03.676804 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 02:57:03.785766 sshd[4078]: Connection closed by 10.0.0.1 port 40456 May 27 02:57:03.785533 sshd-session[4076]: pam_unix(sshd:session): session closed for user core May 27 02:57:03.789281 systemd-logind[1508]: Session 13 logged out. Waiting for processes to exit. May 27 02:57:03.789471 systemd[1]: sshd@12-10.0.0.92:22-10.0.0.1:40456.service: Deactivated successfully. May 27 02:57:03.791747 systemd[1]: session-13.scope: Deactivated successfully. May 27 02:57:03.793380 systemd-logind[1508]: Removed session 13. May 27 02:57:08.801533 systemd[1]: Started sshd@13-10.0.0.92:22-10.0.0.1:40462.service - OpenSSH per-connection server daemon (10.0.0.1:40462). 
May 27 02:57:08.876434 sshd[4091]: Accepted publickey for core from 10.0.0.1 port 40462 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:08.878374 sshd-session[4091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:08.882712 systemd-logind[1508]: New session 14 of user core. May 27 02:57:08.896890 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 02:57:09.034308 sshd[4093]: Connection closed by 10.0.0.1 port 40462 May 27 02:57:09.032968 sshd-session[4091]: pam_unix(sshd:session): session closed for user core May 27 02:57:09.039087 systemd[1]: sshd@13-10.0.0.92:22-10.0.0.1:40462.service: Deactivated successfully. May 27 02:57:09.042225 systemd[1]: session-14.scope: Deactivated successfully. May 27 02:57:09.043466 systemd-logind[1508]: Session 14 logged out. Waiting for processes to exit. May 27 02:57:09.045136 systemd-logind[1508]: Removed session 14. May 27 02:57:14.044799 systemd[1]: Started sshd@14-10.0.0.92:22-10.0.0.1:37956.service - OpenSSH per-connection server daemon (10.0.0.1:37956). May 27 02:57:14.108218 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 37956 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:14.109551 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:14.113349 systemd-logind[1508]: New session 15 of user core. May 27 02:57:14.122836 systemd[1]: Started session-15.scope - Session 15 of User core. May 27 02:57:14.242698 sshd[4109]: Connection closed by 10.0.0.1 port 37956 May 27 02:57:14.243943 sshd-session[4107]: pam_unix(sshd:session): session closed for user core May 27 02:57:14.251032 systemd[1]: sshd@14-10.0.0.92:22-10.0.0.1:37956.service: Deactivated successfully. May 27 02:57:14.252848 systemd[1]: session-15.scope: Deactivated successfully. May 27 02:57:14.255344 systemd-logind[1508]: Session 15 logged out. Waiting for processes to exit. 
May 27 02:57:14.255957 systemd[1]: Started sshd@15-10.0.0.92:22-10.0.0.1:37972.service - OpenSSH per-connection server daemon (10.0.0.1:37972). May 27 02:57:14.257493 systemd-logind[1508]: Removed session 15. May 27 02:57:14.310814 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 37972 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:14.311203 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:14.316142 systemd-logind[1508]: New session 16 of user core. May 27 02:57:14.326848 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 02:57:14.555088 sshd[4124]: Connection closed by 10.0.0.1 port 37972 May 27 02:57:14.556609 sshd-session[4122]: pam_unix(sshd:session): session closed for user core May 27 02:57:14.562792 systemd[1]: sshd@15-10.0.0.92:22-10.0.0.1:37972.service: Deactivated successfully. May 27 02:57:14.565147 systemd[1]: session-16.scope: Deactivated successfully. May 27 02:57:14.566512 systemd-logind[1508]: Session 16 logged out. Waiting for processes to exit. May 27 02:57:14.569092 systemd[1]: Started sshd@16-10.0.0.92:22-10.0.0.1:37974.service - OpenSSH per-connection server daemon (10.0.0.1:37974). May 27 02:57:14.570173 systemd-logind[1508]: Removed session 16. May 27 02:57:14.633159 sshd[4136]: Accepted publickey for core from 10.0.0.1 port 37974 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:14.634519 sshd-session[4136]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:14.639714 systemd-logind[1508]: New session 17 of user core. May 27 02:57:14.645835 systemd[1]: Started session-17.scope - Session 17 of User core. 
May 27 02:57:15.954550 sshd[4138]: Connection closed by 10.0.0.1 port 37974 May 27 02:57:15.954961 sshd-session[4136]: pam_unix(sshd:session): session closed for user core May 27 02:57:15.963989 systemd[1]: sshd@16-10.0.0.92:22-10.0.0.1:37974.service: Deactivated successfully. May 27 02:57:15.968617 systemd[1]: session-17.scope: Deactivated successfully. May 27 02:57:15.971922 systemd-logind[1508]: Session 17 logged out. Waiting for processes to exit. May 27 02:57:15.974644 systemd[1]: Started sshd@17-10.0.0.92:22-10.0.0.1:37978.service - OpenSSH per-connection server daemon (10.0.0.1:37978). May 27 02:57:15.976621 systemd-logind[1508]: Removed session 17. May 27 02:57:16.021923 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 37978 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:16.023280 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:16.027778 systemd-logind[1508]: New session 18 of user core. May 27 02:57:16.037842 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 02:57:16.248968 sshd[4161]: Connection closed by 10.0.0.1 port 37978 May 27 02:57:16.249758 sshd-session[4159]: pam_unix(sshd:session): session closed for user core May 27 02:57:16.258491 systemd[1]: sshd@17-10.0.0.92:22-10.0.0.1:37978.service: Deactivated successfully. May 27 02:57:16.260330 systemd[1]: session-18.scope: Deactivated successfully. May 27 02:57:16.261079 systemd-logind[1508]: Session 18 logged out. Waiting for processes to exit. May 27 02:57:16.263744 systemd[1]: Started sshd@18-10.0.0.92:22-10.0.0.1:37982.service - OpenSSH per-connection server daemon (10.0.0.1:37982). May 27 02:57:16.265475 systemd-logind[1508]: Removed session 18. 
May 27 02:57:16.314057 sshd[4173]: Accepted publickey for core from 10.0.0.1 port 37982 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:16.315292 sshd-session[4173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:16.320619 systemd-logind[1508]: New session 19 of user core. May 27 02:57:16.330874 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 02:57:16.442979 sshd[4175]: Connection closed by 10.0.0.1 port 37982 May 27 02:57:16.443282 sshd-session[4173]: pam_unix(sshd:session): session closed for user core May 27 02:57:16.446024 systemd[1]: sshd@18-10.0.0.92:22-10.0.0.1:37982.service: Deactivated successfully. May 27 02:57:16.448254 systemd[1]: session-19.scope: Deactivated successfully. May 27 02:57:16.450186 systemd-logind[1508]: Session 19 logged out. Waiting for processes to exit. May 27 02:57:16.451041 systemd-logind[1508]: Removed session 19. May 27 02:57:21.461328 systemd[1]: Started sshd@19-10.0.0.92:22-10.0.0.1:37988.service - OpenSSH per-connection server daemon (10.0.0.1:37988). May 27 02:57:21.514508 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 37988 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:21.516581 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:21.525866 systemd-logind[1508]: New session 20 of user core. May 27 02:57:21.535882 systemd[1]: Started session-20.scope - Session 20 of User core. May 27 02:57:21.667678 sshd[4195]: Connection closed by 10.0.0.1 port 37988 May 27 02:57:21.668257 sshd-session[4193]: pam_unix(sshd:session): session closed for user core May 27 02:57:21.671886 systemd[1]: sshd@19-10.0.0.92:22-10.0.0.1:37988.service: Deactivated successfully. May 27 02:57:21.674245 systemd[1]: session-20.scope: Deactivated successfully. May 27 02:57:21.674995 systemd-logind[1508]: Session 20 logged out. Waiting for processes to exit. 
May 27 02:57:21.676230 systemd-logind[1508]: Removed session 20. May 27 02:57:26.681776 systemd[1]: Started sshd@20-10.0.0.92:22-10.0.0.1:38950.service - OpenSSH per-connection server daemon (10.0.0.1:38950). May 27 02:57:26.735529 sshd[4208]: Accepted publickey for core from 10.0.0.1 port 38950 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:26.736760 sshd-session[4208]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:26.740336 systemd-logind[1508]: New session 21 of user core. May 27 02:57:26.750824 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 02:57:26.858510 sshd[4211]: Connection closed by 10.0.0.1 port 38950 May 27 02:57:26.859266 sshd-session[4208]: pam_unix(sshd:session): session closed for user core May 27 02:57:26.863007 systemd[1]: sshd@20-10.0.0.92:22-10.0.0.1:38950.service: Deactivated successfully. May 27 02:57:26.865504 systemd[1]: session-21.scope: Deactivated successfully. May 27 02:57:26.866416 systemd-logind[1508]: Session 21 logged out. Waiting for processes to exit. May 27 02:57:26.867726 systemd-logind[1508]: Removed session 21. May 27 02:57:31.875136 systemd[1]: Started sshd@21-10.0.0.92:22-10.0.0.1:38966.service - OpenSSH per-connection server daemon (10.0.0.1:38966). May 27 02:57:31.930451 sshd[4226]: Accepted publickey for core from 10.0.0.1 port 38966 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:31.932066 sshd-session[4226]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:31.936301 systemd-logind[1508]: New session 22 of user core. May 27 02:57:31.951854 systemd[1]: Started session-22.scope - Session 22 of User core. 
May 27 02:57:32.063059 sshd[4228]: Connection closed by 10.0.0.1 port 38966 May 27 02:57:32.063564 sshd-session[4226]: pam_unix(sshd:session): session closed for user core May 27 02:57:32.078306 systemd[1]: sshd@21-10.0.0.92:22-10.0.0.1:38966.service: Deactivated successfully. May 27 02:57:32.080504 systemd[1]: session-22.scope: Deactivated successfully. May 27 02:57:32.081260 systemd-logind[1508]: Session 22 logged out. Waiting for processes to exit. May 27 02:57:32.084584 systemd[1]: Started sshd@22-10.0.0.92:22-10.0.0.1:38982.service - OpenSSH per-connection server daemon (10.0.0.1:38982). May 27 02:57:32.085783 systemd-logind[1508]: Removed session 22. May 27 02:57:32.134304 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 38982 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:32.135666 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:32.140414 systemd-logind[1508]: New session 23 of user core. May 27 02:57:32.150897 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 02:57:33.998713 containerd[1524]: time="2025-05-27T02:57:33.997945803Z" level=info msg="StopContainer for \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" with timeout 30 (s)" May 27 02:57:33.999310 containerd[1524]: time="2025-05-27T02:57:33.999209338Z" level=info msg="Stop container \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" with signal terminated" May 27 02:57:34.032244 systemd[1]: cri-containerd-c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231.scope: Deactivated successfully. 
May 27 02:57:34.033481 containerd[1524]: time="2025-05-27T02:57:34.033444421Z" level=info msg="received exit event container_id:\"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" id:\"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" pid:3193 exited_at:{seconds:1748314654 nanos:33177250}" May 27 02:57:34.033740 containerd[1524]: time="2025-05-27T02:57:34.033589547Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" id:\"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" pid:3193 exited_at:{seconds:1748314654 nanos:33177250}" May 27 02:57:34.042315 containerd[1524]: time="2025-05-27T02:57:34.042260312Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 02:57:34.047168 containerd[1524]: time="2025-05-27T02:57:34.047137798Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" id:\"ca5ec857c81b2e72e11a87ed4e4fa8f87ef2fde4d19850aee0c249c3ceb6b8af\" pid:4270 exited_at:{seconds:1748314654 nanos:46854986}" May 27 02:57:34.050103 containerd[1524]: time="2025-05-27T02:57:34.050062321Z" level=info msg="StopContainer for \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" with timeout 2 (s)" May 27 02:57:34.050577 containerd[1524]: time="2025-05-27T02:57:34.050553062Z" level=info msg="Stop container \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" with signal terminated" May 27 02:57:34.053729 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231-rootfs.mount: Deactivated successfully. 
May 27 02:57:34.059044 systemd-networkd[1440]: lxc_health: Link DOWN May 27 02:57:34.059483 systemd-networkd[1440]: lxc_health: Lost carrier May 27 02:57:34.067765 containerd[1524]: time="2025-05-27T02:57:34.067701664Z" level=info msg="StopContainer for \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" returns successfully" May 27 02:57:34.071286 containerd[1524]: time="2025-05-27T02:57:34.071211531Z" level=info msg="StopPodSandbox for \"c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1\"" May 27 02:57:34.079520 systemd[1]: cri-containerd-30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251.scope: Deactivated successfully. May 27 02:57:34.079952 systemd[1]: cri-containerd-30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251.scope: Consumed 6.395s CPU time, 122.1M memory peak, 152K read from disk, 12.9M written to disk. May 27 02:57:34.080402 containerd[1524]: time="2025-05-27T02:57:34.080364237Z" level=info msg="received exit event container_id:\"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" id:\"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" pid:3305 exited_at:{seconds:1748314654 nanos:79995621}" May 27 02:57:34.080516 containerd[1524]: time="2025-05-27T02:57:34.080470481Z" level=info msg="TaskExit event in podsandbox handler container_id:\"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" id:\"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" pid:3305 exited_at:{seconds:1748314654 nanos:79995621}" May 27 02:57:34.083327 containerd[1524]: time="2025-05-27T02:57:34.083140434Z" level=info msg="Container to stop \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:57:34.094328 systemd[1]: cri-containerd-c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1.scope: Deactivated successfully. 
May 27 02:57:34.094956 containerd[1524]: time="2025-05-27T02:57:34.094883968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1\" id:\"c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1\" pid:2856 exit_status:137 exited_at:{seconds:1748314654 nanos:94527633}" May 27 02:57:34.101922 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251-rootfs.mount: Deactivated successfully. May 27 02:57:34.116093 containerd[1524]: time="2025-05-27T02:57:34.116048539Z" level=info msg="StopContainer for \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" returns successfully" May 27 02:57:34.117148 containerd[1524]: time="2025-05-27T02:57:34.117110344Z" level=info msg="StopPodSandbox for \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\"" May 27 02:57:34.117208 containerd[1524]: time="2025-05-27T02:57:34.117186227Z" level=info msg="Container to stop \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:57:34.117232 containerd[1524]: time="2025-05-27T02:57:34.117210268Z" level=info msg="Container to stop \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:57:34.117232 containerd[1524]: time="2025-05-27T02:57:34.117221789Z" level=info msg="Container to stop \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:57:34.117232 containerd[1524]: time="2025-05-27T02:57:34.117230269Z" level=info msg="Container to stop \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:57:34.117289 containerd[1524]: 
time="2025-05-27T02:57:34.117238149Z" level=info msg="Container to stop \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 02:57:34.123306 systemd[1]: cri-containerd-9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2.scope: Deactivated successfully. May 27 02:57:34.125824 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1-rootfs.mount: Deactivated successfully. May 27 02:57:34.141521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2-rootfs.mount: Deactivated successfully. May 27 02:57:34.144061 containerd[1524]: time="2025-05-27T02:57:34.144025877Z" level=info msg="shim disconnected" id=c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1 namespace=k8s.io May 27 02:57:34.150331 containerd[1524]: time="2025-05-27T02:57:34.144055798Z" level=warning msg="cleaning up after shim disconnected" id=c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1 namespace=k8s.io May 27 02:57:34.150464 containerd[1524]: time="2025-05-27T02:57:34.150338183Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 02:57:34.150464 containerd[1524]: time="2025-05-27T02:57:34.145817673Z" level=info msg="shim disconnected" id=9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2 namespace=k8s.io May 27 02:57:34.150531 containerd[1524]: time="2025-05-27T02:57:34.150464628Z" level=warning msg="cleaning up after shim disconnected" id=9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2 namespace=k8s.io May 27 02:57:34.150531 containerd[1524]: time="2025-05-27T02:57:34.150514430Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 02:57:34.164488 containerd[1524]: time="2025-05-27T02:57:34.164344093Z" level=info msg="TaskExit event in podsandbox handler 
container_id:\"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" id:\"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" pid:2797 exit_status:137 exited_at:{seconds:1748314654 nanos:123874109}" May 27 02:57:34.164488 containerd[1524]: time="2025-05-27T02:57:34.164423016Z" level=info msg="received exit event sandbox_id:\"c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1\" exit_status:137 exited_at:{seconds:1748314654 nanos:94527633}" May 27 02:57:34.165723 containerd[1524]: time="2025-05-27T02:57:34.164375014Z" level=info msg="received exit event sandbox_id:\"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" exit_status:137 exited_at:{seconds:1748314654 nanos:123874109}" May 27 02:57:34.166142 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2-shm.mount: Deactivated successfully. May 27 02:57:34.166363 containerd[1524]: time="2025-05-27T02:57:34.166327976Z" level=info msg="TearDown network for sandbox \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" successfully" May 27 02:57:34.166363 containerd[1524]: time="2025-05-27T02:57:34.166356257Z" level=info msg="StopPodSandbox for \"9b777d6a9035d0fc37fabd29fe265255143d12a519bccffb6b50747c3fcf68f2\" returns successfully" May 27 02:57:34.166756 containerd[1524]: time="2025-05-27T02:57:34.166729833Z" level=info msg="TearDown network for sandbox \"c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1\" successfully" May 27 02:57:34.166756 containerd[1524]: time="2025-05-27T02:57:34.166752434Z" level=info msg="StopPodSandbox for \"c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1\" returns successfully" May 27 02:57:34.331762 kubelet[2643]: I0527 02:57:34.331111 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-host-proc-sys-net\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332102 kubelet[2643]: I0527 02:57:34.331909 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cilium-cgroup\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332102 kubelet[2643]: I0527 02:57:34.331943 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/200aa7fb-f846-4325-800a-789721f45ac0-clustermesh-secrets\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332102 kubelet[2643]: I0527 02:57:34.331978 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-host-proc-sys-kernel\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332102 kubelet[2643]: I0527 02:57:34.331994 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cni-path\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332102 kubelet[2643]: I0527 02:57:34.332011 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f9gnj\" (UniqueName: \"kubernetes.io/projected/200aa7fb-f846-4325-800a-789721f45ac0-kube-api-access-f9gnj\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332102 kubelet[2643]: I0527 
02:57:34.332024 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-lib-modules\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332536 kubelet[2643]: I0527 02:57:34.332039 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-hostproc\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332536 kubelet[2643]: I0527 02:57:34.332053 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cilium-run\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332536 kubelet[2643]: I0527 02:57:34.332069 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lp62\" (UniqueName: \"kubernetes.io/projected/37a10d60-069a-4922-a689-bcf347b5c771-kube-api-access-2lp62\") pod \"37a10d60-069a-4922-a689-bcf347b5c771\" (UID: \"37a10d60-069a-4922-a689-bcf347b5c771\") " May 27 02:57:34.332536 kubelet[2643]: I0527 02:57:34.332084 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-xtables-lock\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332536 kubelet[2643]: I0527 02:57:34.332101 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/200aa7fb-f846-4325-800a-789721f45ac0-cilium-config-path\") pod 
\"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332536 kubelet[2643]: I0527 02:57:34.332115 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-bpf-maps\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332699 kubelet[2643]: I0527 02:57:34.332128 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-etc-cni-netd\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332699 kubelet[2643]: I0527 02:57:34.332144 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/37a10d60-069a-4922-a689-bcf347b5c771-cilium-config-path\") pod \"37a10d60-069a-4922-a689-bcf347b5c771\" (UID: \"37a10d60-069a-4922-a689-bcf347b5c771\") " May 27 02:57:34.332699 kubelet[2643]: I0527 02:57:34.332160 2643 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/200aa7fb-f846-4325-800a-789721f45ac0-hubble-tls\") pod \"200aa7fb-f846-4325-800a-789721f45ac0\" (UID: \"200aa7fb-f846-4325-800a-789721f45ac0\") " May 27 02:57:34.332699 kubelet[2643]: I0527 02:57:34.332588 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.332699 kubelet[2643]: I0527 02:57:34.332634 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cni-path" (OuterVolumeSpecName: "cni-path") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.332818 kubelet[2643]: I0527 02:57:34.332586 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.332818 kubelet[2643]: I0527 02:57:34.332769 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.332818 kubelet[2643]: I0527 02:57:34.332789 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.332818 kubelet[2643]: I0527 02:57:34.332810 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-hostproc" (OuterVolumeSpecName: "hostproc") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.332911 kubelet[2643]: I0527 02:57:34.332823 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.333118 kubelet[2643]: I0527 02:57:34.333085 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.334341 kubelet[2643]: I0527 02:57:34.333400 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.335216 kubelet[2643]: I0527 02:57:34.335187 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37a10d60-069a-4922-a689-bcf347b5c771-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "37a10d60-069a-4922-a689-bcf347b5c771" (UID: "37a10d60-069a-4922-a689-bcf347b5c771"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 02:57:34.335420 kubelet[2643]: I0527 02:57:34.335396 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/200aa7fb-f846-4325-800a-789721f45ac0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" May 27 02:57:34.341948 kubelet[2643]: I0527 02:57:34.341916 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" May 27 02:57:34.342027 kubelet[2643]: I0527 02:57:34.342013 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200aa7fb-f846-4325-800a-789721f45ac0-kube-api-access-f9gnj" (OuterVolumeSpecName: "kube-api-access-f9gnj") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "kube-api-access-f9gnj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 02:57:34.342092 kubelet[2643]: I0527 02:57:34.342065 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/200aa7fb-f846-4325-800a-789721f45ac0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" May 27 02:57:34.342725 kubelet[2643]: I0527 02:57:34.342701 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/200aa7fb-f846-4325-800a-789721f45ac0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "200aa7fb-f846-4325-800a-789721f45ac0" (UID: "200aa7fb-f846-4325-800a-789721f45ac0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 02:57:34.342837 kubelet[2643]: I0527 02:57:34.342789 2643 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a10d60-069a-4922-a689-bcf347b5c771-kube-api-access-2lp62" (OuterVolumeSpecName: "kube-api-access-2lp62") pod "37a10d60-069a-4922-a689-bcf347b5c771" (UID: "37a10d60-069a-4922-a689-bcf347b5c771"). InnerVolumeSpecName "kube-api-access-2lp62". 
PluginName "kubernetes.io/projected", VolumeGidValue "" May 27 02:57:34.433001 kubelet[2643]: I0527 02:57:34.432950 2643 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433001 kubelet[2643]: I0527 02:57:34.432988 2643 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433001 kubelet[2643]: I0527 02:57:34.432997 2643 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/200aa7fb-f846-4325-800a-789721f45ac0-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433001 kubelet[2643]: I0527 02:57:34.433005 2643 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433001 kubelet[2643]: I0527 02:57:34.433017 2643 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cni-path\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433218 kubelet[2643]: I0527 02:57:34.433026 2643 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f9gnj\" (UniqueName: \"kubernetes.io/projected/200aa7fb-f846-4325-800a-789721f45ac0-kube-api-access-f9gnj\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433218 kubelet[2643]: I0527 02:57:34.433035 2643 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-lib-modules\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433218 
kubelet[2643]: I0527 02:57:34.433042 2643 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-hostproc\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433218 kubelet[2643]: I0527 02:57:34.433050 2643 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lp62\" (UniqueName: \"kubernetes.io/projected/37a10d60-069a-4922-a689-bcf347b5c771-kube-api-access-2lp62\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433218 kubelet[2643]: I0527 02:57:34.433058 2643 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-cilium-run\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433218 kubelet[2643]: I0527 02:57:34.433067 2643 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-xtables-lock\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433218 kubelet[2643]: I0527 02:57:34.433074 2643 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-bpf-maps\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433218 kubelet[2643]: I0527 02:57:34.433081 2643 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/200aa7fb-f846-4325-800a-789721f45ac0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433385 kubelet[2643]: I0527 02:57:34.433091 2643 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/200aa7fb-f846-4325-800a-789721f45ac0-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433385 kubelet[2643]: I0527 02:57:34.433098 2643 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" 
(UniqueName: \"kubernetes.io/configmap/37a10d60-069a-4922-a689-bcf347b5c771-cilium-config-path\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.433385 kubelet[2643]: I0527 02:57:34.433106 2643 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/200aa7fb-f846-4325-800a-789721f45ac0-hubble-tls\") on node \"localhost\" DevicePath \"\"" May 27 02:57:34.570302 systemd[1]: Removed slice kubepods-besteffort-pod37a10d60_069a_4922_a689_bcf347b5c771.slice - libcontainer container kubepods-besteffort-pod37a10d60_069a_4922_a689_bcf347b5c771.slice. May 27 02:57:34.571408 systemd[1]: Removed slice kubepods-burstable-pod200aa7fb_f846_4325_800a_789721f45ac0.slice - libcontainer container kubepods-burstable-pod200aa7fb_f846_4325_800a_789721f45ac0.slice. May 27 02:57:34.571500 systemd[1]: kubepods-burstable-pod200aa7fb_f846_4325_800a_789721f45ac0.slice: Consumed 6.551s CPU time, 122.4M memory peak, 156K read from disk, 12.9M written to disk. 
May 27 02:57:34.798551 kubelet[2643]: I0527 02:57:34.798255 2643 scope.go:117] "RemoveContainer" containerID="c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231" May 27 02:57:34.800774 containerd[1524]: time="2025-05-27T02:57:34.800635803Z" level=info msg="RemoveContainer for \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\"" May 27 02:57:34.819589 containerd[1524]: time="2025-05-27T02:57:34.819546839Z" level=info msg="RemoveContainer for \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" returns successfully" May 27 02:57:34.820043 kubelet[2643]: I0527 02:57:34.820014 2643 scope.go:117] "RemoveContainer" containerID="c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231" May 27 02:57:34.820313 containerd[1524]: time="2025-05-27T02:57:34.820278110Z" level=error msg="ContainerStatus for \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\": not found" May 27 02:57:34.821838 kubelet[2643]: E0527 02:57:34.821609 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\": not found" containerID="c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231" May 27 02:57:34.821838 kubelet[2643]: I0527 02:57:34.821655 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231"} err="failed to get container status \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\": rpc error: code = NotFound desc = an error occurred when try to find container \"c2a9e6fa44fa85d5f4b095a5dcea7b8105167bb8c413a15a36d838d3f33e3231\": not found" May 27 02:57:34.821838 
kubelet[2643]: I0527 02:57:34.821753 2643 scope.go:117] "RemoveContainer" containerID="30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251" May 27 02:57:34.825141 containerd[1524]: time="2025-05-27T02:57:34.825083032Z" level=info msg="RemoveContainer for \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\"" May 27 02:57:34.830734 containerd[1524]: time="2025-05-27T02:57:34.830694229Z" level=info msg="RemoveContainer for \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" returns successfully" May 27 02:57:34.830931 kubelet[2643]: I0527 02:57:34.830881 2643 scope.go:117] "RemoveContainer" containerID="8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641" May 27 02:57:34.832241 containerd[1524]: time="2025-05-27T02:57:34.832215813Z" level=info msg="RemoveContainer for \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\"" May 27 02:57:34.836366 containerd[1524]: time="2025-05-27T02:57:34.836331906Z" level=info msg="RemoveContainer for \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\" returns successfully" May 27 02:57:34.836562 kubelet[2643]: I0527 02:57:34.836512 2643 scope.go:117] "RemoveContainer" containerID="eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f" May 27 02:57:34.840629 containerd[1524]: time="2025-05-27T02:57:34.839793492Z" level=info msg="RemoveContainer for \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\"" May 27 02:57:34.842960 containerd[1524]: time="2025-05-27T02:57:34.842922384Z" level=info msg="RemoveContainer for \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\" returns successfully" May 27 02:57:34.843211 kubelet[2643]: I0527 02:57:34.843180 2643 scope.go:117] "RemoveContainer" containerID="5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01" May 27 02:57:34.844663 containerd[1524]: time="2025-05-27T02:57:34.844642856Z" level=info msg="RemoveContainer for 
\"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\"" May 27 02:57:34.847350 containerd[1524]: time="2025-05-27T02:57:34.847318129Z" level=info msg="RemoveContainer for \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\" returns successfully" May 27 02:57:34.847540 kubelet[2643]: I0527 02:57:34.847510 2643 scope.go:117] "RemoveContainer" containerID="20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c" May 27 02:57:34.849213 containerd[1524]: time="2025-05-27T02:57:34.849149326Z" level=info msg="RemoveContainer for \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\"" May 27 02:57:34.851760 containerd[1524]: time="2025-05-27T02:57:34.851723634Z" level=info msg="RemoveContainer for \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\" returns successfully" May 27 02:57:34.851960 kubelet[2643]: I0527 02:57:34.851923 2643 scope.go:117] "RemoveContainer" containerID="30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251" May 27 02:57:34.852226 containerd[1524]: time="2025-05-27T02:57:34.852154932Z" level=error msg="ContainerStatus for \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\": not found" May 27 02:57:34.852380 kubelet[2643]: E0527 02:57:34.852351 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\": not found" containerID="30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251" May 27 02:57:34.852411 kubelet[2643]: I0527 02:57:34.852385 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251"} err="failed to get 
container status \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\": rpc error: code = NotFound desc = an error occurred when try to find container \"30fcecfa3916750c6d00b3b11d6fd8fee860fb1a561a7f6f8c730337033c3251\": not found" May 27 02:57:34.852411 kubelet[2643]: I0527 02:57:34.852407 2643 scope.go:117] "RemoveContainer" containerID="8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641" May 27 02:57:34.852611 containerd[1524]: time="2025-05-27T02:57:34.852572270Z" level=error msg="ContainerStatus for \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\": not found" May 27 02:57:34.852739 kubelet[2643]: E0527 02:57:34.852717 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\": not found" containerID="8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641" May 27 02:57:34.852773 kubelet[2643]: I0527 02:57:34.852752 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641"} err="failed to get container status \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\": rpc error: code = NotFound desc = an error occurred when try to find container \"8355952abc5d6ff60ec5fa47942440f41298d23e889385c09a26e38baca99641\": not found" May 27 02:57:34.852802 kubelet[2643]: I0527 02:57:34.852775 2643 scope.go:117] "RemoveContainer" containerID="eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f" May 27 02:57:34.853053 containerd[1524]: time="2025-05-27T02:57:34.853019529Z" level=error msg="ContainerStatus for 
\"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\": not found" May 27 02:57:34.853181 kubelet[2643]: E0527 02:57:34.853160 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\": not found" containerID="eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f" May 27 02:57:34.853244 kubelet[2643]: I0527 02:57:34.853226 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f"} err="failed to get container status \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\": rpc error: code = NotFound desc = an error occurred when try to find container \"eab9bb6fdc56cd402fdbd66cdf90fb809de17343e853f0467dfd21bfb729e04f\": not found" May 27 02:57:34.853268 kubelet[2643]: I0527 02:57:34.853247 2643 scope.go:117] "RemoveContainer" containerID="5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01" May 27 02:57:34.853432 containerd[1524]: time="2025-05-27T02:57:34.853406545Z" level=error msg="ContainerStatus for \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\": not found" May 27 02:57:34.853558 kubelet[2643]: E0527 02:57:34.853529 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\": not found" 
containerID="5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01" May 27 02:57:34.853583 kubelet[2643]: I0527 02:57:34.853565 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01"} err="failed to get container status \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b722d67086c167404974b6bc916ee55f262bfad8443c311b77f359204b17b01\": not found" May 27 02:57:34.853583 kubelet[2643]: I0527 02:57:34.853581 2643 scope.go:117] "RemoveContainer" containerID="20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c" May 27 02:57:34.853890 containerd[1524]: time="2025-05-27T02:57:34.853844203Z" level=error msg="ContainerStatus for \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\": not found" May 27 02:57:34.854042 kubelet[2643]: E0527 02:57:34.854021 2643 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\": not found" containerID="20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c" May 27 02:57:34.854068 kubelet[2643]: I0527 02:57:34.854048 2643 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c"} err="failed to get container status \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\": rpc error: code = NotFound desc = an error occurred when try to find container \"20078f16e0d2552dd6effb8122c710031707a92bfd5a8e3d19dc89756b47d79c\": not found" May 27 
02:57:35.053118 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9a79047560fd652dc29eaf27b3e6935d7d79b7d35dfccffec57059e2e5deae1-shm.mount: Deactivated successfully. May 27 02:57:35.053211 systemd[1]: var-lib-kubelet-pods-37a10d60\x2d069a\x2d4922\x2da689\x2dbcf347b5c771-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2lp62.mount: Deactivated successfully. May 27 02:57:35.053266 systemd[1]: var-lib-kubelet-pods-200aa7fb\x2df846\x2d4325\x2d800a\x2d789721f45ac0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2df9gnj.mount: Deactivated successfully. May 27 02:57:35.053315 systemd[1]: var-lib-kubelet-pods-200aa7fb\x2df846\x2d4325\x2d800a\x2d789721f45ac0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. May 27 02:57:35.053363 systemd[1]: var-lib-kubelet-pods-200aa7fb\x2df846\x2d4325\x2d800a\x2d789721f45ac0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. May 27 02:57:35.953004 sshd[4243]: Connection closed by 10.0.0.1 port 38982 May 27 02:57:35.953557 sshd-session[4241]: pam_unix(sshd:session): session closed for user core May 27 02:57:35.966007 systemd[1]: sshd@22-10.0.0.92:22-10.0.0.1:38982.service: Deactivated successfully. May 27 02:57:35.967541 systemd[1]: session-23.scope: Deactivated successfully. May 27 02:57:35.967746 systemd[1]: session-23.scope: Consumed 1.177s CPU time, 23.8M memory peak. May 27 02:57:35.968204 systemd-logind[1508]: Session 23 logged out. Waiting for processes to exit. May 27 02:57:35.970850 systemd[1]: Started sshd@23-10.0.0.92:22-10.0.0.1:55514.service - OpenSSH per-connection server daemon (10.0.0.1:55514). May 27 02:57:35.973202 systemd-logind[1508]: Removed session 23. 
May 27 02:57:36.029617 sshd[4397]: Accepted publickey for core from 10.0.0.1 port 55514 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:36.030920 sshd-session[4397]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:36.034625 systemd-logind[1508]: New session 24 of user core. May 27 02:57:36.042868 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 02:57:36.565813 kubelet[2643]: I0527 02:57:36.565774 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="200aa7fb-f846-4325-800a-789721f45ac0" path="/var/lib/kubelet/pods/200aa7fb-f846-4325-800a-789721f45ac0/volumes" May 27 02:57:36.567679 kubelet[2643]: I0527 02:57:36.566290 2643 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37a10d60-069a-4922-a689-bcf347b5c771" path="/var/lib/kubelet/pods/37a10d60-069a-4922-a689-bcf347b5c771/volumes" May 27 02:57:37.137692 sshd[4399]: Connection closed by 10.0.0.1 port 55514 May 27 02:57:37.138168 sshd-session[4397]: pam_unix(sshd:session): session closed for user core May 27 02:57:37.149248 systemd[1]: sshd@23-10.0.0.92:22-10.0.0.1:55514.service: Deactivated successfully. May 27 02:57:37.153078 systemd[1]: session-24.scope: Deactivated successfully. May 27 02:57:37.153839 systemd[1]: session-24.scope: Consumed 1.015s CPU time, 24M memory peak. May 27 02:57:37.156568 systemd-logind[1508]: Session 24 logged out. Waiting for processes to exit. 
May 27 02:57:37.161123 kubelet[2643]: E0527 02:57:37.161089 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="200aa7fb-f846-4325-800a-789721f45ac0" containerName="apply-sysctl-overwrites" May 27 02:57:37.161123 kubelet[2643]: E0527 02:57:37.161114 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="37a10d60-069a-4922-a689-bcf347b5c771" containerName="cilium-operator" May 27 02:57:37.161123 kubelet[2643]: E0527 02:57:37.161122 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="200aa7fb-f846-4325-800a-789721f45ac0" containerName="mount-bpf-fs" May 27 02:57:37.161123 kubelet[2643]: E0527 02:57:37.161129 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="200aa7fb-f846-4325-800a-789721f45ac0" containerName="mount-cgroup" May 27 02:57:37.161260 kubelet[2643]: E0527 02:57:37.161134 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="200aa7fb-f846-4325-800a-789721f45ac0" containerName="clean-cilium-state" May 27 02:57:37.161260 kubelet[2643]: E0527 02:57:37.161140 2643 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="200aa7fb-f846-4325-800a-789721f45ac0" containerName="cilium-agent" May 27 02:57:37.161260 kubelet[2643]: I0527 02:57:37.161164 2643 memory_manager.go:354] "RemoveStaleState removing state" podUID="37a10d60-069a-4922-a689-bcf347b5c771" containerName="cilium-operator" May 27 02:57:37.161260 kubelet[2643]: I0527 02:57:37.161169 2643 memory_manager.go:354] "RemoveStaleState removing state" podUID="200aa7fb-f846-4325-800a-789721f45ac0" containerName="cilium-agent" May 27 02:57:37.165420 systemd[1]: Started sshd@24-10.0.0.92:22-10.0.0.1:55520.service - OpenSSH per-connection server daemon (10.0.0.1:55520). May 27 02:57:37.170627 systemd-logind[1508]: Removed session 24. 
May 27 02:57:37.187645 systemd[1]: Created slice kubepods-burstable-pod9576483e_2ea5_4ba8_83e8_7b89f6659c6b.slice - libcontainer container kubepods-burstable-pod9576483e_2ea5_4ba8_83e8_7b89f6659c6b.slice. May 27 02:57:37.218347 sshd[4411]: Accepted publickey for core from 10.0.0.1 port 55520 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:37.219591 sshd-session[4411]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:37.224243 systemd-logind[1508]: New session 25 of user core. May 27 02:57:37.231834 systemd[1]: Started session-25.scope - Session 25 of User core. May 27 02:57:37.281074 sshd[4413]: Connection closed by 10.0.0.1 port 55520 May 27 02:57:37.281533 sshd-session[4411]: pam_unix(sshd:session): session closed for user core May 27 02:57:37.291870 systemd[1]: sshd@24-10.0.0.92:22-10.0.0.1:55520.service: Deactivated successfully. May 27 02:57:37.293468 systemd[1]: session-25.scope: Deactivated successfully. May 27 02:57:37.294154 systemd-logind[1508]: Session 25 logged out. Waiting for processes to exit. May 27 02:57:37.296739 systemd[1]: Started sshd@25-10.0.0.92:22-10.0.0.1:55532.service - OpenSSH per-connection server daemon (10.0.0.1:55532). May 27 02:57:37.297353 systemd-logind[1508]: Removed session 25. May 27 02:57:37.342728 sshd[4420]: Accepted publickey for core from 10.0.0.1 port 55532 ssh2: RSA SHA256:SbE+pbEGsQ3+BBFd86hKUXv5mFEyG6MA7PyvB6kMiX8 May 27 02:57:37.343762 sshd-session[4420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 02:57:37.347487 systemd-logind[1508]: New session 26 of user core. 
May 27 02:57:37.348308 kubelet[2643]: I0527 02:57:37.347831 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-cni-path\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.348308 kubelet[2643]: I0527 02:57:37.347871 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-cilium-ipsec-secrets\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.348308 kubelet[2643]: I0527 02:57:37.347891 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-hostproc\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.348308 kubelet[2643]: I0527 02:57:37.347918 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-lib-modules\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.348308 kubelet[2643]: I0527 02:57:37.347936 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-clustermesh-secrets\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.348308 kubelet[2643]: I0527 02:57:37.347954 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-cilium-config-path\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.348481 kubelet[2643]: I0527 02:57:37.347971 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-xtables-lock\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.348481 kubelet[2643]: I0527 02:57:37.347987 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-hubble-tls\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.348481 kubelet[2643]: I0527 02:57:37.348004 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-host-proc-sys-net\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.348481 kubelet[2643]: I0527 02:57:37.348021 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-etc-cni-netd\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.352106 kubelet[2643]: I0527 02:57:37.348625 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgg22\" (UniqueName: \"kubernetes.io/projected/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-kube-api-access-hgg22\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.352172 kubelet[2643]: I0527 02:57:37.352138 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-bpf-maps\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.352172 kubelet[2643]: I0527 02:57:37.352162 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-cilium-run\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.352230 kubelet[2643]: I0527 02:57:37.352186 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-cilium-cgroup\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.352230 kubelet[2643]: I0527 02:57:37.352204 2643 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/9576483e-2ea5-4ba8-83e8-7b89f6659c6b-host-proc-sys-kernel\") pod \"cilium-jpn2f\" (UID: \"9576483e-2ea5-4ba8-83e8-7b89f6659c6b\") " pod="kube-system/cilium-jpn2f"
May 27 02:57:37.354841 systemd[1]: Started session-26.scope - Session 26 of User core.
May 27 02:57:37.492132 kubelet[2643]: E0527 02:57:37.492015 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:37.492648 containerd[1524]: time="2025-05-27T02:57:37.492610520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jpn2f,Uid:9576483e-2ea5-4ba8-83e8-7b89f6659c6b,Namespace:kube-system,Attempt:0,}"
May 27 02:57:37.511214 containerd[1524]: time="2025-05-27T02:57:37.511153909Z" level=info msg="connecting to shim b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04" address="unix:///run/containerd/s/129acc12468b85e33868e508e3a5e24170de7b20b5b25772cf41716180dda7c8" namespace=k8s.io protocol=ttrpc version=3
May 27 02:57:37.543882 systemd[1]: Started cri-containerd-b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04.scope - libcontainer container b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04.
May 27 02:57:37.573257 containerd[1524]: time="2025-05-27T02:57:37.573215805Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jpn2f,Uid:9576483e-2ea5-4ba8-83e8-7b89f6659c6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\""
May 27 02:57:37.574182 kubelet[2643]: E0527 02:57:37.574094 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:37.577953 containerd[1524]: time="2025-05-27T02:57:37.577901104Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 02:57:37.587317 containerd[1524]: time="2025-05-27T02:57:37.586975932Z" level=info msg="Container de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898: CDI devices from CRI Config.CDIDevices: []"
May 27 02:57:37.593252 containerd[1524]: time="2025-05-27T02:57:37.593177129Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898\""
May 27 02:57:37.593694 containerd[1524]: time="2025-05-27T02:57:37.593662788Z" level=info msg="StartContainer for \"de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898\""
May 27 02:57:37.594725 containerd[1524]: time="2025-05-27T02:57:37.594659266Z" level=info msg="connecting to shim de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898" address="unix:///run/containerd/s/129acc12468b85e33868e508e3a5e24170de7b20b5b25772cf41716180dda7c8" protocol=ttrpc version=3
May 27 02:57:37.618886 systemd[1]: Started cri-containerd-de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898.scope - libcontainer container de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898.
May 27 02:57:37.644047 containerd[1524]: time="2025-05-27T02:57:37.643995234Z" level=info msg="StartContainer for \"de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898\" returns successfully"
May 27 02:57:37.679552 systemd[1]: cri-containerd-de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898.scope: Deactivated successfully.
May 27 02:57:37.680873 containerd[1524]: time="2025-05-27T02:57:37.680833605Z" level=info msg="received exit event container_id:\"de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898\" id:\"de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898\" pid:4492 exited_at:{seconds:1748314657 nanos:680498432}"
May 27 02:57:37.681152 containerd[1524]: time="2025-05-27T02:57:37.681131536Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898\" id:\"de62e9eab7b49b5d574e4dfbde0d3ddc44f92a17d939f938ab6c670a764cd898\" pid:4492 exited_at:{seconds:1748314657 nanos:680498432}"
May 27 02:57:37.815609 kubelet[2643]: E0527 02:57:37.815383 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:37.818467 containerd[1524]: time="2025-05-27T02:57:37.818429072Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 02:57:37.828143 containerd[1524]: time="2025-05-27T02:57:37.828098202Z" level=info msg="Container a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4: CDI devices from CRI Config.CDIDevices: []"
May 27 02:57:37.842409 containerd[1524]: time="2025-05-27T02:57:37.842367268Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4\""
May 27 02:57:37.843344 containerd[1524]: time="2025-05-27T02:57:37.843045654Z" level=info msg="StartContainer for \"a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4\""
May 27 02:57:37.844122 containerd[1524]: time="2025-05-27T02:57:37.844094414Z" level=info msg="connecting to shim a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4" address="unix:///run/containerd/s/129acc12468b85e33868e508e3a5e24170de7b20b5b25772cf41716180dda7c8" protocol=ttrpc version=3
May 27 02:57:37.861894 systemd[1]: Started cri-containerd-a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4.scope - libcontainer container a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4.
May 27 02:57:37.886972 containerd[1524]: time="2025-05-27T02:57:37.886889012Z" level=info msg="StartContainer for \"a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4\" returns successfully"
May 27 02:57:37.892230 systemd[1]: cri-containerd-a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4.scope: Deactivated successfully.
May 27 02:57:37.893056 containerd[1524]: time="2025-05-27T02:57:37.892916083Z" level=info msg="received exit event container_id:\"a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4\" id:\"a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4\" pid:4537 exited_at:{seconds:1748314657 nanos:892427584}"
May 27 02:57:37.893405 containerd[1524]: time="2025-05-27T02:57:37.893383741Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4\" id:\"a94d42b8dfd848ad236ee5e7483593f96602f4c6877f5318464d7b7f653c21e4\" pid:4537 exited_at:{seconds:1748314657 nanos:892427584}"
May 27 02:57:38.628980 kubelet[2643]: E0527 02:57:38.628926 2643 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
May 27 02:57:38.819307 kubelet[2643]: E0527 02:57:38.819228 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:38.821632 containerd[1524]: time="2025-05-27T02:57:38.821561258Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 02:57:38.833370 containerd[1524]: time="2025-05-27T02:57:38.833305853Z" level=info msg="Container 91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332: CDI devices from CRI Config.CDIDevices: []"
May 27 02:57:38.845016 containerd[1524]: time="2025-05-27T02:57:38.844816400Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332\""
May 27 02:57:38.845470 containerd[1524]: time="2025-05-27T02:57:38.845428783Z" level=info msg="StartContainer for \"91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332\""
May 27 02:57:38.847351 containerd[1524]: time="2025-05-27T02:57:38.847317053Z" level=info msg="connecting to shim 91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332" address="unix:///run/containerd/s/129acc12468b85e33868e508e3a5e24170de7b20b5b25772cf41716180dda7c8" protocol=ttrpc version=3
May 27 02:57:38.864860 systemd[1]: Started cri-containerd-91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332.scope - libcontainer container 91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332.
May 27 02:57:38.895926 systemd[1]: cri-containerd-91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332.scope: Deactivated successfully.
May 27 02:57:38.898096 containerd[1524]: time="2025-05-27T02:57:38.898025973Z" level=info msg="received exit event container_id:\"91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332\" id:\"91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332\" pid:4583 exited_at:{seconds:1748314658 nanos:897149861}"
May 27 02:57:38.898178 containerd[1524]: time="2025-05-27T02:57:38.898114297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332\" id:\"91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332\" pid:4583 exited_at:{seconds:1748314658 nanos:897149861}"
May 27 02:57:38.907228 containerd[1524]: time="2025-05-27T02:57:38.907163792Z" level=info msg="StartContainer for \"91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332\" returns successfully"
May 27 02:57:38.919906 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-91d2098c1099c9a9b75dc5de50a84ab685bc2f4583d87617ce3f7cf4d2958332-rootfs.mount: Deactivated successfully.
May 27 02:57:39.828205 kubelet[2643]: E0527 02:57:39.828161 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:39.831597 containerd[1524]: time="2025-05-27T02:57:39.831557950Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 02:57:39.842171 containerd[1524]: time="2025-05-27T02:57:39.840907806Z" level=info msg="Container 459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7: CDI devices from CRI Config.CDIDevices: []"
May 27 02:57:39.849337 containerd[1524]: time="2025-05-27T02:57:39.849255426Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7\""
May 27 02:57:39.849724 containerd[1524]: time="2025-05-27T02:57:39.849703962Z" level=info msg="StartContainer for \"459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7\""
May 27 02:57:39.850527 containerd[1524]: time="2025-05-27T02:57:39.850502350Z" level=info msg="connecting to shim 459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7" address="unix:///run/containerd/s/129acc12468b85e33868e508e3a5e24170de7b20b5b25772cf41716180dda7c8" protocol=ttrpc version=3
May 27 02:57:39.874865 systemd[1]: Started cri-containerd-459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7.scope - libcontainer container 459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7.
May 27 02:57:39.896439 systemd[1]: cri-containerd-459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7.scope: Deactivated successfully.
May 27 02:57:39.898848 containerd[1524]: time="2025-05-27T02:57:39.898752404Z" level=info msg="TaskExit event in podsandbox handler container_id:\"459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7\" id:\"459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7\" pid:4622 exited_at:{seconds:1748314659 nanos:897590642}"
May 27 02:57:39.899098 containerd[1524]: time="2025-05-27T02:57:39.899020933Z" level=info msg="received exit event container_id:\"459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7\" id:\"459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7\" pid:4622 exited_at:{seconds:1748314659 nanos:897590642}"
May 27 02:57:39.905960 containerd[1524]: time="2025-05-27T02:57:39.905923021Z" level=info msg="StartContainer for \"459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7\" returns successfully"
May 27 02:57:39.917481 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-459532217a155d1a2ebca2f053b22e83519972ac8365eacb7da8741a5de07cc7-rootfs.mount: Deactivated successfully.
May 27 02:57:40.437979 kubelet[2643]: I0527 02:57:40.437927 2643 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-05-27T02:57:40Z","lastTransitionTime":"2025-05-27T02:57:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
May 27 02:57:40.834756 kubelet[2643]: E0527 02:57:40.834536 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:40.840042 containerd[1524]: time="2025-05-27T02:57:40.839921592Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 02:57:40.851736 containerd[1524]: time="2025-05-27T02:57:40.851125462Z" level=info msg="Container ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073: CDI devices from CRI Config.CDIDevices: []"
May 27 02:57:40.858287 containerd[1524]: time="2025-05-27T02:57:40.858232590Z" level=info msg="CreateContainer within sandbox \"b859b8632fd113649bea72218eba3b84925cb669a8bfb64933e89c35ec623e04\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073\""
May 27 02:57:40.858898 containerd[1524]: time="2025-05-27T02:57:40.858861211Z" level=info msg="StartContainer for \"ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073\""
May 27 02:57:40.859790 containerd[1524]: time="2025-05-27T02:57:40.859760243Z" level=info msg="connecting to shim ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073" address="unix:///run/containerd/s/129acc12468b85e33868e508e3a5e24170de7b20b5b25772cf41716180dda7c8" protocol=ttrpc version=3
May 27 02:57:40.881851 systemd[1]: Started cri-containerd-ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073.scope - libcontainer container ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073.
May 27 02:57:40.909691 containerd[1524]: time="2025-05-27T02:57:40.909633458Z" level=info msg="StartContainer for \"ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073\" returns successfully"
May 27 02:57:40.960264 containerd[1524]: time="2025-05-27T02:57:40.960045973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073\" id:\"db0cd0cbc7402e73d50468ddcf5cc706f3c932723cbfbe3b531e2755cc0573cd\" pid:4689 exited_at:{seconds:1748314660 nanos:959759323}"
May 27 02:57:41.177704 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
May 27 02:57:41.840238 kubelet[2643]: E0527 02:57:41.840210 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:41.854907 kubelet[2643]: I0527 02:57:41.854851 2643 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-jpn2f" podStartSLOduration=4.854831624 podStartE2EDuration="4.854831624s" podCreationTimestamp="2025-05-27 02:57:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 02:57:41.854008116 +0000 UTC m=+83.363276810" watchObservedRunningTime="2025-05-27 02:57:41.854831624 +0000 UTC m=+83.364100318"
May 27 02:57:43.493557 kubelet[2643]: E0527 02:57:43.493308 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:43.693194 containerd[1524]: time="2025-05-27T02:57:43.693142191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073\" id:\"f0ed969d142770162b382e6a97ec4efd469026b66d84533f180ea9d95238cd7c\" pid:5105 exit_status:1 exited_at:{seconds:1748314663 nanos:692825661}"
May 27 02:57:43.979330 systemd-networkd[1440]: lxc_health: Link UP
May 27 02:57:43.980188 systemd-networkd[1440]: lxc_health: Gained carrier
May 27 02:57:44.564711 kubelet[2643]: E0527 02:57:44.563360 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:45.494751 kubelet[2643]: E0527 02:57:45.494724 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:45.563824 kubelet[2643]: E0527 02:57:45.563714 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:45.629846 systemd-networkd[1440]: lxc_health: Gained IPv6LL
May 27 02:57:45.827144 containerd[1524]: time="2025-05-27T02:57:45.827031630Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073\" id:\"51c3611917bcacf18d3724533a995519b6078aa95a3c6f816722147cc4e24786\" pid:5229 exited_at:{seconds:1748314665 nanos:825984518}"
May 27 02:57:45.850891 kubelet[2643]: E0527 02:57:45.850793 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:47.564273 kubelet[2643]: E0527 02:57:47.564228 2643 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 02:57:47.956577 containerd[1524]: time="2025-05-27T02:57:47.956522187Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073\" id:\"07ae61ba042fc83f3fc3f1b022c9e27443f12056b4d4309f5ba95c190e19982f\" pid:5263 exited_at:{seconds:1748314667 nanos:956149857}"
May 27 02:57:50.067488 containerd[1524]: time="2025-05-27T02:57:50.067380906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ebcddd9168131797df0f5d6a4142b2a6de23f3270a274c40b0c1e5ef95492073\" id:\"9be4ffc761cf06682f91121af381aacbfeb08a3854dfc5ee6df3fbd984140c65\" pid:5288 exited_at:{seconds:1748314670 nanos:66645087}"
May 27 02:57:50.072318 sshd[4422]: Connection closed by 10.0.0.1 port 55532
May 27 02:57:50.072825 sshd-session[4420]: pam_unix(sshd:session): session closed for user core
May 27 02:57:50.078293 systemd-logind[1508]: Session 26 logged out. Waiting for processes to exit.
May 27 02:57:50.078783 systemd[1]: sshd@25-10.0.0.92:22-10.0.0.1:55532.service: Deactivated successfully.
May 27 02:57:50.080570 systemd[1]: session-26.scope: Deactivated successfully.
May 27 02:57:50.082282 systemd-logind[1508]: Removed session 26.