May 27 17:14:10.792635 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
May 27 17:14:10.792657 kernel: Linux version 6.12.30-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue May 27 15:31:23 -00 2025
May 27 17:14:10.792666 kernel: KASLR enabled
May 27 17:14:10.792672 kernel: efi: EFI v2.7 by EDK II
May 27 17:14:10.792678 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
May 27 17:14:10.792683 kernel: random: crng init done
May 27 17:14:10.792690 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
May 27 17:14:10.792695 kernel: secureboot: Secure boot enabled
May 27 17:14:10.792701 kernel: ACPI: Early table checksum verification disabled
May 27 17:14:10.792708 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
May 27 17:14:10.792714 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
May 27 17:14:10.792720 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:14:10.792726 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:14:10.792731 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:14:10.792738 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:14:10.792745 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:14:10.792752 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:14:10.792758 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:14:10.792764 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:14:10.792770 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
May 27 17:14:10.792776 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
May 27 17:14:10.792782 kernel: ACPI: Use ACPI SPCR as default console: Yes
May 27 17:14:10.792788 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
May 27 17:14:10.792794 kernel: NODE_DATA(0) allocated [mem 0xdc737dc0-0xdc73efff]
May 27 17:14:10.792800 kernel: Zone ranges:
May 27 17:14:10.792807 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
May 27 17:14:10.792813 kernel: DMA32 empty
May 27 17:14:10.792819 kernel: Normal empty
May 27 17:14:10.792825 kernel: Device empty
May 27 17:14:10.792831 kernel: Movable zone start for each node
May 27 17:14:10.792837 kernel: Early memory node ranges
May 27 17:14:10.792843 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
May 27 17:14:10.792849 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
May 27 17:14:10.792855 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
May 27 17:14:10.792861 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
May 27 17:14:10.792868 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
May 27 17:14:10.792874 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
May 27 17:14:10.792892 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
May 27 17:14:10.792899 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
May 27 17:14:10.792906 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
May 27 17:14:10.792916 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
May 27 17:14:10.792922 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
May 27 17:14:10.792929 kernel: psci: probing for conduit method from ACPI.
May 27 17:14:10.792935 kernel: psci: PSCIv1.1 detected in firmware.
May 27 17:14:10.792943 kernel: psci: Using standard PSCI v0.2 function IDs
May 27 17:14:10.792949 kernel: psci: Trusted OS migration not required
May 27 17:14:10.792956 kernel: psci: SMC Calling Convention v1.1
May 27 17:14:10.792963 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
May 27 17:14:10.792970 kernel: percpu: Embedded 33 pages/cpu s98136 r8192 d28840 u135168
May 27 17:14:10.792976 kernel: pcpu-alloc: s98136 r8192 d28840 u135168 alloc=33*4096
May 27 17:14:10.792982 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
May 27 17:14:10.792989 kernel: Detected PIPT I-cache on CPU0
May 27 17:14:10.792995 kernel: CPU features: detected: GIC system register CPU interface
May 27 17:14:10.793003 kernel: CPU features: detected: Spectre-v4
May 27 17:14:10.793009 kernel: CPU features: detected: Spectre-BHB
May 27 17:14:10.793016 kernel: CPU features: kernel page table isolation forced ON by KASLR
May 27 17:14:10.793022 kernel: CPU features: detected: Kernel page table isolation (KPTI)
May 27 17:14:10.793028 kernel: CPU features: detected: ARM erratum 1418040
May 27 17:14:10.793035 kernel: CPU features: detected: SSBS not fully self-synchronizing
May 27 17:14:10.793041 kernel: alternatives: applying boot alternatives
May 27 17:14:10.793048 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:14:10.793102 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 27 17:14:10.793112 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 27 17:14:10.793118 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 27 17:14:10.793128 kernel: Fallback order for Node 0: 0
May 27 17:14:10.793134 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
May 27 17:14:10.793140 kernel: Policy zone: DMA
May 27 17:14:10.793147 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 27 17:14:10.793153 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
May 27 17:14:10.793160 kernel: software IO TLB: area num 4.
May 27 17:14:10.793166 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
May 27 17:14:10.793173 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
May 27 17:14:10.793179 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
May 27 17:14:10.793185 kernel: rcu: Preemptible hierarchical RCU implementation.
May 27 17:14:10.793193 kernel: rcu: RCU event tracing is enabled.
May 27 17:14:10.793199 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
May 27 17:14:10.793207 kernel: Trampoline variant of Tasks RCU enabled.
May 27 17:14:10.793214 kernel: Tracing variant of Tasks RCU enabled.
May 27 17:14:10.793220 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 27 17:14:10.793227 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
May 27 17:14:10.793234 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:14:10.793240 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
May 27 17:14:10.793246 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
May 27 17:14:10.793253 kernel: GICv3: 256 SPIs implemented
May 27 17:14:10.793259 kernel: GICv3: 0 Extended SPIs implemented
May 27 17:14:10.793266 kernel: Root IRQ handler: gic_handle_irq
May 27 17:14:10.793272 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
May 27 17:14:10.793280 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
May 27 17:14:10.793286 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
May 27 17:14:10.793293 kernel: ITS [mem 0x08080000-0x0809ffff]
May 27 17:14:10.793299 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400d0000 (indirect, esz 8, psz 64K, shr 1)
May 27 17:14:10.793306 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400e0000 (flat, esz 8, psz 64K, shr 1)
May 27 17:14:10.793312 kernel: GICv3: using LPI property table @0x00000000400f0000
May 27 17:14:10.793318 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040110000
May 27 17:14:10.793325 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 27 17:14:10.793331 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 17:14:10.793338 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
May 27 17:14:10.793344 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
May 27 17:14:10.793351 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
May 27 17:14:10.793359 kernel: arm-pv: using stolen time PV
May 27 17:14:10.793365 kernel: Console: colour dummy device 80x25
May 27 17:14:10.793372 kernel: ACPI: Core revision 20240827
May 27 17:14:10.793379 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
May 27 17:14:10.793386 kernel: pid_max: default: 32768 minimum: 301
May 27 17:14:10.793393 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
May 27 17:14:10.793399 kernel: landlock: Up and running.
May 27 17:14:10.793405 kernel: SELinux: Initializing.
May 27 17:14:10.793412 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:14:10.793420 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
May 27 17:14:10.793427 kernel: rcu: Hierarchical SRCU implementation.
May 27 17:14:10.793434 kernel: rcu: Max phase no-delay instances is 400.
May 27 17:14:10.793441 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
May 27 17:14:10.793447 kernel: Remapping and enabling EFI services.
May 27 17:14:10.793454 kernel: smp: Bringing up secondary CPUs ...
May 27 17:14:10.793460 kernel: Detected PIPT I-cache on CPU1
May 27 17:14:10.793467 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
May 27 17:14:10.793474 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040120000
May 27 17:14:10.793482 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 17:14:10.793493 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
May 27 17:14:10.793500 kernel: Detected PIPT I-cache on CPU2
May 27 17:14:10.793508 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
May 27 17:14:10.793515 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040130000
May 27 17:14:10.793522 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 17:14:10.793529 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
May 27 17:14:10.793536 kernel: Detected PIPT I-cache on CPU3
May 27 17:14:10.793543 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
May 27 17:14:10.793551 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040140000
May 27 17:14:10.793558 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
May 27 17:14:10.793565 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
May 27 17:14:10.793572 kernel: smp: Brought up 1 node, 4 CPUs
May 27 17:14:10.793578 kernel: SMP: Total of 4 processors activated.
May 27 17:14:10.793585 kernel: CPU: All CPU(s) started at EL1
May 27 17:14:10.793592 kernel: CPU features: detected: 32-bit EL0 Support
May 27 17:14:10.793599 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
May 27 17:14:10.793606 kernel: CPU features: detected: Common not Private translations
May 27 17:14:10.793614 kernel: CPU features: detected: CRC32 instructions
May 27 17:14:10.793621 kernel: CPU features: detected: Enhanced Virtualization Traps
May 27 17:14:10.793628 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
May 27 17:14:10.793635 kernel: CPU features: detected: LSE atomic instructions
May 27 17:14:10.793642 kernel: CPU features: detected: Privileged Access Never
May 27 17:14:10.793649 kernel: CPU features: detected: RAS Extension Support
May 27 17:14:10.793656 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
May 27 17:14:10.793663 kernel: alternatives: applying system-wide alternatives
May 27 17:14:10.793670 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
May 27 17:14:10.793678 kernel: Memory: 2438884K/2572288K available (11072K kernel code, 2276K rwdata, 8936K rodata, 39424K init, 1034K bss, 127636K reserved, 0K cma-reserved)
May 27 17:14:10.793685 kernel: devtmpfs: initialized
May 27 17:14:10.793692 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 27 17:14:10.793699 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
May 27 17:14:10.793706 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
May 27 17:14:10.793713 kernel: 0 pages in range for non-PLT usage
May 27 17:14:10.793720 kernel: 508544 pages in range for PLT usage
May 27 17:14:10.793727 kernel: pinctrl core: initialized pinctrl subsystem
May 27 17:14:10.793733 kernel: SMBIOS 3.0.0 present.
May 27 17:14:10.793742 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
May 27 17:14:10.793748 kernel: DMI: Memory slots populated: 1/1
May 27 17:14:10.793755 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 27 17:14:10.793762 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
May 27 17:14:10.793769 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
May 27 17:14:10.793776 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
May 27 17:14:10.793783 kernel: audit: initializing netlink subsys (disabled)
May 27 17:14:10.793790 kernel: audit: type=2000 audit(0.035:1): state=initialized audit_enabled=0 res=1
May 27 17:14:10.793797 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 27 17:14:10.793806 kernel: cpuidle: using governor menu
May 27 17:14:10.793813 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
May 27 17:14:10.793820 kernel: ASID allocator initialised with 32768 entries
May 27 17:14:10.793826 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 27 17:14:10.793833 kernel: Serial: AMBA PL011 UART driver
May 27 17:14:10.793840 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
May 27 17:14:10.793847 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
May 27 17:14:10.793854 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
May 27 17:14:10.793862 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
May 27 17:14:10.793869 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 27 17:14:10.793876 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
May 27 17:14:10.793889 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
May 27 17:14:10.793897 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
May 27 17:14:10.793903 kernel: ACPI: Added _OSI(Module Device)
May 27 17:14:10.793910 kernel: ACPI: Added _OSI(Processor Device)
May 27 17:14:10.793917 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 27 17:14:10.793924 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 27 17:14:10.793931 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 27 17:14:10.793939 kernel: ACPI: Interpreter enabled
May 27 17:14:10.793946 kernel: ACPI: Using GIC for interrupt routing
May 27 17:14:10.793953 kernel: ACPI: MCFG table detected, 1 entries
May 27 17:14:10.793960 kernel: ACPI: CPU0 has been hot-added
May 27 17:14:10.793966 kernel: ACPI: CPU1 has been hot-added
May 27 17:14:10.793973 kernel: ACPI: CPU2 has been hot-added
May 27 17:14:10.793980 kernel: ACPI: CPU3 has been hot-added
May 27 17:14:10.793987 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
May 27 17:14:10.793994 kernel: printk: legacy console [ttyAMA0] enabled
May 27 17:14:10.794002 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 27 17:14:10.794150 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
May 27 17:14:10.794220 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
May 27 17:14:10.794281 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
May 27 17:14:10.794341 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
May 27 17:14:10.794399 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
May 27 17:14:10.794408 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
May 27 17:14:10.794418 kernel: PCI host bridge to bus 0000:00
May 27 17:14:10.794485 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
May 27 17:14:10.794543 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
May 27 17:14:10.794597 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
May 27 17:14:10.794650 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 27 17:14:10.794725 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
May 27 17:14:10.794800 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
May 27 17:14:10.794866 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
May 27 17:14:10.794938 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
May 27 17:14:10.795001 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
May 27 17:14:10.795080 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
May 27 17:14:10.795146 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
May 27 17:14:10.795207 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
May 27 17:14:10.795264 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
May 27 17:14:10.795317 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
May 27 17:14:10.795371 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
May 27 17:14:10.795379 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
May 27 17:14:10.795387 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
May 27 17:14:10.795394 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
May 27 17:14:10.795401 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
May 27 17:14:10.795408 kernel: iommu: Default domain type: Translated
May 27 17:14:10.795416 kernel: iommu: DMA domain TLB invalidation policy: strict mode
May 27 17:14:10.795423 kernel: efivars: Registered efivars operations
May 27 17:14:10.795430 kernel: vgaarb: loaded
May 27 17:14:10.795437 kernel: clocksource: Switched to clocksource arch_sys_counter
May 27 17:14:10.795444 kernel: VFS: Disk quotas dquot_6.6.0
May 27 17:14:10.795451 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 27 17:14:10.795458 kernel: pnp: PnP ACPI init
May 27 17:14:10.795523 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
May 27 17:14:10.795533 kernel: pnp: PnP ACPI: found 1 devices
May 27 17:14:10.795541 kernel: NET: Registered PF_INET protocol family
May 27 17:14:10.795548 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 27 17:14:10.795555 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
May 27 17:14:10.795562 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 27 17:14:10.795570 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 27 17:14:10.795577 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
May 27 17:14:10.795584 kernel: TCP: Hash tables configured (established 32768 bind 32768)
May 27 17:14:10.795591 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:14:10.795597 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
May 27 17:14:10.795606 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 27 17:14:10.795613 kernel: PCI: CLS 0 bytes, default 64
May 27 17:14:10.795619 kernel: kvm [1]: HYP mode not available
May 27 17:14:10.795626 kernel: Initialise system trusted keyrings
May 27 17:14:10.795633 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
May 27 17:14:10.795641 kernel: Key type asymmetric registered
May 27 17:14:10.795647 kernel: Asymmetric key parser 'x509' registered
May 27 17:14:10.795654 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 27 17:14:10.795661 kernel: io scheduler mq-deadline registered
May 27 17:14:10.795669 kernel: io scheduler kyber registered
May 27 17:14:10.795676 kernel: io scheduler bfq registered
May 27 17:14:10.795683 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
May 27 17:14:10.795690 kernel: ACPI: button: Power Button [PWRB]
May 27 17:14:10.795697 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
May 27 17:14:10.795757 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
May 27 17:14:10.795766 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 27 17:14:10.795773 kernel: thunder_xcv, ver 1.0
May 27 17:14:10.795780 kernel: thunder_bgx, ver 1.0
May 27 17:14:10.795789 kernel: nicpf, ver 1.0
May 27 17:14:10.795796 kernel: nicvf, ver 1.0
May 27 17:14:10.795862 kernel: rtc-efi rtc-efi.0: registered as rtc0
May 27 17:14:10.795930 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-05-27T17:14:10 UTC (1748366050)
May 27 17:14:10.795940 kernel: hid: raw HID events driver (C) Jiri Kosina
May 27 17:14:10.795947 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
May 27 17:14:10.795955 kernel: watchdog: NMI not fully supported
May 27 17:14:10.795962 kernel: watchdog: Hard watchdog permanently disabled
May 27 17:14:10.795971 kernel: NET: Registered PF_INET6 protocol family
May 27 17:14:10.795978 kernel: Segment Routing with IPv6
May 27 17:14:10.795984 kernel: In-situ OAM (IOAM) with IPv6
May 27 17:14:10.795991 kernel: NET: Registered PF_PACKET protocol family
May 27 17:14:10.795998 kernel: Key type dns_resolver registered
May 27 17:14:10.796005 kernel: registered taskstats version 1
May 27 17:14:10.796012 kernel: Loading compiled-in X.509 certificates
May 27 17:14:10.796019 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.30-flatcar: 8e5e45c34fa91568ef1fa3bdfd5a71a43b4c4580'
May 27 17:14:10.796026 kernel: Demotion targets for Node 0: null
May 27 17:14:10.796034 kernel: Key type .fscrypt registered
May 27 17:14:10.796042 kernel: Key type fscrypt-provisioning registered
May 27 17:14:10.796049 kernel: ima: No TPM chip found, activating TPM-bypass!
May 27 17:14:10.796064 kernel: ima: Allocated hash algorithm: sha1
May 27 17:14:10.796072 kernel: ima: No architecture policies found
May 27 17:14:10.796079 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
May 27 17:14:10.796086 kernel: clk: Disabling unused clocks
May 27 17:14:10.796093 kernel: PM: genpd: Disabling unused power domains
May 27 17:14:10.796100 kernel: Warning: unable to open an initial console.
May 27 17:14:10.796109 kernel: Freeing unused kernel memory: 39424K
May 27 17:14:10.796116 kernel: Run /init as init process
May 27 17:14:10.796123 kernel: with arguments:
May 27 17:14:10.796129 kernel: /init
May 27 17:14:10.796136 kernel: with environment:
May 27 17:14:10.796143 kernel: HOME=/
May 27 17:14:10.796150 kernel: TERM=linux
May 27 17:14:10.796156 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 27 17:14:10.796164 systemd[1]: Successfully made /usr/ read-only.
May 27 17:14:10.796175 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:14:10.796183 systemd[1]: Detected virtualization kvm.
May 27 17:14:10.796190 systemd[1]: Detected architecture arm64.
May 27 17:14:10.796197 systemd[1]: Running in initrd.
May 27 17:14:10.796205 systemd[1]: No hostname configured, using default hostname.
May 27 17:14:10.796213 systemd[1]: Hostname set to .
May 27 17:14:10.796220 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:14:10.796229 systemd[1]: Queued start job for default target initrd.target.
May 27 17:14:10.796236 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:14:10.796244 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:14:10.796252 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 27 17:14:10.796259 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:14:10.796267 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 27 17:14:10.796275 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 27 17:14:10.796285 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 27 17:14:10.796292 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 27 17:14:10.796300 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:14:10.796307 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:14:10.796315 systemd[1]: Reached target paths.target - Path Units.
May 27 17:14:10.796322 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:14:10.796330 systemd[1]: Reached target swap.target - Swaps.
May 27 17:14:10.796337 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:14:10.796346 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:14:10.796353 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:14:10.796361 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 27 17:14:10.796368 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
May 27 17:14:10.796376 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:14:10.796383 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:14:10.796391 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:14:10.796398 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:14:10.796407 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 27 17:14:10.796414 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:14:10.796422 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 27 17:14:10.796430 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
May 27 17:14:10.796437 systemd[1]: Starting systemd-fsck-usr.service...
May 27 17:14:10.796445 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:14:10.796452 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:14:10.796459 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:14:10.796467 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 27 17:14:10.796476 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:14:10.796484 systemd[1]: Finished systemd-fsck-usr.service.
May 27 17:14:10.796491 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:14:10.796514 systemd-journald[246]: Collecting audit messages is disabled.
May 27 17:14:10.796534 systemd-journald[246]: Journal started
May 27 17:14:10.796552 systemd-journald[246]: Runtime Journal (/run/log/journal/cdd1e788bdab41799ca3d55e1d6226c2) is 6M, max 48.5M, 42.4M free.
May 27 17:14:10.786307 systemd-modules-load[247]: Inserted module 'overlay'
May 27 17:14:10.802555 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:14:10.804196 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:14:10.808328 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 27 17:14:10.806612 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 27 17:14:10.812121 kernel: Bridge firewalling registered
May 27 17:14:10.808406 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:14:10.809439 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:14:10.811501 systemd-modules-load[247]: Inserted module 'br_netfilter'
May 27 17:14:10.813432 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:14:10.820265 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:14:10.821817 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:14:10.825585 systemd-tmpfiles[266]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
May 27 17:14:10.828657 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:14:10.833313 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:14:10.835807 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:14:10.836878 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:14:10.852175 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:14:10.855180 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 27 17:14:10.879872 systemd-resolved[287]: Positive Trust Anchors:
May 27 17:14:10.879896 systemd-resolved[287]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:14:10.879927 systemd-resolved[287]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:14:10.884871 systemd-resolved[287]: Defaulting to hostname 'linux'.
May 27 17:14:10.890629 dracut-cmdline[292]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=4e706b869299e1c88703222069cdfa08c45ebce568f762053eea5b3f5f0939c3
May 27 17:14:10.885802 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:14:10.889898 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:14:10.960089 kernel: SCSI subsystem initialized
May 27 17:14:10.965073 kernel: Loading iSCSI transport class v2.0-870.
May 27 17:14:10.972086 kernel: iscsi: registered transport (tcp)
May 27 17:14:10.984142 kernel: iscsi: registered transport (qla4xxx)
May 27 17:14:10.984157 kernel: QLogic iSCSI HBA Driver
May 27 17:14:11.000111 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:14:11.020117 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:14:11.022530 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:14:11.067927 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 27 17:14:11.069603 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 27 17:14:11.131142 kernel: raid6: neonx8 gen() 15802 MB/s
May 27 17:14:11.148082 kernel: raid6: neonx4 gen() 15817 MB/s
May 27 17:14:11.165097 kernel: raid6: neonx2 gen() 13201 MB/s
May 27 17:14:11.182079 kernel: raid6: neonx1 gen() 10491 MB/s
May 27 17:14:11.199088 kernel: raid6: int64x8 gen() 6899 MB/s
May 27 17:14:11.216094 kernel: raid6: int64x4 gen() 7347 MB/s
May 27 17:14:11.233096 kernel: raid6: int64x2 gen() 6099 MB/s
May 27 17:14:11.250074 kernel: raid6: int64x1 gen() 5053 MB/s
May 27 17:14:11.250088 kernel: raid6: using algorithm neonx4 gen() 15817 MB/s
May 27 17:14:11.267081 kernel: raid6: .... xor() 12394 MB/s, rmw enabled
May 27 17:14:11.267095 kernel: raid6: using neon recovery algorithm
May 27 17:14:11.272078 kernel: xor: measuring software checksum speed
May 27 17:14:11.272113 kernel: 8regs : 21596 MB/sec
May 27 17:14:11.272134 kernel: 32regs : 20277 MB/sec
May 27 17:14:11.273398 kernel: arm64_neon : 28109 MB/sec
May 27 17:14:11.273411 kernel: xor: using function: arm64_neon (28109 MB/sec)
May 27 17:14:11.327083 kernel: Btrfs loaded, zoned=no, fsverity=no
May 27 17:14:11.335157 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:14:11.337766 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:14:11.365736 systemd-udevd[500]: Using default interface naming scheme 'v255'.
May 27 17:14:11.369862 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:14:11.372153 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 27 17:14:11.398547 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation
May 27 17:14:11.423398 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:14:11.425783 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:14:11.481081 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:14:11.484409 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 27 17:14:11.525092 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues
May 27 17:14:11.527915 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
May 27 17:14:11.532471 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 27 17:14:11.532504 kernel: GPT:9289727 != 19775487
May 27 17:14:11.532514 kernel: GPT:Alternate GPT header not at the end of the disk.
May 27 17:14:11.533244 kernel: GPT:9289727 != 19775487
May 27 17:14:11.533261 kernel: GPT: Use GNU Parted to correct GPT errors.
May 27 17:14:11.534076 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:14:11.538241 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:14:11.538364 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:14:11.541362 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:14:11.543467 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:14:11.563428 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 27 17:14:11.573966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:14:11.582769 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 27 17:14:11.584030 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 27 17:14:11.595701 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 27 17:14:11.596669 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 27 17:14:11.604979 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:14:11.605907 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:14:11.607742 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:14:11.609750 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:14:11.612328 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 27 17:14:11.613944 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 27 17:14:11.630005 disk-uuid[595]: Primary Header is updated.
May 27 17:14:11.630005 disk-uuid[595]: Secondary Entries is updated.
May 27 17:14:11.630005 disk-uuid[595]: Secondary Header is updated.
May 27 17:14:11.635088 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:14:11.636341 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:14:12.648094 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 27 17:14:12.648427 disk-uuid[598]: The operation has completed successfully.
May 27 17:14:12.684148 systemd[1]: disk-uuid.service: Deactivated successfully.
May 27 17:14:12.684273 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 27 17:14:12.699083 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 27 17:14:12.732040 sh[615]: Success
May 27 17:14:12.747238 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 27 17:14:12.747279 kernel: device-mapper: uevent: version 1.0.3
May 27 17:14:12.750092 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
May 27 17:14:12.760074 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
May 27 17:14:12.789360 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 27 17:14:12.808582 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 27 17:14:12.810851 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 27 17:14:12.820434 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
May 27 17:14:12.820490 kernel: BTRFS: device fsid 3c8c76ef-f1da-40fe-979d-11bdf765e403 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (627)
May 27 17:14:12.822719 kernel: BTRFS info (device dm-0): first mount of filesystem 3c8c76ef-f1da-40fe-979d-11bdf765e403
May 27 17:14:12.823095 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
May 27 17:14:12.823115 kernel: BTRFS info (device dm-0): using free-space-tree
May 27 17:14:12.826128 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 27 17:14:12.827388 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:14:12.828796 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 27 17:14:12.829585 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 27 17:14:12.831125 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 27 17:14:12.853083 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (657)
May 27 17:14:12.855164 kernel: BTRFS info (device vda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:14:12.855200 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:14:12.855212 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:14:12.861076 kernel: BTRFS info (device vda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:14:12.862122 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 27 17:14:12.864282 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 27 17:14:12.932347 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:14:12.934833 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:14:12.972591 systemd-networkd[803]: lo: Link UP
May 27 17:14:12.972601 systemd-networkd[803]: lo: Gained carrier
May 27 17:14:12.973321 systemd-networkd[803]: Enumeration completed
May 27 17:14:12.973701 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:14:12.973704 systemd-networkd[803]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:14:12.974504 systemd-networkd[803]: eth0: Link UP
May 27 17:14:12.974507 systemd-networkd[803]: eth0: Gained carrier
May 27 17:14:12.974515 systemd-networkd[803]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:14:12.975589 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:14:12.976721 systemd[1]: Reached target network.target - Network.
May 27 17:14:12.998837 ignition[701]: Ignition 2.21.0
May 27 17:14:12.998854 ignition[701]: Stage: fetch-offline
May 27 17:14:12.998898 ignition[701]: no configs at "/usr/lib/ignition/base.d"
May 27 17:14:13.000101 systemd-networkd[803]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 17:14:12.998906 ignition[701]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:14:12.999102 ignition[701]: parsed url from cmdline: ""
May 27 17:14:12.999105 ignition[701]: no config URL provided
May 27 17:14:12.999110 ignition[701]: reading system config file "/usr/lib/ignition/user.ign"
May 27 17:14:12.999116 ignition[701]: no config at "/usr/lib/ignition/user.ign"
May 27 17:14:12.999136 ignition[701]: op(1): [started] loading QEMU firmware config module
May 27 17:14:12.999140 ignition[701]: op(1): executing: "modprobe" "qemu_fw_cfg"
May 27 17:14:13.008646 ignition[701]: op(1): [finished] loading QEMU firmware config module
May 27 17:14:13.045382 ignition[701]: parsing config with SHA512: 23432231af465aa1831d511bdbb89eab99ffcc5d5e512d6182e9dd5ddbb23d40d4aa17328f62b59f4fc062554412b52840380ce47f4338d0a0dc86d34977e6cb
May 27 17:14:13.051595 unknown[701]: fetched base config from "system"
May 27 17:14:13.051610 unknown[701]: fetched user config from "qemu"
May 27 17:14:13.052006 ignition[701]: fetch-offline: fetch-offline passed
May 27 17:14:13.053861 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:14:13.052080 ignition[701]: Ignition finished successfully
May 27 17:14:13.055402 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
May 27 17:14:13.058158 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 27 17:14:13.077564 ignition[816]: Ignition 2.21.0
May 27 17:14:13.077582 ignition[816]: Stage: kargs
May 27 17:14:13.077722 ignition[816]: no configs at "/usr/lib/ignition/base.d"
May 27 17:14:13.077731 ignition[816]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:14:13.079258 ignition[816]: kargs: kargs passed
May 27 17:14:13.079314 ignition[816]: Ignition finished successfully
May 27 17:14:13.082498 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 27 17:14:13.084692 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 27 17:14:13.114192 ignition[824]: Ignition 2.21.0
May 27 17:14:13.114208 ignition[824]: Stage: disks
May 27 17:14:13.114347 ignition[824]: no configs at "/usr/lib/ignition/base.d"
May 27 17:14:13.114355 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:14:13.115691 ignition[824]: disks: disks passed
May 27 17:14:13.117698 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 27 17:14:13.115756 ignition[824]: Ignition finished successfully
May 27 17:14:13.119423 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 27 17:14:13.120721 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 27 17:14:13.122392 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:14:13.123757 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:14:13.125324 systemd[1]: Reached target basic.target - Basic System.
May 27 17:14:13.127740 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 27 17:14:13.165540 systemd-fsck[834]: ROOT: clean, 15/553520 files, 52789/553472 blocks
May 27 17:14:13.169740 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 27 17:14:13.175430 systemd[1]: Mounting sysroot.mount - /sysroot...
May 27 17:14:13.250708 systemd[1]: Mounted sysroot.mount - /sysroot.
May 27 17:14:13.251814 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 27 17:14:13.254030 kernel: EXT4-fs (vda9): mounted filesystem a5483afc-8426-4c3e-85ef-8146f9077e7d r/w with ordered data mode. Quota mode: none.
May 27 17:14:13.255750 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:14:13.257342 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 27 17:14:13.258261 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
May 27 17:14:13.258300 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 27 17:14:13.258330 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:14:13.277684 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 27 17:14:13.280323 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 27 17:14:13.282900 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (842)
May 27 17:14:13.284085 kernel: BTRFS info (device vda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:14:13.284113 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:14:13.284124 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:14:13.287186 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:14:13.320634 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory
May 27 17:14:13.324445 initrd-setup-root[873]: cut: /sysroot/etc/group: No such file or directory
May 27 17:14:13.327581 initrd-setup-root[880]: cut: /sysroot/etc/shadow: No such file or directory
May 27 17:14:13.330998 initrd-setup-root[887]: cut: /sysroot/etc/gshadow: No such file or directory
May 27 17:14:13.399123 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 27 17:14:13.401120 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 27 17:14:13.402618 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 27 17:14:13.422092 kernel: BTRFS info (device vda6): last unmount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:14:13.434199 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 27 17:14:13.445515 ignition[956]: INFO : Ignition 2.21.0
May 27 17:14:13.445515 ignition[956]: INFO : Stage: mount
May 27 17:14:13.447067 ignition[956]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:14:13.447067 ignition[956]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:14:13.450092 ignition[956]: INFO : mount: mount passed
May 27 17:14:13.450092 ignition[956]: INFO : Ignition finished successfully
May 27 17:14:13.449416 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 27 17:14:13.452151 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 27 17:14:13.819982 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 27 17:14:13.821519 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 27 17:14:13.851494 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (968)
May 27 17:14:13.851528 kernel: BTRFS info (device vda6): first mount of filesystem 0631e8fb-ef71-4ba1-b2b8-88386996a754
May 27 17:14:13.851538 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
May 27 17:14:13.853097 kernel: BTRFS info (device vda6): using free-space-tree
May 27 17:14:13.854932 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 27 17:14:13.884211 ignition[985]: INFO : Ignition 2.21.0
May 27 17:14:13.884211 ignition[985]: INFO : Stage: files
May 27 17:14:13.886322 ignition[985]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:14:13.886322 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:14:13.888507 ignition[985]: DEBUG : files: compiled without relabeling support, skipping
May 27 17:14:13.889685 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 27 17:14:13.889685 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 27 17:14:13.892571 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 27 17:14:13.893876 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 27 17:14:13.893876 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 27 17:14:13.893097 unknown[985]: wrote ssh authorized keys file for user: core
May 27 17:14:13.897732 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
May 27 17:14:13.897732 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
May 27 17:14:14.089990 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
May 27 17:14:14.327165 systemd-networkd[803]: eth0: Gained IPv6LL
May 27 17:14:14.496742 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
May 27 17:14:14.498756 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 17:14:14.498756 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
May 27 17:14:14.831661 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
May 27 17:14:14.991770 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
May 27 17:14:14.993835 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
May 27 17:14:14.993835 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
May 27 17:14:14.993835 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:14:14.993835 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
May 27 17:14:14.993835 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:14:14.993835 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
May 27 17:14:14.993835 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:14:14.993835 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
May 27 17:14:15.006965 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:14:15.006965 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 27 17:14:15.006965 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 17:14:15.006965 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 17:14:15.006965 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 17:14:15.006965 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
May 27 17:14:15.345857 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
May 27 17:14:15.637390 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
May 27 17:14:15.637390 ignition[985]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
May 27 17:14:15.640994 ignition[985]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:14:15.640994 ignition[985]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
May 27 17:14:15.640994 ignition[985]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
May 27 17:14:15.640994 ignition[985]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
May 27 17:14:15.640994 ignition[985]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 17:14:15.640994 ignition[985]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
May 27 17:14:15.640994 ignition[985]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
May 27 17:14:15.640994 ignition[985]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
May 27 17:14:15.666406 ignition[985]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
May 27 17:14:15.670036 ignition[985]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
May 27 17:14:15.672766 ignition[985]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
May 27 17:14:15.672766 ignition[985]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
May 27 17:14:15.672766 ignition[985]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
May 27 17:14:15.672766 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:14:15.672766 ignition[985]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 27 17:14:15.672766 ignition[985]: INFO : files: files passed
May 27 17:14:15.672766 ignition[985]: INFO : Ignition finished successfully
May 27 17:14:15.673539 systemd[1]: Finished ignition-files.service - Ignition (files).
May 27 17:14:15.676846 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 27 17:14:15.678871 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 27 17:14:15.689899 systemd[1]: ignition-quench.service: Deactivated successfully.
May 27 17:14:15.690092 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 27 17:14:15.693323 initrd-setup-root-after-ignition[1014]: grep: /sysroot/oem/oem-release: No such file or directory
May 27 17:14:15.695532 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:14:15.695532 initrd-setup-root-after-ignition[1016]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:14:15.698536 initrd-setup-root-after-ignition[1020]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 27 17:14:15.697767 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:14:15.699509 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 27 17:14:15.703188 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 27 17:14:15.766104 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 27 17:14:15.766930 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 27 17:14:15.768107 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 27 17:14:15.768830 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 27 17:14:15.770356 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 27 17:14:15.771204 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 27 17:14:15.796455 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:14:15.798694 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 27 17:14:15.823214 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 27 17:14:15.824428 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:14:15.826401 systemd[1]: Stopped target timers.target - Timer Units.
May 27 17:14:15.828124 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 27 17:14:15.828258 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 27 17:14:15.830686 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 27 17:14:15.832688 systemd[1]: Stopped target basic.target - Basic System.
May 27 17:14:15.834278 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 27 17:14:15.835925 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 27 17:14:15.837812 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 27 17:14:15.839710 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
May 27 17:14:15.841541 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 27 17:14:15.843298 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 27 17:14:15.845178 systemd[1]: Stopped target sysinit.target - System Initialization.
May 27 17:14:15.847102 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 27 17:14:15.848826 systemd[1]: Stopped target swap.target - Swaps.
May 27 17:14:15.850377 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 27 17:14:15.850502 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 27 17:14:15.852718 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 27 17:14:15.854615 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:14:15.856458 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 27 17:14:15.856560 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:14:15.858450 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 27 17:14:15.858557 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 27 17:14:15.861183 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 27 17:14:15.861285 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 27 17:14:15.863241 systemd[1]: Stopped target paths.target - Path Units.
May 27 17:14:15.864783 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 27 17:14:15.868127 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:14:15.869729 systemd[1]: Stopped target slices.target - Slice Units.
May 27 17:14:15.871768 systemd[1]: Stopped target sockets.target - Socket Units.
May 27 17:14:15.873271 systemd[1]: iscsid.socket: Deactivated successfully.
May 27 17:14:15.873350 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 27 17:14:15.874849 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 27 17:14:15.874930 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 27 17:14:15.876421 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 27 17:14:15.876527 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 27 17:14:15.878284 systemd[1]: ignition-files.service: Deactivated successfully.
May 27 17:14:15.878386 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 27 17:14:15.880652 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 27 17:14:15.883089 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 27 17:14:15.884068 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 27 17:14:15.884171 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:14:15.885702 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 27 17:14:15.885791 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 27 17:14:15.890511 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 27 17:14:15.896216 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 27 17:14:15.904037 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 27 17:14:15.914797 ignition[1040]: INFO : Ignition 2.21.0
May 27 17:14:15.914797 ignition[1040]: INFO : Stage: umount
May 27 17:14:15.916767 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d"
May 27 17:14:15.916767 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
May 27 17:14:15.916767 ignition[1040]: INFO : umount: umount passed
May 27 17:14:15.916767 ignition[1040]: INFO : Ignition finished successfully
May 27 17:14:15.917984 systemd[1]: ignition-mount.service: Deactivated successfully.
May 27 17:14:15.920961 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 27 17:14:15.922567 systemd[1]: Stopped target network.target - Network.
May 27 17:14:15.925184 systemd[1]: ignition-disks.service: Deactivated successfully.
May 27 17:14:15.925256 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 27 17:14:15.926944 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 27 17:14:15.926990 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 27 17:14:15.928527 systemd[1]: ignition-setup.service: Deactivated successfully.
May 27 17:14:15.928572 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 27 17:14:15.930155 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 27 17:14:15.930194 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 27 17:14:15.932003 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 27 17:14:15.933493 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 27 17:14:15.941896 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 27 17:14:15.942008 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 27 17:14:15.946137 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
May 27 17:14:15.946319 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 27 17:14:15.946407 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 27 17:14:15.949276 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
May 27 17:14:15.949757 systemd[1]: Stopped target network-pre.target - Preparation for Network.
May 27 17:14:15.953249 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 27 17:14:15.954219 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:14:15.956124 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 27 17:14:15.957370 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 27 17:14:15.957433 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 27 17:14:15.959821 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:14:15.959880 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:14:15.964122 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 27 17:14:15.964171 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 27 17:14:15.965769 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 27 17:14:15.965816 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:14:15.968879 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:14:15.972998 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
May 27 17:14:15.973085 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
May 27 17:14:15.979625 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 27 17:14:15.989212 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 27 17:14:15.990459 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 27 17:14:15.990574 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:14:15.992321 systemd[1]: network-cleanup.service: Deactivated successfully.
May 27 17:14:15.992397 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 27 17:14:15.994828 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 27 17:14:15.994901 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 27 17:14:15.995917 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 27 17:14:15.995952 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:14:15.997415 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 27 17:14:15.997459 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 27 17:14:15.999784 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 27 17:14:15.999832 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 27 17:14:16.002496 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 27 17:14:16.002547 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 27 17:14:16.004316 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 27 17:14:16.004366 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 27 17:14:16.006629 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 27 17:14:16.007873 systemd[1]: systemd-network-generator.service: Deactivated successfully.
May 27 17:14:16.007923 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:14:16.010305 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 27 17:14:16.010346 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:14:16.013232 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
May 27 17:14:16.013276 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:14:16.016486 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 27 17:14:16.016530 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:14:16.018649 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 27 17:14:16.018701 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:14:16.026044 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
May 27 17:14:16.026111 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
May 27 17:14:16.026139 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
May 27 17:14:16.026166 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
May 27 17:14:16.034531 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 27 17:14:16.034644 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 27 17:14:16.037379 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 27 17:14:16.040162 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 27 17:14:16.057785 systemd[1]: Switching root.
May 27 17:14:16.095860 systemd-journald[246]: Journal stopped
May 27 17:14:16.836751 systemd-journald[246]: Received SIGTERM from PID 1 (systemd).
May 27 17:14:16.836799 kernel: SELinux: policy capability network_peer_controls=1
May 27 17:14:16.836811 kernel: SELinux: policy capability open_perms=1
May 27 17:14:16.836821 kernel: SELinux: policy capability extended_socket_class=1
May 27 17:14:16.836830 kernel: SELinux: policy capability always_check_network=0
May 27 17:14:16.836841 kernel: SELinux: policy capability cgroup_seclabel=1
May 27 17:14:16.836850 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 27 17:14:16.836866 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 27 17:14:16.836880 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 27 17:14:16.836890 kernel: SELinux: policy capability userspace_initial_context=0
May 27 17:14:16.836899 kernel: audit: type=1403 audit(1748366056.266:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 27 17:14:16.836914 systemd[1]: Successfully loaded SELinux policy in 38.423ms.
May 27 17:14:16.836937 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.084ms.
May 27 17:14:16.836948 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
May 27 17:14:16.836964 systemd[1]: Detected virtualization kvm.
May 27 17:14:16.836974 systemd[1]: Detected architecture arm64.
May 27 17:14:16.836985 systemd[1]: Detected first boot.
May 27 17:14:16.836994 systemd[1]: Initializing machine ID from VM UUID.
May 27 17:14:16.837007 zram_generator::config[1087]: No configuration found.
May 27 17:14:16.837018 kernel: NET: Registered PF_VSOCK protocol family
May 27 17:14:16.837027 systemd[1]: Populated /etc with preset unit settings.
May 27 17:14:16.837037 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
May 27 17:14:16.837047 systemd[1]: initrd-switch-root.service: Deactivated successfully.
May 27 17:14:16.837068 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
May 27 17:14:16.837081 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
May 27 17:14:16.837092 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 27 17:14:16.837102 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 27 17:14:16.837111 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 27 17:14:16.837121 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 27 17:14:16.837131 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 27 17:14:16.837141 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 27 17:14:16.837151 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 27 17:14:16.837165 systemd[1]: Created slice user.slice - User and Session Slice.
May 27 17:14:16.837177 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 27 17:14:16.837188 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 27 17:14:16.837198 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 27 17:14:16.837212 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 27 17:14:16.837223 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 27 17:14:16.837233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 27 17:14:16.837243 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
May 27 17:14:16.837253 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 27 17:14:16.837264 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 27 17:14:16.837274 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
May 27 17:14:16.837284 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
May 27 17:14:16.837294 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
May 27 17:14:16.837304 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 27 17:14:16.837314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 27 17:14:16.837324 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 27 17:14:16.837334 systemd[1]: Reached target slices.target - Slice Units.
May 27 17:14:16.837344 systemd[1]: Reached target swap.target - Swaps.
May 27 17:14:16.837355 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 27 17:14:16.837366 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 27 17:14:16.837376 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
May 27 17:14:16.837389 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 27 17:14:16.837399 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 27 17:14:16.837409 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 27 17:14:16.837419 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 27 17:14:16.837429 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 27 17:14:16.837439 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 27 17:14:16.837450 systemd[1]: Mounting media.mount - External Media Directory...
May 27 17:14:16.837461 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 27 17:14:16.837471 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 27 17:14:16.837481 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 27 17:14:16.837493 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 27 17:14:16.837503 systemd[1]: Reached target machines.target - Containers.
May 27 17:14:16.837513 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 27 17:14:16.837523 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:14:16.837534 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 27 17:14:16.837545 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 27 17:14:16.837555 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:14:16.837565 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:14:16.837574 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:14:16.837584 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 27 17:14:16.837594 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:14:16.837604 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 27 17:14:16.837614 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
May 27 17:14:16.837625 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
May 27 17:14:16.837635 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
May 27 17:14:16.837644 systemd[1]: Stopped systemd-fsck-usr.service.
May 27 17:14:16.837655 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:14:16.837665 systemd[1]: Starting systemd-journald.service - Journal Service...
May 27 17:14:16.837677 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 27 17:14:16.837687 kernel: loop: module loaded
May 27 17:14:16.837697 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 27 17:14:16.837706 kernel: fuse: init (API version 7.41)
May 27 17:14:16.837718 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 27 17:14:16.837727 kernel: ACPI: bus type drm_connector registered
May 27 17:14:16.837737 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
May 27 17:14:16.837747 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 27 17:14:16.837758 systemd[1]: verity-setup.service: Deactivated successfully.
May 27 17:14:16.837768 systemd[1]: Stopped verity-setup.service.
May 27 17:14:16.837798 systemd-journald[1159]: Collecting audit messages is disabled.
May 27 17:14:16.837818 systemd-journald[1159]: Journal started
May 27 17:14:16.837839 systemd-journald[1159]: Runtime Journal (/run/log/journal/cdd1e788bdab41799ca3d55e1d6226c2) is 6M, max 48.5M, 42.4M free.
May 27 17:14:16.636660 systemd[1]: Queued start job for default target multi-user.target.
May 27 17:14:16.662077 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 27 17:14:16.662472 systemd[1]: systemd-journald.service: Deactivated successfully.
May 27 17:14:16.840251 systemd[1]: Started systemd-journald.service - Journal Service.
May 27 17:14:16.840857 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 27 17:14:16.841989 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 27 17:14:16.843244 systemd[1]: Mounted media.mount - External Media Directory.
May 27 17:14:16.845589 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 27 17:14:16.847070 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 27 17:14:16.848473 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 27 17:14:16.849692 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 27 17:14:16.851199 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 27 17:14:16.851360 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 27 17:14:16.852821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:14:16.852982 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:14:16.856434 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:14:16.856587 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:14:16.857986 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:14:16.858186 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:14:16.859601 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 27 17:14:16.860971 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 27 17:14:16.861168 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 27 17:14:16.862389 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:14:16.862544 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:14:16.865086 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 27 17:14:16.866467 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 27 17:14:16.868021 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 27 17:14:16.869607 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
May 27 17:14:16.881791 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 27 17:14:16.884198 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 27 17:14:16.886143 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 27 17:14:16.887209 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 27 17:14:16.887237 systemd[1]: Reached target local-fs.target - Local File Systems.
May 27 17:14:16.888999 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
May 27 17:14:16.895800 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 27 17:14:16.896926 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:14:16.897880 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 27 17:14:16.899756 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 27 17:14:16.901002 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:14:16.904343 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 27 17:14:16.905169 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:14:16.913938 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:14:16.915765 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 27 17:14:16.917896 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 27 17:14:16.919285 systemd-journald[1159]: Time spent on flushing to /var/log/journal/cdd1e788bdab41799ca3d55e1d6226c2 is 19.999ms for 889 entries.
May 27 17:14:16.919285 systemd-journald[1159]: System Journal (/var/log/journal/cdd1e788bdab41799ca3d55e1d6226c2) is 8M, max 195.6M, 187.6M free.
May 27 17:14:16.953213 systemd-journald[1159]: Received client request to flush runtime journal.
May 27 17:14:16.953260 kernel: loop0: detected capacity change from 0 to 138376
May 27 17:14:16.924095 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 27 17:14:16.925487 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 27 17:14:16.926718 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 27 17:14:16.928174 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 27 17:14:16.934596 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 27 17:14:16.941443 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
May 27 17:14:16.949192 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:14:16.955959 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
May 27 17:14:16.956244 systemd-tmpfiles[1206]: ACLs are not supported, ignoring.
May 27 17:14:16.956654 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 27 17:14:16.964447 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 27 17:14:16.967246 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 27 17:14:16.968400 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 27 17:14:16.976198 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
May 27 17:14:16.990246 kernel: loop1: detected capacity change from 0 to 107312
May 27 17:14:16.997736 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 27 17:14:17.000527 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 27 17:14:17.018112 kernel: loop2: detected capacity change from 0 to 211168
May 27 17:14:17.025296 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
May 27 17:14:17.025314 systemd-tmpfiles[1226]: ACLs are not supported, ignoring.
May 27 17:14:17.029156 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 27 17:14:17.041081 kernel: loop3: detected capacity change from 0 to 138376
May 27 17:14:17.049078 kernel: loop4: detected capacity change from 0 to 107312
May 27 17:14:17.055076 kernel: loop5: detected capacity change from 0 to 211168
May 27 17:14:17.062260 (sd-merge)[1230]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
May 27 17:14:17.062642 (sd-merge)[1230]: Merged extensions into '/usr'.
May 27 17:14:17.066198 systemd[1]: Reload requested from client PID 1204 ('systemd-sysext') (unit systemd-sysext.service)...
May 27 17:14:17.066217 systemd[1]: Reloading...
May 27 17:14:17.116277 zram_generator::config[1253]: No configuration found.
May 27 17:14:17.183035 ldconfig[1199]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 27 17:14:17.201474 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:14:17.263427 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 27 17:14:17.263696 systemd[1]: Reloading finished in 197 ms.
May 27 17:14:17.292121 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 27 17:14:17.293324 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 27 17:14:17.308336 systemd[1]: Starting ensure-sysext.service...
May 27 17:14:17.310039 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 27 17:14:17.322678 systemd[1]: Reload requested from client PID 1291 ('systemctl') (unit ensure-sysext.service)...
May 27 17:14:17.322692 systemd[1]: Reloading...
May 27 17:14:17.328038 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
May 27 17:14:17.328336 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
May 27 17:14:17.328550 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 27 17:14:17.328722 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 27 17:14:17.329332 systemd-tmpfiles[1292]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 27 17:14:17.329521 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
May 27 17:14:17.329562 systemd-tmpfiles[1292]: ACLs are not supported, ignoring.
May 27 17:14:17.332510 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:14:17.332598 systemd-tmpfiles[1292]: Skipping /boot
May 27 17:14:17.341240 systemd-tmpfiles[1292]: Detected autofs mount point /boot during canonicalization of boot.
May 27 17:14:17.341334 systemd-tmpfiles[1292]: Skipping /boot
May 27 17:14:17.368086 zram_generator::config[1319]: No configuration found.
May 27 17:14:17.433863 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:14:17.495232 systemd[1]: Reloading finished in 172 ms.
May 27 17:14:17.520523 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 27 17:14:17.532951 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 27 17:14:17.540043 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:14:17.541967 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 27 17:14:17.552438 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 27 17:14:17.555566 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 27 17:14:17.561977 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 27 17:14:17.568790 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 27 17:14:17.572481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:14:17.588610 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:14:17.590672 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:14:17.592831 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:14:17.593995 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:14:17.594124 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:14:17.595520 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 27 17:14:17.599266 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 27 17:14:17.600708 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 27 17:14:17.602205 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:14:17.602354 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:14:17.603744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:14:17.603888 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:14:17.606761 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:14:17.606908 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:14:17.613168 systemd-udevd[1360]: Using default interface naming scheme 'v255'.
May 27 17:14:17.617594 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:14:17.618900 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 27 17:14:17.623731 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:14:17.625645 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:14:17.626595 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:14:17.626706 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:14:17.628896 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 27 17:14:17.631127 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:14:17.633453 augenrules[1393]: No rules
May 27 17:14:17.634266 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 27 17:14:17.635743 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 27 17:14:17.638627 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:14:17.638810 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:14:17.640025 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:14:17.641323 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:14:17.642733 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:14:17.642891 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:14:17.648878 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 27 17:14:17.651551 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 27 17:14:17.651721 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 27 17:14:17.653909 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 27 17:14:17.670091 systemd[1]: Finished ensure-sysext.service.
May 27 17:14:17.682638 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:14:17.684051 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 27 17:14:17.686200 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 27 17:14:17.688439 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 27 17:14:17.694241 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 27 17:14:17.697320 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 27 17:14:17.697364 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
May 27 17:14:17.699375 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 27 17:14:17.713218 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 27 17:14:17.714355 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 27 17:14:17.716536 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 27 17:14:17.716931 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 27 17:14:17.720374 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 27 17:14:17.720544 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 27 17:14:17.721924 augenrules[1437]: /sbin/augenrules: No change
May 27 17:14:17.722451 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 27 17:14:17.722602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 27 17:14:17.731104 augenrules[1463]: No rules
May 27 17:14:17.731997 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:14:17.735307 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:14:17.736902 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
May 27 17:14:17.749206 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 27 17:14:17.749267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 27 17:14:17.777336 systemd-resolved[1358]: Positive Trust Anchors:
May 27 17:14:17.779380 systemd-resolved[1358]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 27 17:14:17.779490 systemd-resolved[1358]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 27 17:14:17.787100 systemd-resolved[1358]: Defaulting to hostname 'linux'.
May 27 17:14:17.789538 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 27 17:14:17.791263 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 27 17:14:17.792325 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 27 17:14:17.796370 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 27 17:14:17.815296 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 27 17:14:17.816681 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 27 17:14:17.819548 systemd[1]: Reached target sysinit.target - System Initialization.
May 27 17:14:17.820761 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 27 17:14:17.821571 systemd-networkd[1441]: lo: Link UP
May 27 17:14:17.821583 systemd-networkd[1441]: lo: Gained carrier
May 27 17:14:17.822076 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 27 17:14:17.822378 systemd-networkd[1441]: Enumeration completed
May 27 17:14:17.822781 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:14:17.822790 systemd-networkd[1441]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 27 17:14:17.823248 systemd-networkd[1441]: eth0: Link UP
May 27 17:14:17.823298 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 27 17:14:17.823361 systemd-networkd[1441]: eth0: Gained carrier
May 27 17:14:17.823379 systemd-networkd[1441]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 27 17:14:17.824596 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 27 17:14:17.824625 systemd[1]: Reached target paths.target - Path Units.
May 27 17:14:17.825549 systemd[1]: Reached target time-set.target - System Time Set.
May 27 17:14:17.826686 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 27 17:14:17.828193 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 27 17:14:17.829351 systemd[1]: Reached target timers.target - Timer Units.
May 27 17:14:17.830878 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 27 17:14:17.833134 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 27 17:14:17.835937 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
May 27 17:14:17.837313 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
May 27 17:14:17.837666 systemd-networkd[1441]: eth0: DHCPv4 address 10.0.0.109/16, gateway 10.0.0.1 acquired from 10.0.0.1
May 27 17:14:17.838184 systemd-timesyncd[1449]: Network configuration changed, trying to establish connection.
May 27 17:14:17.838228 systemd[1]: Reached target ssh-access.target - SSH Access Available.
May 27 17:14:17.840527 systemd-timesyncd[1449]: Contacted time server 10.0.0.1:123 (10.0.0.1).
May 27 17:14:17.840573 systemd-timesyncd[1449]: Initial clock synchronization to Tue 2025-05-27 17:14:18.201100 UTC.
May 27 17:14:17.842651 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 27 17:14:17.843758 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
May 27 17:14:17.845099 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 27 17:14:17.846090 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 27 17:14:17.846960 systemd[1]: Reached target network.target - Network.
May 27 17:14:17.847668 systemd[1]: Reached target sockets.target - Socket Units.
May 27 17:14:17.848346 systemd[1]: Reached target basic.target - Basic System.
May 27 17:14:17.849071 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 27 17:14:17.849098 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 27 17:14:17.850232 systemd[1]: Starting containerd.service - containerd container runtime...
May 27 17:14:17.853815 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 27 17:14:17.858235 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 27 17:14:17.861723 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 27 17:14:17.863300 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 27 17:14:17.864922 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 27 17:14:17.866268 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 27 17:14:17.881253 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
May 27 17:14:17.885236 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 27 17:14:17.887139 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 27 17:14:17.892091 systemd[1]: Starting systemd-logind.service - User Login Management...
May 27 17:14:17.894677 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
May 27 17:14:17.896684 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 27 17:14:17.897440 jq[1499]: false
May 27 17:14:17.898631 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 27 17:14:17.899009 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 27 17:14:17.903956 systemd[1]: Starting update-engine.service - Update Engine...
May 27 17:14:17.907493 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 27 17:14:17.911391 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 27 17:14:17.914429 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 27 17:14:17.914600 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 27 17:14:17.916378 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 27 17:14:17.916555 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 27 17:14:17.917583 extend-filesystems[1500]: Found loop3
May 27 17:14:17.918304 extend-filesystems[1500]: Found loop4
May 27 17:14:17.918304 extend-filesystems[1500]: Found loop5
May 27 17:14:17.918304 extend-filesystems[1500]: Found vda
May 27 17:14:17.918304 extend-filesystems[1500]: Found vda1
May 27 17:14:17.918304 extend-filesystems[1500]: Found vda2
May 27 17:14:17.918304 extend-filesystems[1500]: Found vda3
May 27 17:14:17.918304 extend-filesystems[1500]: Found usr
May 27 17:14:17.918304 extend-filesystems[1500]: Found vda4
May 27 17:14:17.918304 extend-filesystems[1500]: Found vda6
May 27 17:14:17.918304 extend-filesystems[1500]: Found vda7
May 27 17:14:17.918304 extend-filesystems[1500]: Found vda9
May 27 17:14:17.918304 extend-filesystems[1500]: Checking size of /dev/vda9
May 27 17:14:17.933914 systemd[1]: motdgen.service: Deactivated successfully.
May 27 17:14:17.938580 jq[1516]: true
May 27 17:14:17.934814 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 27 17:14:17.944142 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 27 17:14:17.949805 extend-filesystems[1500]: Resized partition /dev/vda9
May 27 17:14:17.955892 extend-filesystems[1538]: resize2fs 1.47.2 (1-Jan-2025)
May 27 17:14:17.957214 tar[1521]: linux-arm64/LICENSE
May 27 17:14:17.957214 tar[1521]: linux-arm64/helm
May 27 17:14:17.959146 (ntainerd)[1533]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 27 17:14:17.964355 jq[1529]: true
May 27 17:14:17.969705 dbus-daemon[1497]: [system] SELinux support is enabled
May 27 17:14:17.970513 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 27 17:14:17.974174 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 27 17:14:17.974203 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 27 17:14:17.975464 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 27 17:14:17.975487 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 27 17:14:17.977286 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
May 27 17:14:17.980103 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
May 27 17:14:18.010172 kernel: EXT4-fs (vda9): resized filesystem to 1864699
May 27 17:14:18.034258 extend-filesystems[1538]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 27 17:14:18.034258 extend-filesystems[1538]: old_desc_blocks = 1, new_desc_blocks = 1
May 27 17:14:18.034258 extend-filesystems[1538]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
May 27 17:14:18.036914 extend-filesystems[1500]: Resized filesystem in /dev/vda9
May 27 17:14:18.035146 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 27 17:14:18.037141 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 27 17:14:18.051468 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (Power Button)
May 27 17:14:18.056288 systemd-logind[1507]: New seat seat0.
May 27 17:14:18.075269 systemd[1]: Started systemd-logind.service - User Login Management.
May 27 17:14:18.076698 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 27 17:14:18.087324 update_engine[1514]: I20250527 17:14:18.087092 1514 main.cc:92] Flatcar Update Engine starting
May 27 17:14:18.090000 bash[1560]: Updated "/home/core/.ssh/authorized_keys"
May 27 17:14:18.093199 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 27 17:14:18.096134 update_engine[1514]: I20250527 17:14:18.096052 1514 update_check_scheduler.cc:74] Next update check in 5m27s
May 27 17:14:18.096592 systemd[1]: Started update-engine.service - Update Engine.
May 27 17:14:18.098245 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
May 27 17:14:18.102358 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 27 17:14:18.162505 locksmithd[1569]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 27 17:14:18.214600 containerd[1533]: time="2025-05-27T17:14:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
May 27 17:14:18.215248 containerd[1533]: time="2025-05-27T17:14:18.215215263Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
May 27 17:14:18.226109 containerd[1533]: time="2025-05-27T17:14:18.226064099Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.033µs"
May 27 17:14:18.226157 containerd[1533]: time="2025-05-27T17:14:18.226114012Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
May 27 17:14:18.226157 containerd[1533]: time="2025-05-27T17:14:18.226138091Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
May 27 17:14:18.226307 containerd[1533]: time="2025-05-27T17:14:18.226285739Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
May 27 17:14:18.226348 containerd[1533]: time="2025-05-27T17:14:18.226309358Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
May 27 17:14:18.226348 containerd[1533]: time="2025-05-27T17:14:18.226337701Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:14:18.226410 containerd[1533]: time="2025-05-27T17:14:18.226390916Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
May 27 17:14:18.226438 containerd[1533]: time="2025-05-27T17:14:18.226411525Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:14:18.226653 containerd[1533]: time="2025-05-27T17:14:18.226621878Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
May 27 17:14:18.226677 containerd[1533]: time="2025-05-27T17:14:18.226651935Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:14:18.226677 containerd[1533]: time="2025-05-27T17:14:18.226668656Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
May 27 17:14:18.226711 containerd[1533]: time="2025-05-27T17:14:18.226680946Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
May 27 17:14:18.226770 containerd[1533]: time="2025-05-27T17:14:18.226753726Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
May 27 17:14:18.226968 containerd[1533]: time="2025-05-27T17:14:18.226947232Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:14:18.227005 containerd[1533]: time="2025-05-27T17:14:18.226987489Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
May 27 17:14:18.227034 containerd[1533]: time="2025-05-27T17:14:18.227001953Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
May 27 17:14:18.227058 containerd[1533]: time="2025-05-27T17:14:18.227042000Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
May 27 17:14:18.227449 containerd[1533]: time="2025-05-27T17:14:18.227408530Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
May 27 17:14:18.227787 containerd[1533]: time="2025-05-27T17:14:18.227764066Z" level=info msg="metadata content store policy set" policy=shared
May 27 17:14:18.232375 containerd[1533]: time="2025-05-27T17:14:18.232343057Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
May 27 17:14:18.232410 containerd[1533]: time="2025-05-27T17:14:18.232396732Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
May 27 17:14:18.232428 containerd[1533]: time="2025-05-27T17:14:18.232411154Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
May 27 17:14:18.232428 containerd[1533]: time="2025-05-27T17:14:18.232423612Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
May 27 17:14:18.232478 containerd[1533]: time="2025-05-27T17:14:18.232435985Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
May 27 17:14:18.232478 containerd[1533]: time="2025-05-27T17:14:18.232449195Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
May 27 17:14:18.232478 containerd[1533]: time="2025-05-27T17:14:18.232460482Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
May 27 17:14:18.232478 containerd[1533]: time="2025-05-27T17:14:18.232471727Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
May 27 17:14:18.232543 containerd[1533]: time="2025-05-27T17:14:18.232483097Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
May 27 17:14:18.232543 containerd[1533]: time="2025-05-27T17:14:18.232493506Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
May 27 17:14:18.232543 containerd[1533]: time="2025-05-27T17:14:18.232502661Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
May 27 17:14:18.232543 containerd[1533]: time="2025-05-27T17:14:18.232514784Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
May 27 17:14:18.232648 containerd[1533]: time="2025-05-27T17:14:18.232627151Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
May 27 17:14:18.232673 containerd[1533]: time="2025-05-27T17:14:18.232654616Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
May 27 17:14:18.232691 containerd[1533]: time="2025-05-27T17:14:18.232671128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
May 27 17:14:18.232691 containerd[1533]: time="2025-05-27T17:14:18.232683126Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
May 27 17:14:18.232724 containerd[1533]: time="2025-05-27T17:14:18.232699178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
May 27 17:14:18.232724 containerd[1533]: time="2025-05-27T17:14:18.232711217Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
May 27 17:14:18.232724 containerd[1533]: time="2025-05-27T17:14:18.232722128Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
May 27 17:14:18.232781 containerd[1533]: time="2025-05-27T17:14:18.232736425Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
May 27 17:14:18.232781 containerd[1533]: time="2025-05-27T17:14:18.232747377Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
May 27 17:14:18.232781 containerd[1533]: time="2025-05-27T17:14:18.232757828Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
May 27 17:14:18.232781 containerd[1533]: time="2025-05-27T17:14:18.232767442Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
May 27 17:14:18.234127 containerd[1533]: time="2025-05-27T17:14:18.232958483Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
May 27 17:14:18.234127 containerd[1533]: time="2025-05-27T17:14:18.232978883Z" level=info msg="Start snapshots syncer"
May 27 17:14:18.234127 containerd[1533]: time="2025-05-27T17:14:18.233005428Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
May 27 17:14:18.235134 containerd[1533]: time="2025-05-27T17:14:18.234380166Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
May 27 17:14:18.235134 containerd[1533]: time="2025-05-27T17:14:18.234445546Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234530156Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234650841Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234673331Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234684786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234699626Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234711414Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234722952Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234735075Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234758777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234769479Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234787036Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234828630Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234843847Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
May 27 17:14:18.235279 containerd[1533]: time="2025-05-27T17:14:18.234852333Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:14:18.235544 containerd[1533]: time="2025-05-27T17:14:18.234861864Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
May 27 17:14:18.235544 containerd[1533]: time="2025-05-27T17:14:18.234869764Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
May 27 17:14:18.235544 containerd[1533]: time="2025-05-27T17:14:18.234879630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
May 27 17:14:18.235544 containerd[1533]: time="2025-05-27T17:14:18.234889955Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
May 27 17:14:18.235544 containerd[1533]: time="2025-05-27T17:14:18.234967208Z" level=info msg="runtime interface created"
May 27 17:14:18.235544 containerd[1533]: time="2025-05-27T17:14:18.234972349Z" level=info msg="created NRI interface"
May 27 17:14:18.235544 containerd[1533]: time="2025-05-27T17:14:18.234980919Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
May 27 17:14:18.235544 containerd[1533]: time="2025-05-27T17:14:18.235003576Z" level=info msg="Connect containerd service"
May 27 17:14:18.235544 containerd[1533]: time="2025-05-27T17:14:18.235030790Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
May 27 17:14:18.236400 containerd[1533]: time="2025-05-27T17:14:18.236368574Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 27 17:14:18.348883 containerd[1533]: time="2025-05-27T17:14:18.348826142Z" level=info msg="Start subscribing containerd event"
May 27 17:14:18.349403 containerd[1533]: time="2025-05-27T17:14:18.349381162Z" level=info msg="Start recovering state"
May 27 17:14:18.349537 containerd[1533]: time="2025-05-27T17:14:18.349522332Z" level=info msg="Start event monitor"
May 27 17:14:18.349594 containerd[1533]: time="2025-05-27T17:14:18.349581734Z" level=info msg="Start cni network conf syncer for default"
May 27 17:14:18.349695 containerd[1533]: time="2025-05-27T17:14:18.349680682Z" level=info msg="Start streaming server"
May 27 17:14:18.349748 containerd[1533]: time="2025-05-27T17:14:18.349737074Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
May 27 17:14:18.349796 containerd[1533]: time="2025-05-27T17:14:18.349784228Z" level=info msg="runtime interface starting up..."
May 27 17:14:18.349845 containerd[1533]: time="2025-05-27T17:14:18.349832636Z" level=info msg="starting plugins..."
May 27 17:14:18.349901 containerd[1533]: time="2025-05-27T17:14:18.349889405Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
May 27 17:14:18.350050 containerd[1533]: time="2025-05-27T17:14:18.349338941Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 27 17:14:18.350410 containerd[1533]: time="2025-05-27T17:14:18.350379129Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 27 17:14:18.350580 containerd[1533]: time="2025-05-27T17:14:18.350565780Z" level=info msg="containerd successfully booted in 0.136442s"
May 27 17:14:18.350733 systemd[1]: Started containerd.service - containerd container runtime.
May 27 17:14:18.444464 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 27 17:14:18.463191 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 27 17:14:18.465910 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 27 17:14:18.472257 tar[1521]: linux-arm64/README.md
May 27 17:14:18.484397 systemd[1]: issuegen.service: Deactivated successfully.
May 27 17:14:18.484607 systemd[1]: Finished issuegen.service - Generate /run/issue.
May 27 17:14:18.486324 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
May 27 17:14:18.489890 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
May 27 17:14:18.506389 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
May 27 17:14:18.509331 systemd[1]: Started getty@tty1.service - Getty on tty1.
May 27 17:14:18.511346 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
May 27 17:14:18.512455 systemd[1]: Reached target getty.target - Login Prompts.
May 27 17:14:19.639540 systemd-networkd[1441]: eth0: Gained IPv6LL
May 27 17:14:19.643191 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 27 17:14:19.644567 systemd[1]: Reached target network-online.target - Network is Online.
May 27 17:14:19.646732 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
May 27 17:14:19.649166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:14:19.650924 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 27 17:14:19.674479 systemd[1]: coreos-metadata.service: Deactivated successfully.
May 27 17:14:19.674685 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
May 27 17:14:19.675889 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 27 17:14:19.703687 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 27 17:14:20.249805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:14:20.251467 systemd[1]: Reached target multi-user.target - Multi-User System.
May 27 17:14:20.253342 (kubelet)[1634]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:14:20.256180 systemd[1]: Startup finished in 2.103s (kernel) + 5.647s (initrd) + 4.033s (userspace) = 11.784s.
May 27 17:14:20.675869 kubelet[1634]: E0527 17:14:20.675746 1634 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:14:20.678154 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:14:20.678295 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:14:20.678800 systemd[1]: kubelet.service: Consumed 818ms CPU time, 257.9M memory peak.
May 27 17:14:24.129239 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
May 27 17:14:24.132657 systemd[1]: Started sshd@0-10.0.0.109:22-10.0.0.1:45850.service - OpenSSH per-connection server daemon (10.0.0.1:45850).
May 27 17:14:24.188035 sshd[1647]: Accepted publickey for core from 10.0.0.1 port 45850 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8
May 27 17:14:24.189547 sshd-session[1647]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:14:24.197283 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
May 27 17:14:24.198172 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
May 27 17:14:24.203531 systemd-logind[1507]: New session 1 of user core.
May 27 17:14:24.224116 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
May 27 17:14:24.226647 systemd[1]: Starting user@500.service - User Manager for UID 500...
May 27 17:14:24.256028 (systemd)[1651]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 27 17:14:24.258161 systemd-logind[1507]: New session c1 of user core.
May 27 17:14:24.370136 systemd[1651]: Queued start job for default target default.target.
May 27 17:14:24.381022 systemd[1651]: Created slice app.slice - User Application Slice.
May 27 17:14:24.381052 systemd[1651]: Reached target paths.target - Paths.
May 27 17:14:24.381113 systemd[1651]: Reached target timers.target - Timers.
May 27 17:14:24.382358 systemd[1651]: Starting dbus.socket - D-Bus User Message Bus Socket...
May 27 17:14:24.391528 systemd[1651]: Listening on dbus.socket - D-Bus User Message Bus Socket.
May 27 17:14:24.391596 systemd[1651]: Reached target sockets.target - Sockets.
May 27 17:14:24.391632 systemd[1651]: Reached target basic.target - Basic System.
May 27 17:14:24.391661 systemd[1651]: Reached target default.target - Main User Target.
May 27 17:14:24.391687 systemd[1651]: Startup finished in 128ms.
May 27 17:14:24.391968 systemd[1]: Started user@500.service - User Manager for UID 500.
May 27 17:14:24.393479 systemd[1]: Started session-1.scope - Session 1 of User core.
May 27 17:14:24.460729 systemd[1]: Started sshd@1-10.0.0.109:22-10.0.0.1:45864.service - OpenSSH per-connection server daemon (10.0.0.1:45864).
May 27 17:14:24.524992 sshd[1662]: Accepted publickey for core from 10.0.0.1 port 45864 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8
May 27 17:14:24.526350 sshd-session[1662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:14:24.530503 systemd-logind[1507]: New session 2 of user core.
May 27 17:14:24.542268 systemd[1]: Started session-2.scope - Session 2 of User core.
May 27 17:14:24.594124 sshd[1664]: Connection closed by 10.0.0.1 port 45864
May 27 17:14:24.594348 sshd-session[1662]: pam_unix(sshd:session): session closed for user core
May 27 17:14:24.604279 systemd[1]: sshd@1-10.0.0.109:22-10.0.0.1:45864.service: Deactivated successfully.
May 27 17:14:24.606518 systemd[1]: session-2.scope: Deactivated successfully.
May 27 17:14:24.607297 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit.
May 27 17:14:24.609757 systemd[1]: Started sshd@2-10.0.0.109:22-10.0.0.1:45866.service - OpenSSH per-connection server daemon (10.0.0.1:45866).
May 27 17:14:24.610442 systemd-logind[1507]: Removed session 2.
May 27 17:14:24.664491 sshd[1670]: Accepted publickey for core from 10.0.0.1 port 45866 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8
May 27 17:14:24.665836 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:14:24.670747 systemd-logind[1507]: New session 3 of user core.
May 27 17:14:24.682257 systemd[1]: Started session-3.scope - Session 3 of User core.
May 27 17:14:24.731897 sshd[1672]: Connection closed by 10.0.0.1 port 45866
May 27 17:14:24.732368 sshd-session[1670]: pam_unix(sshd:session): session closed for user core
May 27 17:14:24.748265 systemd[1]: sshd@2-10.0.0.109:22-10.0.0.1:45866.service: Deactivated successfully.
May 27 17:14:24.749925 systemd[1]: session-3.scope: Deactivated successfully.
May 27 17:14:24.752053 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit.
May 27 17:14:24.755423 systemd[1]: Started sshd@3-10.0.0.109:22-10.0.0.1:45882.service - OpenSSH per-connection server daemon (10.0.0.1:45882).
May 27 17:14:24.756214 systemd-logind[1507]: Removed session 3.
May 27 17:14:24.812750 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 45882 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8
May 27 17:14:24.813999 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:14:24.818141 systemd-logind[1507]: New session 4 of user core.
May 27 17:14:24.837264 systemd[1]: Started session-4.scope - Session 4 of User core.
May 27 17:14:24.888875 sshd[1680]: Connection closed by 10.0.0.1 port 45882
May 27 17:14:24.889315 sshd-session[1678]: pam_unix(sshd:session): session closed for user core
May 27 17:14:24.908832 systemd[1]: sshd@3-10.0.0.109:22-10.0.0.1:45882.service: Deactivated successfully.
May 27 17:14:24.911394 systemd[1]: session-4.scope: Deactivated successfully.
May 27 17:14:24.912071 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit.
May 27 17:14:24.914928 systemd[1]: Started sshd@4-10.0.0.109:22-10.0.0.1:45894.service - OpenSSH per-connection server daemon (10.0.0.1:45894).
May 27 17:14:24.915552 systemd-logind[1507]: Removed session 4.
May 27 17:14:24.967139 sshd[1686]: Accepted publickey for core from 10.0.0.1 port 45894 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8
May 27 17:14:24.968533 sshd-session[1686]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:14:24.973182 systemd-logind[1507]: New session 5 of user core.
May 27 17:14:24.980244 systemd[1]: Started session-5.scope - Session 5 of User core.
May 27 17:14:25.044678 sudo[1689]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
May 27 17:14:25.044952 sudo[1689]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:14:25.058770 sudo[1689]: pam_unix(sudo:session): session closed for user root
May 27 17:14:25.061138 sshd[1688]: Connection closed by 10.0.0.1 port 45894
May 27 17:14:25.060914 sshd-session[1686]: pam_unix(sshd:session): session closed for user core
May 27 17:14:25.071564 systemd[1]: sshd@4-10.0.0.109:22-10.0.0.1:45894.service: Deactivated successfully.
May 27 17:14:25.074751 systemd[1]: session-5.scope: Deactivated successfully.
May 27 17:14:25.075667 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit.
May 27 17:14:25.078641 systemd[1]: Started sshd@5-10.0.0.109:22-10.0.0.1:45906.service - OpenSSH per-connection server daemon (10.0.0.1:45906).
May 27 17:14:25.079471 systemd-logind[1507]: Removed session 5.
May 27 17:14:25.141993 sshd[1695]: Accepted publickey for core from 10.0.0.1 port 45906 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8
May 27 17:14:25.143498 sshd-session[1695]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:14:25.147485 systemd-logind[1507]: New session 6 of user core.
May 27 17:14:25.158229 systemd[1]: Started session-6.scope - Session 6 of User core.
May 27 17:14:25.209705 sudo[1699]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
May 27 17:14:25.209981 sudo[1699]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:14:25.282926 sudo[1699]: pam_unix(sudo:session): session closed for user root
May 27 17:14:25.288123 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
May 27 17:14:25.288392 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:14:25.297192 systemd[1]: Starting audit-rules.service - Load Audit Rules...
May 27 17:14:25.341604 augenrules[1721]: No rules
May 27 17:14:25.343033 systemd[1]: audit-rules.service: Deactivated successfully.
May 27 17:14:25.345137 systemd[1]: Finished audit-rules.service - Load Audit Rules.
May 27 17:14:25.346567 sudo[1698]: pam_unix(sudo:session): session closed for user root
May 27 17:14:25.348180 sshd[1697]: Connection closed by 10.0.0.1 port 45906
May 27 17:14:25.348918 sshd-session[1695]: pam_unix(sshd:session): session closed for user core
May 27 17:14:25.359275 systemd[1]: sshd@5-10.0.0.109:22-10.0.0.1:45906.service: Deactivated successfully.
May 27 17:14:25.360918 systemd[1]: session-6.scope: Deactivated successfully.
May 27 17:14:25.362182 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit.
May 27 17:14:25.365219 systemd[1]: Started sshd@6-10.0.0.109:22-10.0.0.1:45920.service - OpenSSH per-connection server daemon (10.0.0.1:45920).
May 27 17:14:25.366178 systemd-logind[1507]: Removed session 6.
May 27 17:14:25.416987 sshd[1730]: Accepted publickey for core from 10.0.0.1 port 45920 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8
May 27 17:14:25.418424 sshd-session[1730]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 27 17:14:25.422448 systemd-logind[1507]: New session 7 of user core.
May 27 17:14:25.435285 systemd[1]: Started session-7.scope - Session 7 of User core.
May 27 17:14:25.487492 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
May 27 17:14:25.487774 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
May 27 17:14:25.950808 systemd[1]: Starting docker.service - Docker Application Container Engine...
May 27 17:14:25.963477 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
May 27 17:14:26.269018 dockerd[1754]: time="2025-05-27T17:14:26.268957264Z" level=info msg="Starting up"
May 27 17:14:26.271501 dockerd[1754]: time="2025-05-27T17:14:26.271471372Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
May 27 17:14:26.352301 dockerd[1754]: time="2025-05-27T17:14:26.352255099Z" level=info msg="Loading containers: start."
May 27 17:14:26.360095 kernel: Initializing XFRM netlink socket
May 27 17:14:26.547020 systemd-networkd[1441]: docker0: Link UP
May 27 17:14:26.550349 dockerd[1754]: time="2025-05-27T17:14:26.550299339Z" level=info msg="Loading containers: done."
May 27 17:14:26.563432 dockerd[1754]: time="2025-05-27T17:14:26.563388795Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
May 27 17:14:26.563554 dockerd[1754]: time="2025-05-27T17:14:26.563463251Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
May 27 17:14:26.563579 dockerd[1754]: time="2025-05-27T17:14:26.563554726Z" level=info msg="Initializing buildkit"
May 27 17:14:26.585297 dockerd[1754]: time="2025-05-27T17:14:26.585259269Z" level=info msg="Completed buildkit initialization"
May 27 17:14:26.589887 dockerd[1754]: time="2025-05-27T17:14:26.589834902Z" level=info msg="Daemon has completed initialization"
May 27 17:14:26.590094 systemd[1]: Started docker.service - Docker Application Container Engine.
May 27 17:14:26.590934 dockerd[1754]: time="2025-05-27T17:14:26.589931617Z" level=info msg="API listen on /run/docker.sock"
May 27 17:14:27.118729 containerd[1533]: time="2025-05-27T17:14:27.118688185Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\""
May 27 17:14:27.330482 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3770110427-merged.mount: Deactivated successfully.
May 27 17:14:27.710668 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3062914151.mount: Deactivated successfully.
May 27 17:14:28.878832 containerd[1533]: time="2025-05-27T17:14:28.878773669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:28.880278 containerd[1533]: time="2025-05-27T17:14:28.880243131Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.1: active requests=0, bytes read=27349352"
May 27 17:14:28.881148 containerd[1533]: time="2025-05-27T17:14:28.881114584Z" level=info msg="ImageCreate event name:\"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:28.883924 containerd[1533]: time="2025-05-27T17:14:28.883885055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:28.884883 containerd[1533]: time="2025-05-27T17:14:28.884768084Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.1\" with image id \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:d8ae2fb01c39aa1c7add84f3d54425cf081c24c11e3946830292a8cfa4293548\", size \"27346150\" in 1.766038274s"
May 27 17:14:28.884883 containerd[1533]: time="2025-05-27T17:14:28.884802123Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.1\" returns image reference \"sha256:9a2b7cf4f8540534c6ec5b758462c6d7885c6e734652172078bba899c0e3089a\""
May 27 17:14:28.887899 containerd[1533]: time="2025-05-27T17:14:28.887868015Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\""
May 27 17:14:30.256988 containerd[1533]: time="2025-05-27T17:14:30.256936144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:30.258865 containerd[1533]: time="2025-05-27T17:14:30.258751720Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.1: active requests=0, bytes read=23531737"
May 27 17:14:30.259398 containerd[1533]: time="2025-05-27T17:14:30.259353455Z" level=info msg="ImageCreate event name:\"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:30.262324 containerd[1533]: time="2025-05-27T17:14:30.262284952Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:30.263908 containerd[1533]: time="2025-05-27T17:14:30.263849752Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.1\" with image id \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7c9bea694e3a3c01ed6a5ee02d55a6124cc08e0b2eec6caa33f2c396b8cbc3f8\", size \"25086427\" in 1.375950511s"
May 27 17:14:30.263908 containerd[1533]: time="2025-05-27T17:14:30.263884787Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.1\" returns image reference \"sha256:674996a72aa5900cbbbcd410437021fa4c62a7f829a56f58eb23ac430f2ae383\""
May 27 17:14:30.264485 containerd[1533]: time="2025-05-27T17:14:30.264460810Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\""
May 27 17:14:30.719608 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 27 17:14:30.721433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:14:30.868841 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:14:30.871847 (kubelet)[2032]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:14:30.912042 kubelet[2032]: E0527 17:14:30.911991 2032 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:14:30.915641 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:14:30.915880 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:14:30.916328 systemd[1]: kubelet.service: Consumed 142ms CPU time, 106.3M memory peak.
May 27 17:14:31.750832 containerd[1533]: time="2025-05-27T17:14:31.750772129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:31.751308 containerd[1533]: time="2025-05-27T17:14:31.751278400Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.1: active requests=0, bytes read=18293733"
May 27 17:14:31.752149 containerd[1533]: time="2025-05-27T17:14:31.752091852Z" level=info msg="ImageCreate event name:\"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:31.754589 containerd[1533]: time="2025-05-27T17:14:31.754560953Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:31.756297 containerd[1533]: time="2025-05-27T17:14:31.756266516Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.1\" with image id \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:395b7de7cdbdcc3c3a3db270844a3f71d757e2447a1e4db76b4cce46fba7fd55\", size \"19848441\" in 1.491774919s"
May 27 17:14:31.756297 containerd[1533]: time="2025-05-27T17:14:31.756296432Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.1\" returns image reference \"sha256:014094c90caacf743dc5fb4281363492da1df31cd8218aeceab3be3326277d2e\""
May 27 17:14:31.756753 containerd[1533]: time="2025-05-27T17:14:31.756725252Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\""
May 27 17:14:32.667646 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3609256269.mount: Deactivated successfully.
May 27 17:14:32.894161 containerd[1533]: time="2025-05-27T17:14:32.894110499Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:32.894968 containerd[1533]: time="2025-05-27T17:14:32.894930440Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.1: active requests=0, bytes read=28196006"
May 27 17:14:32.895876 containerd[1533]: time="2025-05-27T17:14:32.895824492Z" level=info msg="ImageCreate event name:\"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:32.897811 containerd[1533]: time="2025-05-27T17:14:32.897766177Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:32.898255 containerd[1533]: time="2025-05-27T17:14:32.898217492Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.1\" with image id \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\", repo tag \"registry.k8s.io/kube-proxy:v1.33.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:7ddf379897139ae8ade8b33cb9373b70c632a4d5491da6e234f5d830e0a50807\", size \"28195023\" in 1.141460762s"
May 27 17:14:32.898255 containerd[1533]: time="2025-05-27T17:14:32.898250601Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.1\" returns image reference \"sha256:3e58848989f556e36aa29d7852ab1712163960651e074d11cae9d31fb27192db\""
May 27 17:14:32.898709 containerd[1533]: time="2025-05-27T17:14:32.898652858Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
May 27 17:14:33.596504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1531765654.mount: Deactivated successfully.
May 27 17:14:34.545070 containerd[1533]: time="2025-05-27T17:14:34.544982203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:34.545744 containerd[1533]: time="2025-05-27T17:14:34.545694614Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
May 27 17:14:34.546463 containerd[1533]: time="2025-05-27T17:14:34.546430229Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:34.549519 containerd[1533]: time="2025-05-27T17:14:34.549481865Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:34.551059 containerd[1533]: time="2025-05-27T17:14:34.551015142Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.652333217s"
May 27 17:14:34.551149 containerd[1533]: time="2025-05-27T17:14:34.551112658Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
May 27 17:14:34.551605 containerd[1533]: time="2025-05-27T17:14:34.551565536Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
May 27 17:14:34.976365 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1834228090.mount: Deactivated successfully.
May 27 17:14:34.982069 containerd[1533]: time="2025-05-27T17:14:34.982023474Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:14:34.983078 containerd[1533]: time="2025-05-27T17:14:34.983042228Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
May 27 17:14:34.983852 containerd[1533]: time="2025-05-27T17:14:34.983804463Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:14:34.985607 containerd[1533]: time="2025-05-27T17:14:34.985560600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
May 27 17:14:34.986161 containerd[1533]: time="2025-05-27T17:14:34.986141596Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 434.542362ms"
May 27 17:14:34.986262 containerd[1533]: time="2025-05-27T17:14:34.986230185Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
May 27 17:14:34.986759 containerd[1533]: time="2025-05-27T17:14:34.986695368Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
May 27 17:14:38.250311 containerd[1533]: time="2025-05-27T17:14:38.250252589Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:38.250997 containerd[1533]: time="2025-05-27T17:14:38.250967734Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69230165"
May 27 17:14:38.251534 containerd[1533]: time="2025-05-27T17:14:38.251498504Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:38.254914 containerd[1533]: time="2025-05-27T17:14:38.254878336Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:14:38.255961 containerd[1533]: time="2025-05-27T17:14:38.255928522Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 3.269089569s"
May 27 17:14:38.255995 containerd[1533]: time="2025-05-27T17:14:38.255962749Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
May 27 17:14:40.969241 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
May 27 17:14:40.970672 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:14:41.104434 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:14:41.108316 (kubelet)[2152]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
May 27 17:14:41.140410 kubelet[2152]: E0527 17:14:41.140367 2152 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 27 17:14:41.143128 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 27 17:14:41.143270 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 27 17:14:41.143563 systemd[1]: kubelet.service: Consumed 131ms CPU time, 110.7M memory peak.
May 27 17:14:41.854849 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:14:41.855024 systemd[1]: kubelet.service: Consumed 131ms CPU time, 110.7M memory peak.
May 27 17:14:41.857054 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:14:41.877834 systemd[1]: Reload requested from client PID 2168 ('systemctl') (unit session-7.scope)...
May 27 17:14:41.877850 systemd[1]: Reloading...
May 27 17:14:41.955138 zram_generator::config[2211]: No configuration found.
May 27 17:14:42.191089 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 27 17:14:42.274940 systemd[1]: Reloading finished in 396 ms.
May 27 17:14:42.315489 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
May 27 17:14:42.315579 systemd[1]: kubelet.service: Failed with result 'signal'.
May 27 17:14:42.315800 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:14:42.315846 systemd[1]: kubelet.service: Consumed 84ms CPU time, 94.9M memory peak.
May 27 17:14:42.317304 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 27 17:14:42.428104 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
May 27 17:14:42.432963 (kubelet)[2255]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
May 27 17:14:42.467491 kubelet[2255]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:14:42.467491 kubelet[2255]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
May 27 17:14:42.467491 kubelet[2255]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
May 27 17:14:42.467491 kubelet[2255]: I0527 17:14:42.467253 2255 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
May 27 17:14:43.875648 kubelet[2255]: I0527 17:14:43.875597 2255 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
May 27 17:14:43.875648 kubelet[2255]: I0527 17:14:43.875633 2255 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
May 27 17:14:43.876026 kubelet[2255]: I0527 17:14:43.875855 2255 server.go:956] "Client rotation is on, will bootstrap in background"
May 27 17:14:43.917340 kubelet[2255]: E0527 17:14:43.917300 2255 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.109:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
May 27 17:14:43.918063 kubelet[2255]: I0527 17:14:43.918015 2255 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
May 27 17:14:43.924989 kubelet[2255]: I0527 17:14:43.924965 2255 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
May 27 17:14:43.927605 kubelet[2255]: I0527 17:14:43.927583 2255 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
May 27 17:14:43.928728 kubelet[2255]: I0527 17:14:43.928680 2255 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
May 27 17:14:43.928904 kubelet[2255]: I0527 17:14:43.928723 2255 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
May 27 17:14:43.928991 kubelet[2255]: I0527 17:14:43.928967 2255 topology_manager.go:138] "Creating topology manager with none policy"
May 27 17:14:43.928991 kubelet[2255]: I0527 17:14:43.928975 2255 container_manager_linux.go:303] "Creating device plugin manager"
May 27 17:14:43.929726 kubelet[2255]: I0527 17:14:43.929699 2255 state_mem.go:36] "Initialized new in-memory state store"
May 27 17:14:43.932176 kubelet[2255]: I0527 17:14:43.932151 2255 kubelet.go:480] "Attempting to sync node with API server"
May 27 17:14:43.932209 kubelet[2255]: I0527 17:14:43.932176 2255 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
May 27 17:14:43.932209 kubelet[2255]: I0527 17:14:43.932207 2255 kubelet.go:386] "Adding apiserver pod source"
May 27 17:14:43.933290 kubelet[2255]: I0527 17:14:43.933202 2255 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
May 27 17:14:43.934262 kubelet[2255]: I0527 17:14:43.934229 2255 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
May 27 17:14:43.934552 kubelet[2255]: E0527 17:14:43.934524 2255 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.109:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
May 27 17:14:43.934780 kubelet[2255]: E0527 17:14:43.934713 2255 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.109:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
May 27 17:14:43.934966 kubelet[2255]: I0527 17:14:43.934934 2255 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
May 27 17:14:43.935076 kubelet[2255]: W0527 17:14:43.935054 2255 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
May 27 17:14:43.939334 kubelet[2255]: I0527 17:14:43.939298 2255 watchdog_linux.go:99] "Systemd watchdog is not enabled"
May 27 17:14:43.939392 kubelet[2255]: I0527 17:14:43.939338 2255 server.go:1289] "Started kubelet"
May 27 17:14:43.939493 kubelet[2255]: I0527 17:14:43.939466 2255 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
May 27 17:14:43.940617 kubelet[2255]: I0527 17:14:43.940594 2255 server.go:317] "Adding debug handlers to kubelet server"
May 27 17:14:43.942802 kubelet[2255]: E0527 17:14:43.941825 2255 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.109:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.109:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184371b0aa8b2d38 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-05-27 17:14:43.939315 +0000 UTC m=+1.503011218,LastTimestamp:2025-05-27 17:14:43.939315 +0000 UTC m=+1.503011218,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}"
May 27 17:14:43.943295 kubelet[2255]: I0527 17:14:43.943078 2255 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
May 27 17:14:43.943295 kubelet[2255]: I0527 17:14:43.943242 2255 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
May 27 17:14:43.943377 kubelet[2255]: I0527 17:14:43.943343 2255 dynamic_serving_content.go:135] "Starting controller"
name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:14:43.943527 kubelet[2255]: I0527 17:14:43.943499 2255 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:14:43.944232 kubelet[2255]: E0527 17:14:43.944202 2255 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:14:43.944300 kubelet[2255]: I0527 17:14:43.944238 2255 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:14:43.944576 kubelet[2255]: I0527 17:14:43.944374 2255 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:14:43.944576 kubelet[2255]: I0527 17:14:43.944425 2255 reconciler.go:26] "Reconciler: start to sync state" May 27 17:14:43.944789 kubelet[2255]: I0527 17:14:43.944759 2255 factory.go:223] Registration of the systemd container factory successfully May 27 17:14:43.944881 kubelet[2255]: I0527 17:14:43.944830 2255 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:14:43.944881 kubelet[2255]: E0527 17:14:43.944857 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="200ms" May 27 17:14:43.945462 kubelet[2255]: E0527 17:14:43.945421 2255 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.109:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" May 27 17:14:43.946516 kubelet[2255]: E0527 17:14:43.946483 
2255 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:14:43.947871 kubelet[2255]: I0527 17:14:43.947838 2255 factory.go:223] Registration of the containerd container factory successfully May 27 17:14:43.961093 kubelet[2255]: I0527 17:14:43.961070 2255 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:14:43.961093 kubelet[2255]: I0527 17:14:43.961087 2255 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:14:43.961176 kubelet[2255]: I0527 17:14:43.961103 2255 state_mem.go:36] "Initialized new in-memory state store" May 27 17:14:43.962943 kubelet[2255]: I0527 17:14:43.962908 2255 policy_none.go:49] "None policy: Start" May 27 17:14:43.962943 kubelet[2255]: I0527 17:14:43.962932 2255 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:14:43.962943 kubelet[2255]: I0527 17:14:43.962942 2255 state_mem.go:35] "Initializing new in-memory state store" May 27 17:14:43.963440 kubelet[2255]: I0527 17:14:43.963386 2255 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" May 27 17:14:43.964588 kubelet[2255]: I0527 17:14:43.964566 2255 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 17:14:43.964588 kubelet[2255]: I0527 17:14:43.964594 2255 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:14:43.964665 kubelet[2255]: I0527 17:14:43.964609 2255 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
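The entries above repeat `dial tcp 10.0.0.109:6443: connect: connection refused`: the kubelet is dialing the apiserver before the kube-apiserver static pod is listening, so every TCP connect fails immediately. A minimal sketch of that failure mode (the `can_reach` helper is hypothetical, not kubelet code):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds.

    Until something is listening on the port, the connect attempt
    fails fast with 'connection refused', which is exactly what the
    kubelet's reflector and certificate_manager errors report.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Find a port nothing is listening on, then probe it.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))   # kernel assigns a free port
free_port = probe.getsockname()[1]
probe.close()                  # nothing listens there now

print(can_reach("127.0.0.1", free_port))  # refused -> False
```

Once the apiserver sandbox started later in the log comes up on 6443, the same dial succeeds and the reflectors stop erroring.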
May 27 17:14:43.964665 kubelet[2255]: I0527 17:14:43.964615 2255 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:14:43.964665 kubelet[2255]: E0527 17:14:43.964646 2255 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:14:43.966167 kubelet[2255]: E0527 17:14:43.966129 2255 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.109:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.109:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" May 27 17:14:43.971099 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 27 17:14:43.988247 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 27 17:14:43.991642 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. May 27 17:14:44.011098 kubelet[2255]: E0527 17:14:44.010948 2255 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:14:44.011288 kubelet[2255]: I0527 17:14:44.011274 2255 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:14:44.011379 kubelet[2255]: I0527 17:14:44.011350 2255 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:14:44.012036 kubelet[2255]: I0527 17:14:44.012002 2255 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:14:44.012341 kubelet[2255]: E0527 17:14:44.012309 2255 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" May 27 17:14:44.012409 kubelet[2255]: E0527 17:14:44.012355 2255 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" May 27 17:14:44.074898 systemd[1]: Created slice kubepods-burstable-podee334c92fadbf4633b884b311c819d5f.slice - libcontainer container kubepods-burstable-podee334c92fadbf4633b884b311c819d5f.slice. May 27 17:14:44.089315 kubelet[2255]: E0527 17:14:44.089283 2255 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:14:44.092280 systemd[1]: Created slice kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice - libcontainer container kubepods-burstable-pod97963c41ada533e2e0872a518ecd4611.slice. May 27 17:14:44.094190 kubelet[2255]: E0527 17:14:44.094171 2255 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:14:44.096186 systemd[1]: Created slice kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice - libcontainer container kubepods-burstable-pod8fba52155e63f70cc922ab7cc8c200fd.slice. 
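The `Failed to ensure lease exists, will retry` entries show the retry interval doubling: `interval="200ms"` here, then `"400ms"` and `"800ms"` further down. That is a standard exponential backoff; a generic sketch of the pattern (illustrative values only, not the kubelet's actual backoff configuration, which lives in client-go):

```python
def backoff_intervals(base: float, cap: float, tries: int) -> list[float]:
    """Doubling retry intervals with an upper cap, the shape visible
    in the lease controller's 200ms -> 400ms -> 800ms progression."""
    out, interval = [], base
    for _ in range(tries):
        out.append(interval)
        interval = min(interval * 2, cap)
    return out

print(backoff_intervals(0.2, 7.0, 4))  # [0.2, 0.4, 0.8, 1.6]
```

Capping keeps a long apiserver outage from growing the interval without bound while still spacing out futile dials.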
May 27 17:14:44.097402 kubelet[2255]: E0527 17:14:44.097380 2255 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:14:44.113244 kubelet[2255]: I0527 17:14:44.113209 2255 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:14:44.113645 kubelet[2255]: E0527 17:14:44.113620 2255 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 27 17:14:44.147164 kubelet[2255]: E0527 17:14:44.146256 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="400ms" May 27 17:14:44.245846 kubelet[2255]: I0527 17:14:44.245774 2255 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:44.245846 kubelet[2255]: I0527 17:14:44.245822 2255 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 17:14:44.245846 kubelet[2255]: I0527 17:14:44.245845 2255 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee334c92fadbf4633b884b311c819d5f-k8s-certs\") pod 
\"kube-apiserver-localhost\" (UID: \"ee334c92fadbf4633b884b311c819d5f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:14:44.246021 kubelet[2255]: I0527 17:14:44.245862 2255 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:44.246021 kubelet[2255]: I0527 17:14:44.245880 2255 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:44.246021 kubelet[2255]: I0527 17:14:44.245895 2255 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:44.246021 kubelet[2255]: I0527 17:14:44.245910 2255 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee334c92fadbf4633b884b311c819d5f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ee334c92fadbf4633b884b311c819d5f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:14:44.246021 kubelet[2255]: I0527 17:14:44.245926 2255 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/ee334c92fadbf4633b884b311c819d5f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"ee334c92fadbf4633b884b311c819d5f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:14:44.246157 kubelet[2255]: I0527 17:14:44.245942 2255 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:44.315512 kubelet[2255]: I0527 17:14:44.315433 2255 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:14:44.315882 kubelet[2255]: E0527 17:14:44.315845 2255 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 27 17:14:44.390556 kubelet[2255]: E0527 17:14:44.390519 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:44.391170 containerd[1533]: time="2025-05-27T17:14:44.391133973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ee334c92fadbf4633b884b311c819d5f,Namespace:kube-system,Attempt:0,}" May 27 17:14:44.395359 kubelet[2255]: E0527 17:14:44.395337 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:44.395818 containerd[1533]: time="2025-05-27T17:14:44.395733124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,}" May 27 17:14:44.398306 kubelet[2255]: 
E0527 17:14:44.398170 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:44.398542 containerd[1533]: time="2025-05-27T17:14:44.398511609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,}" May 27 17:14:44.411278 containerd[1533]: time="2025-05-27T17:14:44.411238964Z" level=info msg="connecting to shim 532a953f0cfd2d40957ce944a4e0ac510376a2e125ea780c4a5f1244fbb430b9" address="unix:///run/containerd/s/d002c2da1a14cfaef5de896a50877b279e7832cc882f80e184d0e238c68828df" namespace=k8s.io protocol=ttrpc version=3 May 27 17:14:44.428722 containerd[1533]: time="2025-05-27T17:14:44.428681673Z" level=info msg="connecting to shim c09483106e125d8a120d0830b23be3881fbfa8d26f761519ac454a8e5db45623" address="unix:///run/containerd/s/797d0bdc604d477d700cf34460e188cf3431960ac6b525accd2da769b85d7c43" namespace=k8s.io protocol=ttrpc version=3 May 27 17:14:44.438630 containerd[1533]: time="2025-05-27T17:14:44.438592691Z" level=info msg="connecting to shim dfaeac14967a7ebb7cd5b591d31c79ae7d24284964752ba2519684c448006cce" address="unix:///run/containerd/s/72b2fd6d7734a785dfb60e4f740453f2671597eda4f0078584d31caa4c166369" namespace=k8s.io protocol=ttrpc version=3 May 27 17:14:44.439227 systemd[1]: Started cri-containerd-532a953f0cfd2d40957ce944a4e0ac510376a2e125ea780c4a5f1244fbb430b9.scope - libcontainer container 532a953f0cfd2d40957ce944a4e0ac510376a2e125ea780c4a5f1244fbb430b9. May 27 17:14:44.462270 systemd[1]: Started cri-containerd-c09483106e125d8a120d0830b23be3881fbfa8d26f761519ac454a8e5db45623.scope - libcontainer container c09483106e125d8a120d0830b23be3881fbfa8d26f761519ac454a8e5db45623. 
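The recurring `Nameserver limits exceeded` warnings reflect the classic glibc resolver limit of three nameservers: when the node's resolv.conf lists more, the kubelet keeps only the first three (here `1.1.1.1 1.0.0.1 8.8.8.8`) and logs the applied line. A sketch of that truncation (the `applied_nameservers` helper is hypothetical):

```python
MAX_NAMESERVERS = 3  # glibc resolver limit; the kubelet enforces the same cap

def applied_nameservers(resolv_conf: str, limit: int = MAX_NAMESERVERS) -> list[str]:
    """Keep only the first `limit` nameserver entries, mirroring the
    'some nameservers have been omitted' behavior in the log."""
    servers = [parts[1]
               for line in resolv_conf.splitlines()
               if (parts := line.split()) and parts[0] == "nameserver" and len(parts) > 1]
    return servers[:limit]

conf = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
print(applied_nameservers(conf))  # ['1.1.1.1', '1.0.0.1', '8.8.8.8']
```

The warning repeats on every DNS-config resolution (once per sandbox here), which is why it appears before each `RunPodSandbox` call.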
May 27 17:14:44.466133 systemd[1]: Started cri-containerd-dfaeac14967a7ebb7cd5b591d31c79ae7d24284964752ba2519684c448006cce.scope - libcontainer container dfaeac14967a7ebb7cd5b591d31c79ae7d24284964752ba2519684c448006cce. May 27 17:14:44.493705 containerd[1533]: time="2025-05-27T17:14:44.493655640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:ee334c92fadbf4633b884b311c819d5f,Namespace:kube-system,Attempt:0,} returns sandbox id \"532a953f0cfd2d40957ce944a4e0ac510376a2e125ea780c4a5f1244fbb430b9\"" May 27 17:14:44.498179 kubelet[2255]: E0527 17:14:44.494578 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:44.499917 containerd[1533]: time="2025-05-27T17:14:44.499876418Z" level=info msg="CreateContainer within sandbox \"532a953f0cfd2d40957ce944a4e0ac510376a2e125ea780c4a5f1244fbb430b9\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 27 17:14:44.509337 containerd[1533]: time="2025-05-27T17:14:44.509302358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8fba52155e63f70cc922ab7cc8c200fd,Namespace:kube-system,Attempt:0,} returns sandbox id \"dfaeac14967a7ebb7cd5b591d31c79ae7d24284964752ba2519684c448006cce\"" May 27 17:14:44.509723 containerd[1533]: time="2025-05-27T17:14:44.509306563Z" level=info msg="Container a44dc79e5424cbdf43b79b730ef3bcc4a3c63c1c9867df62fb5b9ae6de32dc02: CDI devices from CRI Config.CDIDevices: []" May 27 17:14:44.510024 kubelet[2255]: E0527 17:14:44.510003 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:44.512751 containerd[1533]: time="2025-05-27T17:14:44.512724783Z" level=info msg="CreateContainer within sandbox 
\"dfaeac14967a7ebb7cd5b591d31c79ae7d24284964752ba2519684c448006cce\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 27 17:14:44.513646 containerd[1533]: time="2025-05-27T17:14:44.513607858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:97963c41ada533e2e0872a518ecd4611,Namespace:kube-system,Attempt:0,} returns sandbox id \"c09483106e125d8a120d0830b23be3881fbfa8d26f761519ac454a8e5db45623\"" May 27 17:14:44.514762 kubelet[2255]: E0527 17:14:44.514742 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:44.517955 containerd[1533]: time="2025-05-27T17:14:44.517631884Z" level=info msg="CreateContainer within sandbox \"c09483106e125d8a120d0830b23be3881fbfa8d26f761519ac454a8e5db45623\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 27 17:14:44.519565 containerd[1533]: time="2025-05-27T17:14:44.519504622Z" level=info msg="CreateContainer within sandbox \"532a953f0cfd2d40957ce944a4e0ac510376a2e125ea780c4a5f1244fbb430b9\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a44dc79e5424cbdf43b79b730ef3bcc4a3c63c1c9867df62fb5b9ae6de32dc02\"" May 27 17:14:44.520226 containerd[1533]: time="2025-05-27T17:14:44.520159859Z" level=info msg="StartContainer for \"a44dc79e5424cbdf43b79b730ef3bcc4a3c63c1c9867df62fb5b9ae6de32dc02\"" May 27 17:14:44.522120 containerd[1533]: time="2025-05-27T17:14:44.521930574Z" level=info msg="connecting to shim a44dc79e5424cbdf43b79b730ef3bcc4a3c63c1c9867df62fb5b9ae6de32dc02" address="unix:///run/containerd/s/d002c2da1a14cfaef5de896a50877b279e7832cc882f80e184d0e238c68828df" protocol=ttrpc version=3 May 27 17:14:44.522827 containerd[1533]: time="2025-05-27T17:14:44.522803475Z" level=info msg="Container f7fef4a8ad8eec18eefa6485f65ffb072c40115b0b996e7ffdc06adbd5982e4d: CDI devices from CRI 
Config.CDIDevices: []" May 27 17:14:44.524238 containerd[1533]: time="2025-05-27T17:14:44.524212285Z" level=info msg="Container 7970ac5287111f37a3773925abdf06751745563cce0f5df63ea0dc7b0255c243: CDI devices from CRI Config.CDIDevices: []" May 27 17:14:44.530615 containerd[1533]: time="2025-05-27T17:14:44.530522508Z" level=info msg="CreateContainer within sandbox \"c09483106e125d8a120d0830b23be3881fbfa8d26f761519ac454a8e5db45623\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"7970ac5287111f37a3773925abdf06751745563cce0f5df63ea0dc7b0255c243\"" May 27 17:14:44.531095 containerd[1533]: time="2025-05-27T17:14:44.531069953Z" level=info msg="StartContainer for \"7970ac5287111f37a3773925abdf06751745563cce0f5df63ea0dc7b0255c243\"" May 27 17:14:44.531998 containerd[1533]: time="2025-05-27T17:14:44.531955752Z" level=info msg="CreateContainer within sandbox \"dfaeac14967a7ebb7cd5b591d31c79ae7d24284964752ba2519684c448006cce\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f7fef4a8ad8eec18eefa6485f65ffb072c40115b0b996e7ffdc06adbd5982e4d\"" May 27 17:14:44.532385 containerd[1533]: time="2025-05-27T17:14:44.532362641Z" level=info msg="StartContainer for \"f7fef4a8ad8eec18eefa6485f65ffb072c40115b0b996e7ffdc06adbd5982e4d\"" May 27 17:14:44.533076 containerd[1533]: time="2025-05-27T17:14:44.532617397Z" level=info msg="connecting to shim 7970ac5287111f37a3773925abdf06751745563cce0f5df63ea0dc7b0255c243" address="unix:///run/containerd/s/797d0bdc604d477d700cf34460e188cf3431960ac6b525accd2da769b85d7c43" protocol=ttrpc version=3 May 27 17:14:44.533525 containerd[1533]: time="2025-05-27T17:14:44.533497948Z" level=info msg="connecting to shim f7fef4a8ad8eec18eefa6485f65ffb072c40115b0b996e7ffdc06adbd5982e4d" address="unix:///run/containerd/s/72b2fd6d7734a785dfb60e4f740453f2671597eda4f0078584d31caa4c166369" protocol=ttrpc version=3 May 27 17:14:44.544208 systemd[1]: Started 
cri-containerd-a44dc79e5424cbdf43b79b730ef3bcc4a3c63c1c9867df62fb5b9ae6de32dc02.scope - libcontainer container a44dc79e5424cbdf43b79b730ef3bcc4a3c63c1c9867df62fb5b9ae6de32dc02. May 27 17:14:44.547384 kubelet[2255]: E0527 17:14:44.547353 2255 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.109:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.109:6443: connect: connection refused" interval="800ms" May 27 17:14:44.549990 systemd[1]: Started cri-containerd-7970ac5287111f37a3773925abdf06751745563cce0f5df63ea0dc7b0255c243.scope - libcontainer container 7970ac5287111f37a3773925abdf06751745563cce0f5df63ea0dc7b0255c243. May 27 17:14:44.550989 systemd[1]: Started cri-containerd-f7fef4a8ad8eec18eefa6485f65ffb072c40115b0b996e7ffdc06adbd5982e4d.scope - libcontainer container f7fef4a8ad8eec18eefa6485f65ffb072c40115b0b996e7ffdc06adbd5982e4d. May 27 17:14:44.594399 containerd[1533]: time="2025-05-27T17:14:44.594364653Z" level=info msg="StartContainer for \"a44dc79e5424cbdf43b79b730ef3bcc4a3c63c1c9867df62fb5b9ae6de32dc02\" returns successfully" May 27 17:14:44.599247 containerd[1533]: time="2025-05-27T17:14:44.599123867Z" level=info msg="StartContainer for \"7970ac5287111f37a3773925abdf06751745563cce0f5df63ea0dc7b0255c243\" returns successfully" May 27 17:14:44.609654 containerd[1533]: time="2025-05-27T17:14:44.609627714Z" level=info msg="StartContainer for \"f7fef4a8ad8eec18eefa6485f65ffb072c40115b0b996e7ffdc06adbd5982e4d\" returns successfully" May 27 17:14:44.721177 kubelet[2255]: I0527 17:14:44.721147 2255 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:14:44.721741 kubelet[2255]: E0527 17:14:44.721707 2255 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.109:6443/api/v1/nodes\": dial tcp 10.0.0.109:6443: connect: connection refused" node="localhost" May 27 17:14:44.970657 kubelet[2255]: E0527 
17:14:44.970615 2255 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:14:44.970969 kubelet[2255]: E0527 17:14:44.970723 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:44.972230 kubelet[2255]: E0527 17:14:44.972167 2255 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:14:44.972289 kubelet[2255]: E0527 17:14:44.972269 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:44.981434 kubelet[2255]: E0527 17:14:44.981410 2255 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:14:44.981539 kubelet[2255]: E0527 17:14:44.981521 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:45.525270 kubelet[2255]: I0527 17:14:45.525233 2255 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:14:45.981668 kubelet[2255]: E0527 17:14:45.981172 2255 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:14:45.981668 kubelet[2255]: E0527 17:14:45.981295 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:45.981668 kubelet[2255]: E0527 17:14:45.981512 2255 kubelet.go:3305] "No need to 
create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:14:45.981668 kubelet[2255]: E0527 17:14:45.981607 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:45.982583 kubelet[2255]: E0527 17:14:45.982421 2255 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" May 27 17:14:45.982583 kubelet[2255]: E0527 17:14:45.982516 2255 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:46.100843 kubelet[2255]: E0527 17:14:46.100810 2255 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" May 27 17:14:46.254714 kubelet[2255]: I0527 17:14:46.254554 2255 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:14:46.345003 kubelet[2255]: I0527 17:14:46.344567 2255 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:14:46.349229 kubelet[2255]: E0527 17:14:46.349182 2255 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" May 27 17:14:46.349486 kubelet[2255]: I0527 17:14:46.349305 2255 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:14:46.351377 kubelet[2255]: E0527 17:14:46.351330 2255 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was 
found" pod="kube-system/kube-scheduler-localhost" May 27 17:14:46.351471 kubelet[2255]: I0527 17:14:46.351454 2255 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:14:46.353260 kubelet[2255]: E0527 17:14:46.353232 2255 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" May 27 17:14:46.934367 kubelet[2255]: I0527 17:14:46.934324 2255 apiserver.go:52] "Watching apiserver" May 27 17:14:46.944965 kubelet[2255]: I0527 17:14:46.944924 2255 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:14:48.429327 systemd[1]: Reload requested from client PID 2541 ('systemctl') (unit session-7.scope)... May 27 17:14:48.429352 systemd[1]: Reloading... May 27 17:14:48.502152 zram_generator::config[2587]: No configuration found. May 27 17:14:48.570322 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 27 17:14:48.669957 systemd[1]: Reloading finished in 240 ms. May 27 17:14:48.703730 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:14:48.714266 systemd[1]: kubelet.service: Deactivated successfully. May 27 17:14:48.714532 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 27 17:14:48.714628 systemd[1]: kubelet.service: Consumed 1.897s CPU time, 127.4M memory peak. May 27 17:14:48.716771 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 27 17:14:48.858990 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
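The kubelet lines throughout this log use the klog header format, `Lmmdd hh:mm:ss.uuuuuu pid file:line] message`, where `L` is the severity (I/W/E/F). When grepping a boot like this one, it helps to split the header out; a minimal sketch (the `parse_klog` helper and its regex are hypothetical, written against the format seen in these entries):

```python
import re

# klog header: Lmmdd hh:mm:ss.uuuuuu pid file:line] msg
KLOG = re.compile(
    r'(?P<sev>[IWEF])(?P<date>\d{4}) (?P<time>[\d:.]+)\s+'
    r'(?P<pid>\d+) (?P<src>[\w./-]+:\d+)\] (?P<msg>.*)')

def parse_klog(line: str) -> dict:
    """Split one klog-formatted line into its header fields."""
    m = KLOG.match(line)
    return m.groupdict() if m else {}

entry = parse_klog(
    'I0527 17:14:48.909005 2626 server.go:530] '
    '"Kubelet version" kubeletVersion="v1.33.0"')
print(entry["sev"], entry["src"])  # I server.go:530
```

Filtering on `sev in ("E", "W")` quickly surfaces the connection-refused and mirror-pod failures scattered through a log like this.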
May 27 17:14:48.863129 (kubelet)[2626]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 27 17:14:48.901286 kubelet[2626]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:14:48.901286 kubelet[2626]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. May 27 17:14:48.901286 kubelet[2626]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 27 17:14:48.902040 kubelet[2626]: I0527 17:14:48.901325 2626 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 27 17:14:48.909076 kubelet[2626]: I0527 17:14:48.909005 2626 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" May 27 17:14:48.909076 kubelet[2626]: I0527 17:14:48.909041 2626 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 27 17:14:48.909365 kubelet[2626]: I0527 17:14:48.909324 2626 server.go:956] "Client rotation is on, will bootstrap in background" May 27 17:14:48.910587 kubelet[2626]: I0527 17:14:48.910565 2626 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" May 27 17:14:48.913775 kubelet[2626]: I0527 17:14:48.913652 2626 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 27 17:14:48.927173 kubelet[2626]: I0527 17:14:48.927144 2626 server.go:1446] "Using cgroup driver setting received from the CRI runtime" 
cgroupDriver="systemd" May 27 17:14:48.930173 kubelet[2626]: I0527 17:14:48.930133 2626 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 27 17:14:48.930384 kubelet[2626]: I0527 17:14:48.930358 2626 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 27 17:14:48.930532 kubelet[2626]: I0527 17:14:48.930385 2626 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVers
ion":2} May 27 17:14:48.930614 kubelet[2626]: I0527 17:14:48.930542 2626 topology_manager.go:138] "Creating topology manager with none policy" May 27 17:14:48.930614 kubelet[2626]: I0527 17:14:48.930550 2626 container_manager_linux.go:303] "Creating device plugin manager" May 27 17:14:48.930614 kubelet[2626]: I0527 17:14:48.930592 2626 state_mem.go:36] "Initialized new in-memory state store" May 27 17:14:48.930735 kubelet[2626]: I0527 17:14:48.930725 2626 kubelet.go:480] "Attempting to sync node with API server" May 27 17:14:48.930768 kubelet[2626]: I0527 17:14:48.930741 2626 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" May 27 17:14:48.930796 kubelet[2626]: I0527 17:14:48.930775 2626 kubelet.go:386] "Adding apiserver pod source" May 27 17:14:48.930796 kubelet[2626]: I0527 17:14:48.930794 2626 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 27 17:14:48.931969 kubelet[2626]: I0527 17:14:48.931944 2626 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" May 27 17:14:48.932568 kubelet[2626]: I0527 17:14:48.932544 2626 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" May 27 17:14:48.934464 kubelet[2626]: I0527 17:14:48.934441 2626 watchdog_linux.go:99] "Systemd watchdog is not enabled" May 27 17:14:48.934533 kubelet[2626]: I0527 17:14:48.934488 2626 server.go:1289] "Started kubelet" May 27 17:14:48.934591 kubelet[2626]: I0527 17:14:48.934531 2626 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 May 27 17:14:48.935401 kubelet[2626]: I0527 17:14:48.935375 2626 server.go:317] "Adding debug handlers to kubelet server" May 27 17:14:48.936605 kubelet[2626]: I0527 17:14:48.936550 2626 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 27 17:14:48.936881 kubelet[2626]: I0527 
17:14:48.936858 2626 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 27 17:14:48.937625 kubelet[2626]: I0527 17:14:48.937591 2626 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 27 17:14:48.943138 kubelet[2626]: E0527 17:14:48.941850 2626 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 27 17:14:48.943138 kubelet[2626]: I0527 17:14:48.942519 2626 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 27 17:14:48.943859 kubelet[2626]: I0527 17:14:48.943460 2626 volume_manager.go:297] "Starting Kubelet Volume Manager" May 27 17:14:48.943859 kubelet[2626]: E0527 17:14:48.943648 2626 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" May 27 17:14:48.948683 kubelet[2626]: I0527 17:14:48.948659 2626 desired_state_of_world_populator.go:150] "Desired state populator starts to run" May 27 17:14:48.951194 kubelet[2626]: I0527 17:14:48.951170 2626 reconciler.go:26] "Reconciler: start to sync state" May 27 17:14:48.952255 kubelet[2626]: I0527 17:14:48.952234 2626 factory.go:223] Registration of the containerd container factory successfully May 27 17:14:48.952332 kubelet[2626]: I0527 17:14:48.952324 2626 factory.go:223] Registration of the systemd container factory successfully May 27 17:14:48.952482 kubelet[2626]: I0527 17:14:48.952458 2626 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 27 17:14:48.965302 kubelet[2626]: I0527 17:14:48.965202 2626 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" May 27 17:14:48.966397 kubelet[2626]: I0527 17:14:48.966372 2626 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" May 27 17:14:48.966509 kubelet[2626]: I0527 17:14:48.966500 2626 status_manager.go:230] "Starting to sync pod status with apiserver" May 27 17:14:48.966590 kubelet[2626]: I0527 17:14:48.966579 2626 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." May 27 17:14:48.966730 kubelet[2626]: I0527 17:14:48.966719 2626 kubelet.go:2436] "Starting kubelet main sync loop" May 27 17:14:48.966854 kubelet[2626]: E0527 17:14:48.966833 2626 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 27 17:14:48.998354 kubelet[2626]: I0527 17:14:48.997785 2626 cpu_manager.go:221] "Starting CPU manager" policy="none" May 27 17:14:48.998354 kubelet[2626]: I0527 17:14:48.997828 2626 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" May 27 17:14:48.998354 kubelet[2626]: I0527 17:14:48.997851 2626 state_mem.go:36] "Initialized new in-memory state store" May 27 17:14:48.998354 kubelet[2626]: I0527 17:14:48.998083 2626 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 27 17:14:48.998354 kubelet[2626]: I0527 17:14:48.998103 2626 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 27 17:14:48.998354 kubelet[2626]: I0527 17:14:48.998122 2626 policy_none.go:49] "None policy: Start" May 27 17:14:48.998354 kubelet[2626]: I0527 17:14:48.998132 2626 memory_manager.go:186] "Starting memorymanager" policy="None" May 27 17:14:48.998354 kubelet[2626]: I0527 17:14:48.998142 2626 state_mem.go:35] "Initializing new in-memory state store" May 27 17:14:48.999223 kubelet[2626]: I0527 17:14:48.998398 2626 state_mem.go:75] "Updated machine memory state" May 27 17:14:49.005095 kubelet[2626]: E0527 17:14:49.005039 2626 manager.go:517] "Failed to read 
data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" May 27 17:14:49.005328 kubelet[2626]: I0527 17:14:49.005272 2626 eviction_manager.go:189] "Eviction manager: starting control loop" May 27 17:14:49.005328 kubelet[2626]: I0527 17:14:49.005296 2626 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 27 17:14:49.006311 kubelet[2626]: I0527 17:14:49.005622 2626 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 27 17:14:49.006504 kubelet[2626]: E0527 17:14:49.006474 2626 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" May 27 17:14:49.067867 kubelet[2626]: I0527 17:14:49.067794 2626 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:14:49.068004 kubelet[2626]: I0527 17:14:49.067958 2626 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" May 27 17:14:49.068004 kubelet[2626]: I0527 17:14:49.067969 2626 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:14:49.109754 kubelet[2626]: I0527 17:14:49.109712 2626 kubelet_node_status.go:75] "Attempting to register node" node="localhost" May 27 17:14:49.115749 kubelet[2626]: I0527 17:14:49.115691 2626 kubelet_node_status.go:124] "Node was previously registered" node="localhost" May 27 17:14:49.115855 kubelet[2626]: I0527 17:14:49.115771 2626 kubelet_node_status.go:78] "Successfully registered node" node="localhost" May 27 17:14:49.152976 kubelet[2626]: I0527 17:14:49.152871 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ee334c92fadbf4633b884b311c819d5f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: 
\"ee334c92fadbf4633b884b311c819d5f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:14:49.152976 kubelet[2626]: I0527 17:14:49.152915 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:49.152976 kubelet[2626]: I0527 17:14:49.152951 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:49.152976 kubelet[2626]: I0527 17:14:49.152980 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8fba52155e63f70cc922ab7cc8c200fd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8fba52155e63f70cc922ab7cc8c200fd\") " pod="kube-system/kube-scheduler-localhost" May 27 17:14:49.153192 kubelet[2626]: I0527 17:14:49.152997 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ee334c92fadbf4633b884b311c819d5f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"ee334c92fadbf4633b884b311c819d5f\") " pod="kube-system/kube-apiserver-localhost" May 27 17:14:49.153192 kubelet[2626]: I0527 17:14:49.153035 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ee334c92fadbf4633b884b311c819d5f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"ee334c92fadbf4633b884b311c819d5f\") " 
pod="kube-system/kube-apiserver-localhost" May 27 17:14:49.153192 kubelet[2626]: I0527 17:14:49.153082 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:49.153192 kubelet[2626]: I0527 17:14:49.153099 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:49.153192 kubelet[2626]: I0527 17:14:49.153119 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/97963c41ada533e2e0872a518ecd4611-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"97963c41ada533e2e0872a518ecd4611\") " pod="kube-system/kube-controller-manager-localhost" May 27 17:14:49.373133 kubelet[2626]: E0527 17:14:49.373095 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:49.373308 kubelet[2626]: E0527 17:14:49.373188 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:49.373465 kubelet[2626]: E0527 17:14:49.373449 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 
17:14:49.392252 sudo[2666]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin May 27 17:14:49.392499 sudo[2666]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) May 27 17:14:49.820212 sudo[2666]: pam_unix(sudo:session): session closed for user root May 27 17:14:49.932291 kubelet[2626]: I0527 17:14:49.932159 2626 apiserver.go:52] "Watching apiserver" May 27 17:14:49.951331 kubelet[2626]: I0527 17:14:49.951290 2626 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" May 27 17:14:49.980712 kubelet[2626]: I0527 17:14:49.980687 2626 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" May 27 17:14:49.983199 kubelet[2626]: I0527 17:14:49.981376 2626 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" May 27 17:14:49.983199 kubelet[2626]: E0527 17:14:49.981394 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:49.985957 kubelet[2626]: E0527 17:14:49.985925 2626 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" May 27 17:14:49.986088 kubelet[2626]: E0527 17:14:49.986051 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:49.987266 kubelet[2626]: E0527 17:14:49.987236 2626 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" May 27 17:14:49.987367 kubelet[2626]: E0527 17:14:49.987347 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:49.996835 kubelet[2626]: I0527 17:14:49.996771 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=0.99676082 podStartE2EDuration="996.76082ms" podCreationTimestamp="2025-05-27 17:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:14:49.996581593 +0000 UTC m=+1.130234428" watchObservedRunningTime="2025-05-27 17:14:49.99676082 +0000 UTC m=+1.130413655" May 27 17:14:50.010733 kubelet[2626]: I0527 17:14:50.010662 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.010647114 podStartE2EDuration="1.010647114s" podCreationTimestamp="2025-05-27 17:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:14:50.003279722 +0000 UTC m=+1.136932557" watchObservedRunningTime="2025-05-27 17:14:50.010647114 +0000 UTC m=+1.144299909" May 27 17:14:50.017874 kubelet[2626]: I0527 17:14:50.017758 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.017746381 podStartE2EDuration="1.017746381s" podCreationTimestamp="2025-05-27 17:14:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:14:50.010992588 +0000 UTC m=+1.144645424" watchObservedRunningTime="2025-05-27 17:14:50.017746381 +0000 UTC m=+1.151399216" May 27 17:14:50.981610 kubelet[2626]: E0527 17:14:50.981581 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:50.985167 
kubelet[2626]: E0527 17:14:50.985142 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:51.987973 kubelet[2626]: E0527 17:14:51.987936 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:52.287956 sudo[1733]: pam_unix(sudo:session): session closed for user root May 27 17:14:52.290092 sshd[1732]: Connection closed by 10.0.0.1 port 45920 May 27 17:14:52.290321 sshd-session[1730]: pam_unix(sshd:session): session closed for user core May 27 17:14:52.293496 systemd[1]: sshd@6-10.0.0.109:22-10.0.0.1:45920.service: Deactivated successfully. May 27 17:14:52.295404 systemd[1]: session-7.scope: Deactivated successfully. May 27 17:14:52.296156 systemd[1]: session-7.scope: Consumed 6.864s CPU time, 264.7M memory peak. May 27 17:14:52.297172 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit. May 27 17:14:52.298666 systemd-logind[1507]: Removed session 7. May 27 17:14:55.158473 kubelet[2626]: I0527 17:14:55.158444 2626 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 27 17:14:55.159044 containerd[1533]: time="2025-05-27T17:14:55.158713025Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
May 27 17:14:55.159998 kubelet[2626]: I0527 17:14:55.159433 2626 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 27 17:14:55.210256 kubelet[2626]: E0527 17:14:55.210221 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:55.991882 kubelet[2626]: E0527 17:14:55.991852 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:56.054561 systemd[1]: Created slice kubepods-besteffort-poda2fe5b25_2d35_469d_a72c_a80b92072de2.slice - libcontainer container kubepods-besteffort-poda2fe5b25_2d35_469d_a72c_a80b92072de2.slice. May 27 17:14:56.075994 systemd[1]: Created slice kubepods-burstable-pode42f5e5f_fb1f_44ac_accf_95246ee7065b.slice - libcontainer container kubepods-burstable-pode42f5e5f_fb1f_44ac_accf_95246ee7065b.slice. 
May 27 17:14:56.102868 kubelet[2626]: I0527 17:14:56.102683 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a2fe5b25-2d35-469d-a72c-a80b92072de2-lib-modules\") pod \"kube-proxy-g27tq\" (UID: \"a2fe5b25-2d35-469d-a72c-a80b92072de2\") " pod="kube-system/kube-proxy-g27tq" May 27 17:14:56.102868 kubelet[2626]: I0527 17:14:56.102723 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmj7w\" (UniqueName: \"kubernetes.io/projected/a2fe5b25-2d35-469d-a72c-a80b92072de2-kube-api-access-xmj7w\") pod \"kube-proxy-g27tq\" (UID: \"a2fe5b25-2d35-469d-a72c-a80b92072de2\") " pod="kube-system/kube-proxy-g27tq" May 27 17:14:56.102868 kubelet[2626]: I0527 17:14:56.102743 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-hostproc\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.102868 kubelet[2626]: I0527 17:14:56.102760 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cni-path\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.102868 kubelet[2626]: I0527 17:14:56.102775 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a2fe5b25-2d35-469d-a72c-a80b92072de2-kube-proxy\") pod \"kube-proxy-g27tq\" (UID: \"a2fe5b25-2d35-469d-a72c-a80b92072de2\") " pod="kube-system/kube-proxy-g27tq" May 27 17:14:56.102868 kubelet[2626]: I0527 17:14:56.102789 2626 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a2fe5b25-2d35-469d-a72c-a80b92072de2-xtables-lock\") pod \"kube-proxy-g27tq\" (UID: \"a2fe5b25-2d35-469d-a72c-a80b92072de2\") " pod="kube-system/kube-proxy-g27tq" May 27 17:14:56.103126 kubelet[2626]: I0527 17:14:56.102802 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-run\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.103126 kubelet[2626]: I0527 17:14:56.102818 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-bpf-maps\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.103126 kubelet[2626]: I0527 17:14:56.102832 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-cgroup\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.203714 kubelet[2626]: I0527 17:14:56.203667 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-etc-cni-netd\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.203714 kubelet[2626]: I0527 17:14:56.203706 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: 
\"kubernetes.io/configmap/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-config-path\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.203714 kubelet[2626]: I0527 17:14:56.203725 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tv7mx\" (UniqueName: \"kubernetes.io/projected/e42f5e5f-fb1f-44ac-accf-95246ee7065b-kube-api-access-tv7mx\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.204774 kubelet[2626]: I0527 17:14:56.203748 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-lib-modules\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.204774 kubelet[2626]: I0527 17:14:56.203769 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-xtables-lock\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.204774 kubelet[2626]: I0527 17:14:56.203783 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e42f5e5f-fb1f-44ac-accf-95246ee7065b-hubble-tls\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.204774 kubelet[2626]: I0527 17:14:56.203835 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e42f5e5f-fb1f-44ac-accf-95246ee7065b-clustermesh-secrets\") pod \"cilium-tjm5t\" (UID: 
\"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.204774 kubelet[2626]: I0527 17:14:56.203889 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-host-proc-sys-net\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.204774 kubelet[2626]: I0527 17:14:56.203905 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-host-proc-sys-kernel\") pod \"cilium-tjm5t\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") " pod="kube-system/cilium-tjm5t" May 27 17:14:56.211462 kubelet[2626]: E0527 17:14:56.211375 2626 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 27 17:14:56.211462 kubelet[2626]: E0527 17:14:56.211404 2626 projected.go:194] Error preparing data for projected volume kube-api-access-xmj7w for pod kube-system/kube-proxy-g27tq: configmap "kube-root-ca.crt" not found May 27 17:14:56.211575 kubelet[2626]: E0527 17:14:56.211507 2626 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a2fe5b25-2d35-469d-a72c-a80b92072de2-kube-api-access-xmj7w podName:a2fe5b25-2d35-469d-a72c-a80b92072de2 nodeName:}" failed. No retries permitted until 2025-05-27 17:14:56.711484349 +0000 UTC m=+7.845137184 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-xmj7w" (UniqueName: "kubernetes.io/projected/a2fe5b25-2d35-469d-a72c-a80b92072de2-kube-api-access-xmj7w") pod "kube-proxy-g27tq" (UID: "a2fe5b25-2d35-469d-a72c-a80b92072de2") : configmap "kube-root-ca.crt" not found May 27 17:14:56.353282 systemd[1]: Created slice kubepods-besteffort-pod4363c10c_f35c_4acc_bc63_e743732cad1f.slice - libcontainer container kubepods-besteffort-pod4363c10c_f35c_4acc_bc63_e743732cad1f.slice. May 27 17:14:56.379230 kubelet[2626]: E0527 17:14:56.379188 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:14:56.381015 containerd[1533]: time="2025-05-27T17:14:56.380940769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tjm5t,Uid:e42f5e5f-fb1f-44ac-accf-95246ee7065b,Namespace:kube-system,Attempt:0,}" May 27 17:14:56.400209 containerd[1533]: time="2025-05-27T17:14:56.400155081Z" level=info msg="connecting to shim ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab" address="unix:///run/containerd/s/59054bed349ed93e3233a312cf3f9fa584ef9e59c19de50f4cdf71ed8a0ed406" namespace=k8s.io protocol=ttrpc version=3 May 27 17:14:56.431251 systemd[1]: Started cri-containerd-ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab.scope - libcontainer container ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab. 
May 27 17:14:56.454035 containerd[1533]: time="2025-05-27T17:14:56.453979928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tjm5t,Uid:e42f5e5f-fb1f-44ac-accf-95246ee7065b,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\""
May 27 17:14:56.454987 kubelet[2626]: E0527 17:14:56.454773    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:14:56.456261 containerd[1533]: time="2025-05-27T17:14:56.456208900Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
May 27 17:14:56.505596 kubelet[2626]: I0527 17:14:56.505539    2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4363c10c-f35c-4acc-bc63-e743732cad1f-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-s7mcb\" (UID: \"4363c10c-f35c-4acc-bc63-e743732cad1f\") " pod="kube-system/cilium-operator-6c4d7847fc-s7mcb"
May 27 17:14:56.505596 kubelet[2626]: I0527 17:14:56.505583    2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598dd\" (UniqueName: \"kubernetes.io/projected/4363c10c-f35c-4acc-bc63-e743732cad1f-kube-api-access-598dd\") pod \"cilium-operator-6c4d7847fc-s7mcb\" (UID: \"4363c10c-f35c-4acc-bc63-e743732cad1f\") " pod="kube-system/cilium-operator-6c4d7847fc-s7mcb"
May 27 17:14:56.657156 kubelet[2626]: E0527 17:14:56.657040    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:14:56.658122 containerd[1533]: time="2025-05-27T17:14:56.658045644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s7mcb,Uid:4363c10c-f35c-4acc-bc63-e743732cad1f,Namespace:kube-system,Attempt:0,}"
May 27 17:14:56.675231 containerd[1533]: time="2025-05-27T17:14:56.675183283Z" level=info msg="connecting to shim ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4" address="unix:///run/containerd/s/21c24b7f1405da41810778f374bb3040ffaca2666483f3cfd3394f7186fa9284" namespace=k8s.io protocol=ttrpc version=3
May 27 17:14:56.701247 systemd[1]: Started cri-containerd-ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4.scope - libcontainer container ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4.
May 27 17:14:56.730949 containerd[1533]: time="2025-05-27T17:14:56.730865701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-s7mcb,Uid:4363c10c-f35c-4acc-bc63-e743732cad1f,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4\""
May 27 17:14:56.731694 kubelet[2626]: E0527 17:14:56.731520    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:14:56.973516 kubelet[2626]: E0527 17:14:56.973460    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:14:56.974489 containerd[1533]: time="2025-05-27T17:14:56.974443825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g27tq,Uid:a2fe5b25-2d35-469d-a72c-a80b92072de2,Namespace:kube-system,Attempt:0,}"
May 27 17:14:56.989014 containerd[1533]: time="2025-05-27T17:14:56.988968082Z" level=info msg="connecting to shim d06eebf42dd3b4f82c26a5deba61e9d9369b50d75711a158ab2779759998f5fd" address="unix:///run/containerd/s/12f2523fd44da1ca1c449bad8931efc05391c22da7cbd5bb081a7cf29c84d494" namespace=k8s.io protocol=ttrpc version=3
May 27 17:14:57.016228 systemd[1]: Started cri-containerd-d06eebf42dd3b4f82c26a5deba61e9d9369b50d75711a158ab2779759998f5fd.scope - libcontainer container d06eebf42dd3b4f82c26a5deba61e9d9369b50d75711a158ab2779759998f5fd.
May 27 17:14:57.039737 containerd[1533]: time="2025-05-27T17:14:57.039701190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-g27tq,Uid:a2fe5b25-2d35-469d-a72c-a80b92072de2,Namespace:kube-system,Attempt:0,} returns sandbox id \"d06eebf42dd3b4f82c26a5deba61e9d9369b50d75711a158ab2779759998f5fd\""
May 27 17:14:57.040572 kubelet[2626]: E0527 17:14:57.040547    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:14:57.045736 containerd[1533]: time="2025-05-27T17:14:57.045687642Z" level=info msg="CreateContainer within sandbox \"d06eebf42dd3b4f82c26a5deba61e9d9369b50d75711a158ab2779759998f5fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
May 27 17:14:57.054119 containerd[1533]: time="2025-05-27T17:14:57.053402120Z" level=info msg="Container b2bcde611b9af1d84c70810a097996275af5cc475da3ac3de79a294db8348e75: CDI devices from CRI Config.CDIDevices: []"
May 27 17:14:57.059900 containerd[1533]: time="2025-05-27T17:14:57.059865267Z" level=info msg="CreateContainer within sandbox \"d06eebf42dd3b4f82c26a5deba61e9d9369b50d75711a158ab2779759998f5fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b2bcde611b9af1d84c70810a097996275af5cc475da3ac3de79a294db8348e75\""
May 27 17:14:57.060492 containerd[1533]: time="2025-05-27T17:14:57.060466718Z" level=info msg="StartContainer for \"b2bcde611b9af1d84c70810a097996275af5cc475da3ac3de79a294db8348e75\""
May 27 17:14:57.061864 containerd[1533]: time="2025-05-27T17:14:57.061838004Z" level=info msg="connecting to shim b2bcde611b9af1d84c70810a097996275af5cc475da3ac3de79a294db8348e75" address="unix:///run/containerd/s/12f2523fd44da1ca1c449bad8931efc05391c22da7cbd5bb081a7cf29c84d494" protocol=ttrpc version=3
May 27 17:14:57.088260 systemd[1]: Started cri-containerd-b2bcde611b9af1d84c70810a097996275af5cc475da3ac3de79a294db8348e75.scope - libcontainer container b2bcde611b9af1d84c70810a097996275af5cc475da3ac3de79a294db8348e75.
May 27 17:14:57.124145 containerd[1533]: time="2025-05-27T17:14:57.124084837Z" level=info msg="StartContainer for \"b2bcde611b9af1d84c70810a097996275af5cc475da3ac3de79a294db8348e75\" returns successfully"
May 27 17:14:58.001221 kubelet[2626]: E0527 17:14:58.000275    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:14:58.009263 kubelet[2626]: I0527 17:14:58.009209    2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-g27tq" podStartSLOduration=2.009195988 podStartE2EDuration="2.009195988s" podCreationTimestamp="2025-05-27 17:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:14:58.008439746 +0000 UTC m=+9.142092581" watchObservedRunningTime="2025-05-27 17:14:58.009195988 +0000 UTC m=+9.142848823"
May 27 17:14:59.197310 kubelet[2626]: E0527 17:14:59.196984    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:00.007243 kubelet[2626]: E0527 17:15:00.007216    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:00.146111 kubelet[2626]: E0527 17:15:00.146043    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:00.637182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount689539714.mount: Deactivated successfully.
May 27 17:15:01.009234 kubelet[2626]: E0527 17:15:01.009203    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:03.446942 containerd[1533]: time="2025-05-27T17:15:03.446221990Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:15:03.448251 containerd[1533]: time="2025-05-27T17:15:03.448220291Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
May 27 17:15:03.449412 containerd[1533]: time="2025-05-27T17:15:03.449383216Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:15:03.451391 containerd[1533]: time="2025-05-27T17:15:03.451353386Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 6.995110345s"
May 27 17:15:03.451492 containerd[1533]: time="2025-05-27T17:15:03.451476241Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
May 27 17:15:03.466466 containerd[1533]: time="2025-05-27T17:15:03.466421586Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
May 27 17:15:03.478306 update_engine[1514]: I20250527 17:15:03.478112  1514 update_attempter.cc:509] Updating boot flags...
May 27 17:15:03.479995 containerd[1533]: time="2025-05-27T17:15:03.479581766Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
May 27 17:15:03.486310 containerd[1533]: time="2025-05-27T17:15:03.485655348Z" level=info msg="Container 6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b: CDI devices from CRI Config.CDIDevices: []"
May 27 17:15:03.488956 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3106013778.mount: Deactivated successfully.
May 27 17:15:03.502899 containerd[1533]: time="2025-05-27T17:15:03.502763789Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\""
May 27 17:15:03.504870 containerd[1533]: time="2025-05-27T17:15:03.504836285Z" level=info msg="StartContainer for \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\""
May 27 17:15:03.512295 containerd[1533]: time="2025-05-27T17:15:03.512182480Z" level=info msg="connecting to shim 6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b" address="unix:///run/containerd/s/59054bed349ed93e3233a312cf3f9fa584ef9e59c19de50f4cdf71ed8a0ed406" protocol=ttrpc version=3
May 27 17:15:03.626283 systemd[1]: Started cri-containerd-6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b.scope - libcontainer container 6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b.
May 27 17:15:03.657577 containerd[1533]: time="2025-05-27T17:15:03.657540167Z" level=info msg="StartContainer for \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\" returns successfully"
May 27 17:15:03.714330 systemd[1]: cri-containerd-6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b.scope: Deactivated successfully.
May 27 17:15:03.741625 containerd[1533]: time="2025-05-27T17:15:03.741571174Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\" id:\"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\" pid:3071 exited_at:{seconds:1748366103 nanos:732863964}"
May 27 17:15:03.743921 containerd[1533]: time="2025-05-27T17:15:03.743872613Z" level=info msg="received exit event container_id:\"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\" id:\"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\" pid:3071 exited_at:{seconds:1748366103 nanos:732863964}"
May 27 17:15:03.774692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b-rootfs.mount: Deactivated successfully.
May 27 17:15:04.018295 kubelet[2626]: E0527 17:15:04.018111    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:04.025806 containerd[1533]: time="2025-05-27T17:15:04.025750417Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
May 27 17:15:04.040734 containerd[1533]: time="2025-05-27T17:15:04.040684871Z" level=info msg="Container daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e: CDI devices from CRI Config.CDIDevices: []"
May 27 17:15:04.062933 containerd[1533]: time="2025-05-27T17:15:04.062889768Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\""
May 27 17:15:04.070974 containerd[1533]: time="2025-05-27T17:15:04.070923538Z" level=info msg="StartContainer for \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\""
May 27 17:15:04.072870 containerd[1533]: time="2025-05-27T17:15:04.072837040Z" level=info msg="connecting to shim daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e" address="unix:///run/containerd/s/59054bed349ed93e3233a312cf3f9fa584ef9e59c19de50f4cdf71ed8a0ed406" protocol=ttrpc version=3
May 27 17:15:04.091451 systemd[1]: Started cri-containerd-daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e.scope - libcontainer container daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e.
May 27 17:15:04.115191 containerd[1533]: time="2025-05-27T17:15:04.115116638Z" level=info msg="StartContainer for \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\" returns successfully"
May 27 17:15:04.128245 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 27 17:15:04.128454 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 27 17:15:04.128672 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
May 27 17:15:04.130798 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 27 17:15:04.130989 systemd[1]: cri-containerd-daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e.scope: Deactivated successfully.
May 27 17:15:04.131818 containerd[1533]: time="2025-05-27T17:15:04.131578508Z" level=info msg="TaskExit event in podsandbox handler container_id:\"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\" id:\"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\" pid:3117 exited_at:{seconds:1748366104 nanos:131216272}"
May 27 17:15:04.131818 containerd[1533]: time="2025-05-27T17:15:04.131644896Z" level=info msg="received exit event container_id:\"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\" id:\"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\" pid:3117 exited_at:{seconds:1748366104 nanos:131216272}"
May 27 17:15:04.155851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 27 17:15:04.814727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2249354439.mount: Deactivated successfully.
May 27 17:15:05.031172 kubelet[2626]: E0527 17:15:05.031133    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:05.038577 containerd[1533]: time="2025-05-27T17:15:05.038536855Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
May 27 17:15:05.055856 containerd[1533]: time="2025-05-27T17:15:05.055817322Z" level=info msg="Container a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1: CDI devices from CRI Config.CDIDevices: []"
May 27 17:15:05.074032 containerd[1533]: time="2025-05-27T17:15:05.073863943Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\""
May 27 17:15:05.074642 containerd[1533]: time="2025-05-27T17:15:05.074618371Z" level=info msg="StartContainer for \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\""
May 27 17:15:05.077092 containerd[1533]: time="2025-05-27T17:15:05.077037961Z" level=info msg="connecting to shim a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1" address="unix:///run/containerd/s/59054bed349ed93e3233a312cf3f9fa584ef9e59c19de50f4cdf71ed8a0ed406" protocol=ttrpc version=3
May 27 17:15:05.116228 systemd[1]: Started cri-containerd-a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1.scope - libcontainer container a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1.
May 27 17:15:05.130090 containerd[1533]: time="2025-05-27T17:15:05.129963285Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:15:05.130942 containerd[1533]: time="2025-05-27T17:15:05.130906791Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
May 27 17:15:05.131780 containerd[1533]: time="2025-05-27T17:15:05.131737251Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 27 17:15:05.133739 containerd[1533]: time="2025-05-27T17:15:05.133709418Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.66697437s"
May 27 17:15:05.134209 containerd[1533]: time="2025-05-27T17:15:05.134098016Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
May 27 17:15:05.139602 containerd[1533]: time="2025-05-27T17:15:05.139565973Z" level=info msg="CreateContainer within sandbox \"ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
May 27 17:15:05.146495 containerd[1533]: time="2025-05-27T17:15:05.146462953Z" level=info msg="Container c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d: CDI devices from CRI Config.CDIDevices: []"
May 27 17:15:05.151243 containerd[1533]: time="2025-05-27T17:15:05.151209655Z" level=info msg="CreateContainer within sandbox \"ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\""
May 27 17:15:05.153998 containerd[1533]: time="2025-05-27T17:15:05.153970104Z" level=info msg="StartContainer for \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\""
May 27 17:15:05.154769 containerd[1533]: time="2025-05-27T17:15:05.154719010Z" level=info msg="connecting to shim c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d" address="unix:///run/containerd/s/21c24b7f1405da41810778f374bb3040ffaca2666483f3cfd3394f7186fa9284" protocol=ttrpc version=3
May 27 17:15:05.155917 containerd[1533]: time="2025-05-27T17:15:05.155886967Z" level=info msg="StartContainer for \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\" returns successfully"
May 27 17:15:05.181336 systemd[1]: cri-containerd-a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1.scope: Deactivated successfully.
May 27 17:15:05.189373 systemd[1]: Started cri-containerd-c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d.scope - libcontainer container c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d.
May 27 17:15:05.190694 containerd[1533]: time="2025-05-27T17:15:05.190243418Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\" id:\"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\" pid:3178 exited_at:{seconds:1748366105 nanos:189036245}"
May 27 17:15:05.195534 containerd[1533]: time="2025-05-27T17:15:05.195480280Z" level=info msg="received exit event container_id:\"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\" id:\"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\" pid:3178 exited_at:{seconds:1748366105 nanos:189036245}"
May 27 17:15:05.228102 containerd[1533]: time="2025-05-27T17:15:05.227465641Z" level=info msg="StartContainer for \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" returns successfully"
May 27 17:15:06.031503 kubelet[2626]: E0527 17:15:06.031467    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:06.045079 kubelet[2626]: I0527 17:15:06.044908    2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-s7mcb" podStartSLOduration=1.6424510209999998 podStartE2EDuration="10.044894103s" podCreationTimestamp="2025-05-27 17:14:56 +0000 UTC" firstStartedPulling="2025-05-27 17:14:56.732479672 +0000 UTC m=+7.866132467" lastFinishedPulling="2025-05-27 17:15:05.134922714 +0000 UTC m=+16.268575549" observedRunningTime="2025-05-27 17:15:06.044576699 +0000 UTC m=+17.178229534" watchObservedRunningTime="2025-05-27 17:15:06.044894103 +0000 UTC m=+17.178546938"
May 27 17:15:06.045079 kubelet[2626]: E0527 17:15:06.045027    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:06.050354 containerd[1533]: time="2025-05-27T17:15:06.050309053Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
May 27 17:15:06.062409 containerd[1533]: time="2025-05-27T17:15:06.062366152Z" level=info msg="Container 275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3: CDI devices from CRI Config.CDIDevices: []"
May 27 17:15:06.070148 containerd[1533]: time="2025-05-27T17:15:06.070112131Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\""
May 27 17:15:06.071399 containerd[1533]: time="2025-05-27T17:15:06.071354656Z" level=info msg="StartContainer for \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\""
May 27 17:15:06.073305 containerd[1533]: time="2025-05-27T17:15:06.073237990Z" level=info msg="connecting to shim 275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3" address="unix:///run/containerd/s/59054bed349ed93e3233a312cf3f9fa584ef9e59c19de50f4cdf71ed8a0ed406" protocol=ttrpc version=3
May 27 17:15:06.092330 systemd[1]: Started cri-containerd-275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3.scope - libcontainer container 275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3.
May 27 17:15:06.114162 systemd[1]: cri-containerd-275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3.scope: Deactivated successfully.
May 27 17:15:06.115738 containerd[1533]: time="2025-05-27T17:15:06.115692656Z" level=info msg="TaskExit event in podsandbox handler container_id:\"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\" id:\"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\" pid:3259 exited_at:{seconds:1748366106 nanos:114753170}"
May 27 17:15:06.116498 containerd[1533]: time="2025-05-27T17:15:06.116455113Z" level=info msg="received exit event container_id:\"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\" id:\"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\" pid:3259 exited_at:{seconds:1748366106 nanos:114753170}"
May 27 17:15:06.122822 containerd[1533]: time="2025-05-27T17:15:06.122794744Z" level=info msg="StartContainer for \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\" returns successfully"
May 27 17:15:06.134143 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3-rootfs.mount: Deactivated successfully.
May 27 17:15:07.050521 kubelet[2626]: E0527 17:15:07.050325    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:07.053100 kubelet[2626]: E0527 17:15:07.051534    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:07.054722 containerd[1533]: time="2025-05-27T17:15:07.054679816Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
May 27 17:15:07.074358 containerd[1533]: time="2025-05-27T17:15:07.074316476Z" level=info msg="Container 277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84: CDI devices from CRI Config.CDIDevices: []"
May 27 17:15:07.080259 containerd[1533]: time="2025-05-27T17:15:07.080216829Z" level=info msg="CreateContainer within sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\""
May 27 17:15:07.080704 containerd[1533]: time="2025-05-27T17:15:07.080667117Z" level=info msg="StartContainer for \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\""
May 27 17:15:07.081742 containerd[1533]: time="2025-05-27T17:15:07.081717587Z" level=info msg="connecting to shim 277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84" address="unix:///run/containerd/s/59054bed349ed93e3233a312cf3f9fa584ef9e59c19de50f4cdf71ed8a0ed406" protocol=ttrpc version=3
May 27 17:15:07.102216 systemd[1]: Started cri-containerd-277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84.scope - libcontainer container 277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84.
May 27 17:15:07.134467 containerd[1533]: time="2025-05-27T17:15:07.134422699Z" level=info msg="StartContainer for \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" returns successfully"
May 27 17:15:07.256667 containerd[1533]: time="2025-05-27T17:15:07.256611199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" id:\"4683350a461878621231f4bce8e4eb283e7183fdf3a9c939b486af6ba94d98f8\" pid:3328 exited_at:{seconds:1748366107 nanos:254517741}"
May 27 17:15:07.276826 kubelet[2626]: I0527 17:15:07.276778    2626 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
May 27 17:15:07.341485 systemd[1]: Created slice kubepods-burstable-podb4fbf2e2_efed_4209_ba83_e1ec66a17453.slice - libcontainer container kubepods-burstable-podb4fbf2e2_efed_4209_ba83_e1ec66a17453.slice.
May 27 17:15:07.353410 systemd[1]: Created slice kubepods-burstable-pod70dbe311_aa4b_4e27_a5e5_18f4fe85891f.slice - libcontainer container kubepods-burstable-pod70dbe311_aa4b_4e27_a5e5_18f4fe85891f.slice.
May 27 17:15:07.401702 kubelet[2626]: I0527 17:15:07.401662    2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-csgj6\" (UniqueName: \"kubernetes.io/projected/70dbe311-aa4b-4e27-a5e5-18f4fe85891f-kube-api-access-csgj6\") pod \"coredns-674b8bbfcf-th9r7\" (UID: \"70dbe311-aa4b-4e27-a5e5-18f4fe85891f\") " pod="kube-system/coredns-674b8bbfcf-th9r7"
May 27 17:15:07.401702 kubelet[2626]: I0527 17:15:07.401702    2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b4fbf2e2-efed-4209-ba83-e1ec66a17453-config-volume\") pod \"coredns-674b8bbfcf-c9knb\" (UID: \"b4fbf2e2-efed-4209-ba83-e1ec66a17453\") " pod="kube-system/coredns-674b8bbfcf-c9knb"
May 27 17:15:07.401848 kubelet[2626]: I0527 17:15:07.401725    2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/70dbe311-aa4b-4e27-a5e5-18f4fe85891f-config-volume\") pod \"coredns-674b8bbfcf-th9r7\" (UID: \"70dbe311-aa4b-4e27-a5e5-18f4fe85891f\") " pod="kube-system/coredns-674b8bbfcf-th9r7"
May 27 17:15:07.401848 kubelet[2626]: I0527 17:15:07.401747    2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jn8sn\" (UniqueName: \"kubernetes.io/projected/b4fbf2e2-efed-4209-ba83-e1ec66a17453-kube-api-access-jn8sn\") pod \"coredns-674b8bbfcf-c9knb\" (UID: \"b4fbf2e2-efed-4209-ba83-e1ec66a17453\") " pod="kube-system/coredns-674b8bbfcf-c9knb"
May 27 17:15:07.649394 kubelet[2626]: E0527 17:15:07.648900    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:07.649947 containerd[1533]: time="2025-05-27T17:15:07.649914399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c9knb,Uid:b4fbf2e2-efed-4209-ba83-e1ec66a17453,Namespace:kube-system,Attempt:0,}"
May 27 17:15:07.659890 kubelet[2626]: E0527 17:15:07.656621    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:07.663456 containerd[1533]: time="2025-05-27T17:15:07.660218589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-th9r7,Uid:70dbe311-aa4b-4e27-a5e5-18f4fe85891f,Namespace:kube-system,Attempt:0,}"
May 27 17:15:08.056507 kubelet[2626]: E0527 17:15:08.056470    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:08.076014 kubelet[2626]: I0527 17:15:08.073898    2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-tjm5t" podStartSLOduration=5.066434799 podStartE2EDuration="12.073881718s" podCreationTimestamp="2025-05-27 17:14:56 +0000 UTC" firstStartedPulling="2025-05-27 17:14:56.455402735 +0000 UTC m=+7.589055570" lastFinishedPulling="2025-05-27 17:15:03.462849654 +0000 UTC m=+14.596502489" observedRunningTime="2025-05-27 17:15:08.073370257 +0000 UTC m=+19.207023052" watchObservedRunningTime="2025-05-27 17:15:08.073881718 +0000 UTC m=+19.207534513"
May 27 17:15:09.057211 kubelet[2626]: E0527 17:15:09.057168    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:09.344638 systemd-networkd[1441]: cilium_host: Link UP
May 27 17:15:09.345192 systemd-networkd[1441]: cilium_net: Link UP
May 27 17:15:09.345372 systemd-networkd[1441]: cilium_host: Gained carrier
May 27 17:15:09.345513 systemd-networkd[1441]: cilium_net: Gained carrier
May 27 17:15:09.437904 systemd-networkd[1441]: cilium_vxlan: Link UP
May 27 17:15:09.438018 systemd-networkd[1441]: cilium_vxlan: Gained carrier
May 27 17:15:09.759238 systemd-networkd[1441]: cilium_host: Gained IPv6LL
May 27 17:15:09.777714 kernel: NET: Registered PF_ALG protocol family
May 27 17:15:09.792262 systemd-networkd[1441]: cilium_net: Gained IPv6LL
May 27 17:15:10.059013 kubelet[2626]: E0527 17:15:10.058857    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:10.383353 systemd-networkd[1441]: lxc_health: Link UP
May 27 17:15:10.384921 systemd-networkd[1441]: lxc_health: Gained carrier
May 27 17:15:10.721501 systemd-networkd[1441]: lxc1112224ed0c6: Link UP
May 27 17:15:10.730174 kernel: eth0: renamed from tmp91a35
May 27 17:15:10.736980 systemd-networkd[1441]: lxca7d006aeb677: Link UP
May 27 17:15:10.739796 kernel: eth0: renamed from tmpe41a9
May 27 17:15:10.738530 systemd-networkd[1441]: lxc1112224ed0c6: Gained carrier
May 27 17:15:10.740651 systemd-networkd[1441]: lxca7d006aeb677: Gained carrier
May 27 17:15:10.967251 systemd-networkd[1441]: cilium_vxlan: Gained IPv6LL
May 27 17:15:11.062385 kubelet[2626]: E0527 17:15:11.062268    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:11.799252 systemd-networkd[1441]: lxc1112224ed0c6: Gained IPv6LL
May 27 17:15:11.863236 systemd-networkd[1441]: lxca7d006aeb677: Gained IPv6LL
May 27 17:15:12.064723 kubelet[2626]: E0527 17:15:12.064388    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:12.247302 systemd-networkd[1441]: lxc_health: Gained IPv6LL
May 27 17:15:13.065801 kubelet[2626]: E0527 17:15:13.065767    2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
May 27 17:15:14.396082 containerd[1533]: time="2025-05-27T17:15:14.395681090Z" level=info msg="connecting to shim e41a9611d2d7f675239d25ae5bcfb5367b484a708f9538f9fe7d129dbf1d040f" address="unix:///run/containerd/s/804c9879ca07b5889f4da1096485e037b282c73c1aad000d334b8ce71221475b" namespace=k8s.io protocol=ttrpc version=3
May 27 17:15:14.396419 containerd[1533]: time="2025-05-27T17:15:14.396333428Z" level=info msg="connecting to shim 91a35b43a825d79cb8dee268b9038e012ce0ec5f1dad0580bd5cc6fd8de1d975" address="unix:///run/containerd/s/dee318fee361facec06bc35f3e39bf12ccb95ece37e206d39f1f83fb33a0cf1e" namespace=k8s.io protocol=ttrpc version=3
May 27 17:15:14.423272 systemd[1]: Started cri-containerd-91a35b43a825d79cb8dee268b9038e012ce0ec5f1dad0580bd5cc6fd8de1d975.scope - libcontainer container 91a35b43a825d79cb8dee268b9038e012ce0ec5f1dad0580bd5cc6fd8de1d975.
May 27 17:15:14.426676 systemd[1]: Started cri-containerd-e41a9611d2d7f675239d25ae5bcfb5367b484a708f9538f9fe7d129dbf1d040f.scope - libcontainer container e41a9611d2d7f675239d25ae5bcfb5367b484a708f9538f9fe7d129dbf1d040f.
May 27 17:15:14.441903 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:15:14.444175 systemd-resolved[1358]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address May 27 17:15:14.470475 containerd[1533]: time="2025-05-27T17:15:14.470195545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-th9r7,Uid:70dbe311-aa4b-4e27-a5e5-18f4fe85891f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e41a9611d2d7f675239d25ae5bcfb5367b484a708f9538f9fe7d129dbf1d040f\"" May 27 17:15:14.471040 kubelet[2626]: E0527 17:15:14.471017 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:15:14.478332 containerd[1533]: time="2025-05-27T17:15:14.478239425Z" level=info msg="CreateContainer within sandbox \"e41a9611d2d7f675239d25ae5bcfb5367b484a708f9538f9fe7d129dbf1d040f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:15:14.487029 containerd[1533]: time="2025-05-27T17:15:14.486628439Z" level=info msg="Container ee8cc913b40b04ae0155f7a1f5cadf7194017b86a4969f36f5714838addc860c: CDI devices from CRI Config.CDIDevices: []" May 27 17:15:14.493536 containerd[1533]: time="2025-05-27T17:15:14.493494036Z" level=info msg="CreateContainer within sandbox \"e41a9611d2d7f675239d25ae5bcfb5367b484a708f9538f9fe7d129dbf1d040f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ee8cc913b40b04ae0155f7a1f5cadf7194017b86a4969f36f5714838addc860c\"" May 27 17:15:14.494120 containerd[1533]: time="2025-05-27T17:15:14.494093040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-c9knb,Uid:b4fbf2e2-efed-4209-ba83-e1ec66a17453,Namespace:kube-system,Attempt:0,} returns sandbox id \"91a35b43a825d79cb8dee268b9038e012ce0ec5f1dad0580bd5cc6fd8de1d975\"" May 27 
17:15:14.495850 containerd[1533]: time="2025-05-27T17:15:14.495808429Z" level=info msg="StartContainer for \"ee8cc913b40b04ae0155f7a1f5cadf7194017b86a4969f36f5714838addc860c\"" May 27 17:15:14.497272 containerd[1533]: time="2025-05-27T17:15:14.497225416Z" level=info msg="connecting to shim ee8cc913b40b04ae0155f7a1f5cadf7194017b86a4969f36f5714838addc860c" address="unix:///run/containerd/s/804c9879ca07b5889f4da1096485e037b282c73c1aad000d334b8ce71221475b" protocol=ttrpc version=3 May 27 17:15:14.498575 kubelet[2626]: E0527 17:15:14.498514 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:15:14.502598 containerd[1533]: time="2025-05-27T17:15:14.502563596Z" level=info msg="CreateContainer within sandbox \"91a35b43a825d79cb8dee268b9038e012ce0ec5f1dad0580bd5cc6fd8de1d975\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 27 17:15:14.510087 containerd[1533]: time="2025-05-27T17:15:14.509976903Z" level=info msg="Container 9083851fcbbfe0e2fc5eac82a0ed89d88d8bf14732382749b8803f5eded08142: CDI devices from CRI Config.CDIDevices: []" May 27 17:15:14.515531 containerd[1533]: time="2025-05-27T17:15:14.515490011Z" level=info msg="CreateContainer within sandbox \"91a35b43a825d79cb8dee268b9038e012ce0ec5f1dad0580bd5cc6fd8de1d975\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9083851fcbbfe0e2fc5eac82a0ed89d88d8bf14732382749b8803f5eded08142\"" May 27 17:15:14.516119 containerd[1533]: time="2025-05-27T17:15:14.516097617Z" level=info msg="StartContainer for \"9083851fcbbfe0e2fc5eac82a0ed89d88d8bf14732382749b8803f5eded08142\"" May 27 17:15:14.517011 containerd[1533]: time="2025-05-27T17:15:14.516986740Z" level=info msg="connecting to shim 9083851fcbbfe0e2fc5eac82a0ed89d88d8bf14732382749b8803f5eded08142" address="unix:///run/containerd/s/dee318fee361facec06bc35f3e39bf12ccb95ece37e206d39f1f83fb33a0cf1e" 
protocol=ttrpc version=3 May 27 17:15:14.522334 systemd[1]: Started cri-containerd-ee8cc913b40b04ae0155f7a1f5cadf7194017b86a4969f36f5714838addc860c.scope - libcontainer container ee8cc913b40b04ae0155f7a1f5cadf7194017b86a4969f36f5714838addc860c. May 27 17:15:14.541219 systemd[1]: Started cri-containerd-9083851fcbbfe0e2fc5eac82a0ed89d88d8bf14732382749b8803f5eded08142.scope - libcontainer container 9083851fcbbfe0e2fc5eac82a0ed89d88d8bf14732382749b8803f5eded08142. May 27 17:15:14.565481 containerd[1533]: time="2025-05-27T17:15:14.565431266Z" level=info msg="StartContainer for \"ee8cc913b40b04ae0155f7a1f5cadf7194017b86a4969f36f5714838addc860c\" returns successfully" May 27 17:15:14.610204 containerd[1533]: time="2025-05-27T17:15:14.606367140Z" level=info msg="StartContainer for \"9083851fcbbfe0e2fc5eac82a0ed89d88d8bf14732382749b8803f5eded08142\" returns successfully" May 27 17:15:15.070610 kubelet[2626]: E0527 17:15:15.070572 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:15:15.074537 kubelet[2626]: E0527 17:15:15.074496 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:15:15.082605 kubelet[2626]: I0527 17:15:15.082561 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-th9r7" podStartSLOduration=19.082548886 podStartE2EDuration="19.082548886s" podCreationTimestamp="2025-05-27 17:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:15:15.081867907 +0000 UTC m=+26.215520822" watchObservedRunningTime="2025-05-27 17:15:15.082548886 +0000 UTC m=+26.216201721" May 27 17:15:15.092900 kubelet[2626]: I0527 17:15:15.092852 2626 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-c9knb" podStartSLOduration=19.09284171 podStartE2EDuration="19.09284171s" podCreationTimestamp="2025-05-27 17:14:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:15:15.092449126 +0000 UTC m=+26.226101961" watchObservedRunningTime="2025-05-27 17:15:15.09284171 +0000 UTC m=+26.226494545" May 27 17:15:16.076342 kubelet[2626]: E0527 17:15:16.076275 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:15:16.076687 kubelet[2626]: E0527 17:15:16.076458 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:15:17.078303 kubelet[2626]: E0527 17:15:17.078256 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:15:17.078641 kubelet[2626]: E0527 17:15:17.078329 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:15:19.866386 systemd[1]: Started sshd@7-10.0.0.109:22-10.0.0.1:58666.service - OpenSSH per-connection server daemon (10.0.0.1:58666). May 27 17:15:19.923003 sshd[3978]: Accepted publickey for core from 10.0.0.1 port 58666 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:19.924155 sshd-session[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:19.927797 systemd-logind[1507]: New session 8 of user core. 
May 27 17:15:19.939208 systemd[1]: Started session-8.scope - Session 8 of User core. May 27 17:15:20.069738 sshd[3980]: Connection closed by 10.0.0.1 port 58666 May 27 17:15:20.070218 sshd-session[3978]: pam_unix(sshd:session): session closed for user core May 27 17:15:20.073540 systemd[1]: sshd@7-10.0.0.109:22-10.0.0.1:58666.service: Deactivated successfully. May 27 17:15:20.075319 systemd[1]: session-8.scope: Deactivated successfully. May 27 17:15:20.076024 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit. May 27 17:15:20.077050 systemd-logind[1507]: Removed session 8. May 27 17:15:25.085350 systemd[1]: Started sshd@8-10.0.0.109:22-10.0.0.1:40112.service - OpenSSH per-connection server daemon (10.0.0.1:40112). May 27 17:15:25.145401 sshd[3994]: Accepted publickey for core from 10.0.0.1 port 40112 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:25.146545 sshd-session[3994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:25.150917 systemd-logind[1507]: New session 9 of user core. May 27 17:15:25.159206 systemd[1]: Started session-9.scope - Session 9 of User core. May 27 17:15:25.269500 sshd[3996]: Connection closed by 10.0.0.1 port 40112 May 27 17:15:25.269820 sshd-session[3994]: pam_unix(sshd:session): session closed for user core May 27 17:15:25.273302 systemd[1]: sshd@8-10.0.0.109:22-10.0.0.1:40112.service: Deactivated successfully. May 27 17:15:25.275507 systemd[1]: session-9.scope: Deactivated successfully. May 27 17:15:25.276282 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit. May 27 17:15:25.277314 systemd-logind[1507]: Removed session 9. May 27 17:15:30.286876 systemd[1]: Started sshd@9-10.0.0.109:22-10.0.0.1:40118.service - OpenSSH per-connection server daemon (10.0.0.1:40118). 
May 27 17:15:30.356666 sshd[4012]: Accepted publickey for core from 10.0.0.1 port 40118 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:30.359249 sshd-session[4012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:30.365460 systemd-logind[1507]: New session 10 of user core. May 27 17:15:30.371275 systemd[1]: Started session-10.scope - Session 10 of User core. May 27 17:15:30.510751 sshd[4014]: Connection closed by 10.0.0.1 port 40118 May 27 17:15:30.512727 sshd-session[4012]: pam_unix(sshd:session): session closed for user core May 27 17:15:30.517275 systemd[1]: sshd@9-10.0.0.109:22-10.0.0.1:40118.service: Deactivated successfully. May 27 17:15:30.519436 systemd[1]: session-10.scope: Deactivated successfully. May 27 17:15:30.521891 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. May 27 17:15:30.524447 systemd-logind[1507]: Removed session 10. May 27 17:15:35.530377 systemd[1]: Started sshd@10-10.0.0.109:22-10.0.0.1:60622.service - OpenSSH per-connection server daemon (10.0.0.1:60622). May 27 17:15:35.581775 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 60622 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:35.583199 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:35.588494 systemd-logind[1507]: New session 11 of user core. May 27 17:15:35.592256 systemd[1]: Started session-11.scope - Session 11 of User core. May 27 17:15:35.708265 sshd[4030]: Connection closed by 10.0.0.1 port 60622 May 27 17:15:35.708644 sshd-session[4028]: pam_unix(sshd:session): session closed for user core May 27 17:15:35.719553 systemd[1]: sshd@10-10.0.0.109:22-10.0.0.1:60622.service: Deactivated successfully. May 27 17:15:35.721879 systemd[1]: session-11.scope: Deactivated successfully. May 27 17:15:35.722933 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. 
May 27 17:15:35.727068 systemd-logind[1507]: Removed session 11. May 27 17:15:35.729023 systemd[1]: Started sshd@11-10.0.0.109:22-10.0.0.1:60638.service - OpenSSH per-connection server daemon (10.0.0.1:60638). May 27 17:15:35.800709 sshd[4044]: Accepted publickey for core from 10.0.0.1 port 60638 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:35.802008 sshd-session[4044]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:35.806875 systemd-logind[1507]: New session 12 of user core. May 27 17:15:35.820286 systemd[1]: Started session-12.scope - Session 12 of User core. May 27 17:15:35.990853 sshd[4046]: Connection closed by 10.0.0.1 port 60638 May 27 17:15:35.995170 sshd-session[4044]: pam_unix(sshd:session): session closed for user core May 27 17:15:36.005114 systemd[1]: sshd@11-10.0.0.109:22-10.0.0.1:60638.service: Deactivated successfully. May 27 17:15:36.009433 systemd[1]: session-12.scope: Deactivated successfully. May 27 17:15:36.012825 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. May 27 17:15:36.014807 systemd[1]: Started sshd@12-10.0.0.109:22-10.0.0.1:60646.service - OpenSSH per-connection server daemon (10.0.0.1:60646). May 27 17:15:36.019268 systemd-logind[1507]: Removed session 12. May 27 17:15:36.068972 sshd[4059]: Accepted publickey for core from 10.0.0.1 port 60646 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:36.070523 sshd-session[4059]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:36.075297 systemd-logind[1507]: New session 13 of user core. May 27 17:15:36.089288 systemd[1]: Started session-13.scope - Session 13 of User core. May 27 17:15:36.205625 sshd[4061]: Connection closed by 10.0.0.1 port 60646 May 27 17:15:36.205983 sshd-session[4059]: pam_unix(sshd:session): session closed for user core May 27 17:15:36.209639 systemd-logind[1507]: Session 13 logged out. 
Waiting for processes to exit. May 27 17:15:36.209907 systemd[1]: sshd@12-10.0.0.109:22-10.0.0.1:60646.service: Deactivated successfully. May 27 17:15:36.211753 systemd[1]: session-13.scope: Deactivated successfully. May 27 17:15:36.213612 systemd-logind[1507]: Removed session 13. May 27 17:15:41.220961 systemd[1]: Started sshd@13-10.0.0.109:22-10.0.0.1:60662.service - OpenSSH per-connection server daemon (10.0.0.1:60662). May 27 17:15:41.263837 sshd[4074]: Accepted publickey for core from 10.0.0.1 port 60662 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:41.265363 sshd-session[4074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:41.270278 systemd-logind[1507]: New session 14 of user core. May 27 17:15:41.282314 systemd[1]: Started session-14.scope - Session 14 of User core. May 27 17:15:41.411553 sshd[4076]: Connection closed by 10.0.0.1 port 60662 May 27 17:15:41.411945 sshd-session[4074]: pam_unix(sshd:session): session closed for user core May 27 17:15:41.415963 systemd[1]: sshd@13-10.0.0.109:22-10.0.0.1:60662.service: Deactivated successfully. May 27 17:15:41.420206 systemd[1]: session-14.scope: Deactivated successfully. May 27 17:15:41.422011 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. May 27 17:15:41.424595 systemd-logind[1507]: Removed session 14. May 27 17:15:46.424881 systemd[1]: Started sshd@14-10.0.0.109:22-10.0.0.1:41028.service - OpenSSH per-connection server daemon (10.0.0.1:41028). May 27 17:15:46.477338 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 41028 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:46.481284 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:46.485797 systemd-logind[1507]: New session 15 of user core. May 27 17:15:46.501266 systemd[1]: Started session-15.scope - Session 15 of User core. 
May 27 17:15:46.625539 sshd[4092]: Connection closed by 10.0.0.1 port 41028 May 27 17:15:46.625997 sshd-session[4090]: pam_unix(sshd:session): session closed for user core May 27 17:15:46.642637 systemd[1]: sshd@14-10.0.0.109:22-10.0.0.1:41028.service: Deactivated successfully. May 27 17:15:46.644415 systemd[1]: session-15.scope: Deactivated successfully. May 27 17:15:46.646733 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. May 27 17:15:46.650152 systemd[1]: Started sshd@15-10.0.0.109:22-10.0.0.1:41040.service - OpenSSH per-connection server daemon (10.0.0.1:41040). May 27 17:15:46.651374 systemd-logind[1507]: Removed session 15. May 27 17:15:46.714186 sshd[4106]: Accepted publickey for core from 10.0.0.1 port 41040 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:46.715473 sshd-session[4106]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:46.720065 systemd-logind[1507]: New session 16 of user core. May 27 17:15:46.733230 systemd[1]: Started session-16.scope - Session 16 of User core. May 27 17:15:46.950578 sshd[4108]: Connection closed by 10.0.0.1 port 41040 May 27 17:15:46.949749 sshd-session[4106]: pam_unix(sshd:session): session closed for user core May 27 17:15:46.957417 systemd[1]: sshd@15-10.0.0.109:22-10.0.0.1:41040.service: Deactivated successfully. May 27 17:15:46.960053 systemd[1]: session-16.scope: Deactivated successfully. May 27 17:15:46.963583 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit. May 27 17:15:46.965408 systemd[1]: Started sshd@16-10.0.0.109:22-10.0.0.1:41046.service - OpenSSH per-connection server daemon (10.0.0.1:41046). May 27 17:15:46.967110 systemd-logind[1507]: Removed session 16. 
May 27 17:15:47.019866 sshd[4119]: Accepted publickey for core from 10.0.0.1 port 41046 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:47.021421 sshd-session[4119]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:47.027711 systemd-logind[1507]: New session 17 of user core. May 27 17:15:47.034249 systemd[1]: Started session-17.scope - Session 17 of User core. May 27 17:15:47.801247 sshd[4121]: Connection closed by 10.0.0.1 port 41046 May 27 17:15:47.802105 sshd-session[4119]: pam_unix(sshd:session): session closed for user core May 27 17:15:47.813377 systemd[1]: sshd@16-10.0.0.109:22-10.0.0.1:41046.service: Deactivated successfully. May 27 17:15:47.815636 systemd[1]: session-17.scope: Deactivated successfully. May 27 17:15:47.816778 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit. May 27 17:15:47.822366 systemd[1]: Started sshd@17-10.0.0.109:22-10.0.0.1:41058.service - OpenSSH per-connection server daemon (10.0.0.1:41058). May 27 17:15:47.823802 systemd-logind[1507]: Removed session 17. May 27 17:15:47.878114 sshd[4143]: Accepted publickey for core from 10.0.0.1 port 41058 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:47.879330 sshd-session[4143]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:47.883266 systemd-logind[1507]: New session 18 of user core. May 27 17:15:47.900250 systemd[1]: Started session-18.scope - Session 18 of User core. May 27 17:15:48.131817 sshd[4145]: Connection closed by 10.0.0.1 port 41058 May 27 17:15:48.132245 sshd-session[4143]: pam_unix(sshd:session): session closed for user core May 27 17:15:48.141748 systemd[1]: sshd@17-10.0.0.109:22-10.0.0.1:41058.service: Deactivated successfully. May 27 17:15:48.144377 systemd[1]: session-18.scope: Deactivated successfully. May 27 17:15:48.146163 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit. 
May 27 17:15:48.151962 systemd[1]: Started sshd@18-10.0.0.109:22-10.0.0.1:41060.service - OpenSSH per-connection server daemon (10.0.0.1:41060). May 27 17:15:48.153510 systemd-logind[1507]: Removed session 18. May 27 17:15:48.205406 sshd[4157]: Accepted publickey for core from 10.0.0.1 port 41060 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:48.206727 sshd-session[4157]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:48.210558 systemd-logind[1507]: New session 19 of user core. May 27 17:15:48.216315 systemd[1]: Started session-19.scope - Session 19 of User core. May 27 17:15:48.327326 sshd[4159]: Connection closed by 10.0.0.1 port 41060 May 27 17:15:48.328673 sshd-session[4157]: pam_unix(sshd:session): session closed for user core May 27 17:15:48.331950 systemd[1]: sshd@18-10.0.0.109:22-10.0.0.1:41060.service: Deactivated successfully. May 27 17:15:48.333868 systemd[1]: session-19.scope: Deactivated successfully. May 27 17:15:48.336275 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit. May 27 17:15:48.338868 systemd-logind[1507]: Removed session 19. May 27 17:15:53.343203 systemd[1]: Started sshd@19-10.0.0.109:22-10.0.0.1:49444.service - OpenSSH per-connection server daemon (10.0.0.1:49444). May 27 17:15:53.383988 sshd[4177]: Accepted publickey for core from 10.0.0.1 port 49444 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:53.385152 sshd-session[4177]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:53.389520 systemd-logind[1507]: New session 20 of user core. May 27 17:15:53.404211 systemd[1]: Started session-20.scope - Session 20 of User core. 
May 27 17:15:53.511107 sshd[4179]: Connection closed by 10.0.0.1 port 49444 May 27 17:15:53.511412 sshd-session[4177]: pam_unix(sshd:session): session closed for user core May 27 17:15:53.514641 systemd[1]: sshd@19-10.0.0.109:22-10.0.0.1:49444.service: Deactivated successfully. May 27 17:15:53.516207 systemd[1]: session-20.scope: Deactivated successfully. May 27 17:15:53.519104 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit. May 27 17:15:53.520157 systemd-logind[1507]: Removed session 20. May 27 17:15:58.524209 systemd[1]: Started sshd@20-10.0.0.109:22-10.0.0.1:49454.service - OpenSSH per-connection server daemon (10.0.0.1:49454). May 27 17:15:58.566568 sshd[4195]: Accepted publickey for core from 10.0.0.1 port 49454 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:58.567679 sshd-session[4195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:58.572187 systemd-logind[1507]: New session 21 of user core. May 27 17:15:58.578204 systemd[1]: Started session-21.scope - Session 21 of User core. May 27 17:15:58.688312 sshd[4197]: Connection closed by 10.0.0.1 port 49454 May 27 17:15:58.688649 sshd-session[4195]: pam_unix(sshd:session): session closed for user core May 27 17:15:58.696135 systemd[1]: sshd@20-10.0.0.109:22-10.0.0.1:49454.service: Deactivated successfully. May 27 17:15:58.698431 systemd[1]: session-21.scope: Deactivated successfully. May 27 17:15:58.700037 systemd-logind[1507]: Session 21 logged out. Waiting for processes to exit. May 27 17:15:58.702365 systemd[1]: Started sshd@21-10.0.0.109:22-10.0.0.1:49468.service - OpenSSH per-connection server daemon (10.0.0.1:49468). May 27 17:15:58.703123 systemd-logind[1507]: Removed session 21. 
May 27 17:15:58.751515 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 49468 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:15:58.752604 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:15:58.756842 systemd-logind[1507]: New session 22 of user core. May 27 17:15:58.766318 systemd[1]: Started session-22.scope - Session 22 of User core. May 27 17:16:00.880317 containerd[1533]: time="2025-05-27T17:16:00.880030784Z" level=info msg="StopContainer for \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" with timeout 30 (s)" May 27 17:16:00.883219 containerd[1533]: time="2025-05-27T17:16:00.882665080Z" level=info msg="Stop container \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" with signal terminated" May 27 17:16:00.908149 containerd[1533]: time="2025-05-27T17:16:00.908091760Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 27 17:16:00.909617 systemd[1]: cri-containerd-c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d.scope: Deactivated successfully. 
May 27 17:16:00.912097 containerd[1533]: time="2025-05-27T17:16:00.912051224Z" level=info msg="received exit event container_id:\"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" id:\"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" pid:3211 exited_at:{seconds:1748366160 nanos:910361007}" May 27 17:16:00.912894 containerd[1533]: time="2025-05-27T17:16:00.912861395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" id:\"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" pid:3211 exited_at:{seconds:1748366160 nanos:910361007}" May 27 17:16:00.913989 containerd[1533]: time="2025-05-27T17:16:00.913963381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" id:\"eeaf6ef4be5e9ce5270d59f1a63495ec4157a2d149e5ccda601e297039e55efb\" pid:4239 exited_at:{seconds:1748366160 nanos:913772238}" May 27 17:16:00.916331 containerd[1533]: time="2025-05-27T17:16:00.916305543Z" level=info msg="StopContainer for \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" with timeout 2 (s)" May 27 17:16:00.916605 containerd[1533]: time="2025-05-27T17:16:00.916576959Z" level=info msg="Stop container \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" with signal terminated" May 27 17:16:00.923334 systemd-networkd[1441]: lxc_health: Link DOWN May 27 17:16:00.923340 systemd-networkd[1441]: lxc_health: Lost carrier May 27 17:16:00.937615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d-rootfs.mount: Deactivated successfully. May 27 17:16:00.941769 systemd[1]: cri-containerd-277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84.scope: Deactivated successfully. 
May 27 17:16:00.943004 containerd[1533]: time="2025-05-27T17:16:00.942946800Z" level=info msg="received exit event container_id:\"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" id:\"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" pid:3297 exited_at:{seconds:1748366160 nanos:942742977}" May 27 17:16:00.943418 containerd[1533]: time="2025-05-27T17:16:00.943394242Z" level=info msg="TaskExit event in podsandbox handler container_id:\"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" id:\"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" pid:3297 exited_at:{seconds:1748366160 nanos:942742977}" May 27 17:16:00.944130 systemd[1]: cri-containerd-277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84.scope: Consumed 6.516s CPU time, 122.3M memory peak, 136K read from disk, 12.9M written to disk. May 27 17:16:00.949335 containerd[1533]: time="2025-05-27T17:16:00.949269743Z" level=info msg="StopContainer for \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" returns successfully" May 27 17:16:00.953035 containerd[1533]: time="2025-05-27T17:16:00.952992626Z" level=info msg="StopPodSandbox for \"ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4\"" May 27 17:16:00.962456 containerd[1533]: time="2025-05-27T17:16:00.962403147Z" level=info msg="Container to stop \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:16:00.963251 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84-rootfs.mount: Deactivated successfully. 
May 27 17:16:00.972241 containerd[1533]: time="2025-05-27T17:16:00.972156759Z" level=info msg="StopContainer for \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" returns successfully" May 27 17:16:00.972859 containerd[1533]: time="2025-05-27T17:16:00.972831221Z" level=info msg="StopPodSandbox for \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\"" May 27 17:16:00.973004 containerd[1533]: time="2025-05-27T17:16:00.972984488Z" level=info msg="Container to stop \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:16:00.973043 containerd[1533]: time="2025-05-27T17:16:00.973004407Z" level=info msg="Container to stop \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:16:00.973097 containerd[1533]: time="2025-05-27T17:16:00.973014766Z" level=info msg="Container to stop \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:16:00.973127 containerd[1533]: time="2025-05-27T17:16:00.973097839Z" level=info msg="Container to stop \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:16:00.973127 containerd[1533]: time="2025-05-27T17:16:00.973107518Z" level=info msg="Container to stop \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" May 27 17:16:00.974195 systemd[1]: cri-containerd-ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4.scope: Deactivated successfully. 
May 27 17:16:00.976408 containerd[1533]: time="2025-05-27T17:16:00.976318245Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4\" id:\"ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4\" pid:2790 exit_status:137 exited_at:{seconds:1748366160 nanos:975842286}" May 27 17:16:00.979188 systemd[1]: cri-containerd-ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab.scope: Deactivated successfully. May 27 17:16:00.995947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab-rootfs.mount: Deactivated successfully. May 27 17:16:00.999844 containerd[1533]: time="2025-05-27T17:16:00.999801331Z" level=info msg="shim disconnected" id=ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab namespace=k8s.io May 27 17:16:01.005963 containerd[1533]: time="2025-05-27T17:16:00.999836248Z" level=warning msg="cleaning up after shim disconnected" id=ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab namespace=k8s.io May 27 17:16:01.005963 containerd[1533]: time="2025-05-27T17:16:01.005955073Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 27 17:16:01.008890 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4-rootfs.mount: Deactivated successfully. 
May 27 17:16:01.040163 containerd[1533]: time="2025-05-27T17:16:01.040096706Z" level=info msg="shim disconnected" id=ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4 namespace=k8s.io
May 27 17:16:01.040163 containerd[1533]: time="2025-05-27T17:16:01.040130584Z" level=warning msg="cleaning up after shim disconnected" id=ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4 namespace=k8s.io
May 27 17:16:01.040163 containerd[1533]: time="2025-05-27T17:16:01.040158822Z" level=info msg="cleaning up dead shim" namespace=k8s.io
May 27 17:16:01.042094 containerd[1533]: time="2025-05-27T17:16:01.042035112Z" level=info msg="TearDown network for sandbox \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" successfully"
May 27 17:16:01.042191 containerd[1533]: time="2025-05-27T17:16:01.042074629Z" level=info msg="StopPodSandbox for \"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" returns successfully"
May 27 17:16:01.042464 containerd[1533]: time="2025-05-27T17:16:01.042426800Z" level=info msg="received exit event sandbox_id:\"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" exit_status:137 exited_at:{seconds:1748366160 nanos:979497535}"
May 27 17:16:01.043888 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab-shm.mount: Deactivated successfully.
May 27 17:16:01.055179 containerd[1533]: time="2025-05-27T17:16:01.055046433Z" level=info msg="received exit event sandbox_id:\"ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4\" exit_status:137 exited_at:{seconds:1748366160 nanos:975842286}"
May 27 17:16:01.055293 containerd[1533]: time="2025-05-27T17:16:01.055069311Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" id:\"ca49122a07aa8cd9b58a72a316bd2729882823783dd20178ba4bda52a5a5ffab\" pid:2744 exit_status:137 exited_at:{seconds:1748366160 nanos:979497535}"
May 27 17:16:01.055749 containerd[1533]: time="2025-05-27T17:16:01.055717499Z" level=info msg="TearDown network for sandbox \"ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4\" successfully"
May 27 17:16:01.055749 containerd[1533]: time="2025-05-27T17:16:01.055744137Z" level=info msg="StopPodSandbox for \"ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4\" returns successfully"
May 27 17:16:01.146126 kubelet[2626]: I0527 17:16:01.145330 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4363c10c-f35c-4acc-bc63-e743732cad1f-cilium-config-path\") pod \"4363c10c-f35c-4acc-bc63-e743732cad1f\" (UID: \"4363c10c-f35c-4acc-bc63-e743732cad1f\") "
May 27 17:16:01.146126 kubelet[2626]: I0527 17:16:01.145369 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-lib-modules\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146126 kubelet[2626]: I0527 17:16:01.145388 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-hostproc\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146126 kubelet[2626]: I0527 17:16:01.145403 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-run\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146126 kubelet[2626]: I0527 17:16:01.145418 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-host-proc-sys-kernel\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146126 kubelet[2626]: I0527 17:16:01.145434 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-config-path\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146535 kubelet[2626]: I0527 17:16:01.145456 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e42f5e5f-fb1f-44ac-accf-95246ee7065b-clustermesh-secrets\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146535 kubelet[2626]: I0527 17:16:01.145472 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-bpf-maps\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146535 kubelet[2626]: I0527 17:16:01.145486 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-etc-cni-netd\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146535 kubelet[2626]: I0527 17:16:01.145502 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e42f5e5f-fb1f-44ac-accf-95246ee7065b-hubble-tls\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146535 kubelet[2626]: I0527 17:16:01.145518 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-598dd\" (UniqueName: \"kubernetes.io/projected/4363c10c-f35c-4acc-bc63-e743732cad1f-kube-api-access-598dd\") pod \"4363c10c-f35c-4acc-bc63-e743732cad1f\" (UID: \"4363c10c-f35c-4acc-bc63-e743732cad1f\") "
May 27 17:16:01.146535 kubelet[2626]: I0527 17:16:01.145533 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-host-proc-sys-net\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146654 kubelet[2626]: I0527 17:16:01.145548 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-cgroup\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146654 kubelet[2626]: I0527 17:16:01.145562 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cni-path\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146654 kubelet[2626]: I0527 17:16:01.145577 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tv7mx\" (UniqueName: \"kubernetes.io/projected/e42f5e5f-fb1f-44ac-accf-95246ee7065b-kube-api-access-tv7mx\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.146654 kubelet[2626]: I0527 17:16:01.145595 2626 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-xtables-lock\") pod \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\" (UID: \"e42f5e5f-fb1f-44ac-accf-95246ee7065b\") "
May 27 17:16:01.149336 kubelet[2626]: I0527 17:16:01.149051 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.149336 kubelet[2626]: I0527 17:16:01.149054 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.149336 kubelet[2626]: I0527 17:16:01.149065 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.149336 kubelet[2626]: I0527 17:16:01.149085 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.149336 kubelet[2626]: I0527 17:16:01.149108 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.149503 kubelet[2626]: I0527 17:16:01.149051 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.149503 kubelet[2626]: I0527 17:16:01.149125 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.149503 kubelet[2626]: I0527 17:16:01.149114 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-hostproc" (OuterVolumeSpecName: "hostproc") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.149503 kubelet[2626]: I0527 17:16:01.149126 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cni-path" (OuterVolumeSpecName: "cni-path") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.149503 kubelet[2626]: I0527 17:16:01.149166 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
May 27 17:16:01.160549 kubelet[2626]: I0527 17:16:01.160386 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42f5e5f-fb1f-44ac-accf-95246ee7065b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 17:16:01.160549 kubelet[2626]: I0527 17:16:01.160420 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4363c10c-f35c-4acc-bc63-e743732cad1f-kube-api-access-598dd" (OuterVolumeSpecName: "kube-api-access-598dd") pod "4363c10c-f35c-4acc-bc63-e743732cad1f" (UID: "4363c10c-f35c-4acc-bc63-e743732cad1f"). InnerVolumeSpecName "kube-api-access-598dd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 17:16:01.160549 kubelet[2626]: I0527 17:16:01.160421 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e42f5e5f-fb1f-44ac-accf-95246ee7065b-kube-api-access-tv7mx" (OuterVolumeSpecName: "kube-api-access-tv7mx") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "kube-api-access-tv7mx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
May 27 17:16:01.160757 kubelet[2626]: I0527 17:16:01.160736 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e42f5e5f-fb1f-44ac-accf-95246ee7065b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
May 27 17:16:01.170290 kubelet[2626]: I0527 17:16:01.170247 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "e42f5e5f-fb1f-44ac-accf-95246ee7065b" (UID: "e42f5e5f-fb1f-44ac-accf-95246ee7065b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 17:16:01.170903 kubelet[2626]: I0527 17:16:01.170877 2626 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4363c10c-f35c-4acc-bc63-e743732cad1f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4363c10c-f35c-4acc-bc63-e743732cad1f" (UID: "4363c10c-f35c-4acc-bc63-e743732cad1f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
May 27 17:16:01.171987 kubelet[2626]: I0527 17:16:01.171960 2626 scope.go:117] "RemoveContainer" containerID="c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d"
May 27 17:16:01.173838 containerd[1533]: time="2025-05-27T17:16:01.173800869Z" level=info msg="RemoveContainer for \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\""
May 27 17:16:01.177224 systemd[1]: Removed slice kubepods-burstable-pode42f5e5f_fb1f_44ac_accf_95246ee7065b.slice - libcontainer container kubepods-burstable-pode42f5e5f_fb1f_44ac_accf_95246ee7065b.slice.
May 27 17:16:01.177329 systemd[1]: kubepods-burstable-pode42f5e5f_fb1f_44ac_accf_95246ee7065b.slice: Consumed 6.688s CPU time, 122.6M memory peak, 140K read from disk, 12.9M written to disk.
May 27 17:16:01.185024 containerd[1533]: time="2025-05-27T17:16:01.184978696Z" level=info msg="RemoveContainer for \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" returns successfully"
May 27 17:16:01.185285 kubelet[2626]: I0527 17:16:01.185237 2626 scope.go:117] "RemoveContainer" containerID="c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d"
May 27 17:16:01.185469 containerd[1533]: time="2025-05-27T17:16:01.185436980Z" level=error msg="ContainerStatus for \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\": not found"
May 27 17:16:01.190088 kubelet[2626]: E0527 17:16:01.189993 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\": not found" containerID="c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d"
May 27 17:16:01.190161 kubelet[2626]: I0527 17:16:01.190078 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d"} err="failed to get container status \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\": rpc error: code = NotFound desc = an error occurred when try to find container \"c9ad4b22977ff3cb077b0a4faa7010907e4ede0dc23c8099d8b9abb0006fbe6d\": not found"
May 27 17:16:01.190161 kubelet[2626]: I0527 17:16:01.190113 2626 scope.go:117] "RemoveContainer" containerID="277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84"
May 27 17:16:01.194410 containerd[1533]: time="2025-05-27T17:16:01.194384745Z" level=info msg="RemoveContainer for \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\""
May 27 17:16:01.198083 containerd[1533]: time="2025-05-27T17:16:01.198004016Z" level=info msg="RemoveContainer for \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" returns successfully"
May 27 17:16:01.198303 kubelet[2626]: I0527 17:16:01.198276 2626 scope.go:117] "RemoveContainer" containerID="275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3"
May 27 17:16:01.199643 containerd[1533]: time="2025-05-27T17:16:01.199604249Z" level=info msg="RemoveContainer for \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\""
May 27 17:16:01.202983 containerd[1533]: time="2025-05-27T17:16:01.202949381Z" level=info msg="RemoveContainer for \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\" returns successfully"
May 27 17:16:01.203158 kubelet[2626]: I0527 17:16:01.203126 2626 scope.go:117] "RemoveContainer" containerID="a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1"
May 27 17:16:01.205136 containerd[1533]: time="2025-05-27T17:16:01.205108249Z" level=info msg="RemoveContainer for \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\""
May 27 17:16:01.208411 containerd[1533]: time="2025-05-27T17:16:01.208370228Z" level=info msg="RemoveContainer for \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\" returns successfully"
May 27 17:16:01.208583 kubelet[2626]: I0527 17:16:01.208545 2626 scope.go:117] "RemoveContainer" containerID="daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e"
May 27 17:16:01.209851 containerd[1533]: time="2025-05-27T17:16:01.209828912Z" level=info msg="RemoveContainer for \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\""
May 27 17:16:01.212451 containerd[1533]: time="2025-05-27T17:16:01.212375309Z" level=info msg="RemoveContainer for \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\" returns successfully"
May 27 17:16:01.212523 kubelet[2626]: I0527 17:16:01.212499 2626 scope.go:117] "RemoveContainer" containerID="6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b"
May 27 17:16:01.213696 containerd[1533]: time="2025-05-27T17:16:01.213671125Z" level=info msg="RemoveContainer for \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\""
May 27 17:16:01.216333 containerd[1533]: time="2025-05-27T17:16:01.216297475Z" level=info msg="RemoveContainer for \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\" returns successfully"
May 27 17:16:01.216484 kubelet[2626]: I0527 17:16:01.216457 2626 scope.go:117] "RemoveContainer" containerID="277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84"
May 27 17:16:01.216727 containerd[1533]: time="2025-05-27T17:16:01.216653927Z" level=error msg="ContainerStatus for \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\": not found"
May 27 17:16:01.216925 kubelet[2626]: E0527 17:16:01.216879 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\": not found" containerID="277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84"
May 27 17:16:01.216925 kubelet[2626]: I0527 17:16:01.216908 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84"} err="failed to get container status \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\": rpc error: code = NotFound desc = an error occurred when try to find container \"277074e85623485c0d2c8bf183bd8d57ac9ed6c697382a71cc704ff4a6f99e84\": not found"
May 27 17:16:01.216925 kubelet[2626]: I0527 17:16:01.216927 2626 scope.go:117] "RemoveContainer" containerID="275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3"
May 27 17:16:01.217236 containerd[1533]: time="2025-05-27T17:16:01.217102571Z" level=error msg="ContainerStatus for \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\": not found"
May 27 17:16:01.217449 kubelet[2626]: E0527 17:16:01.217338 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\": not found" containerID="275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3"
May 27 17:16:01.217449 kubelet[2626]: I0527 17:16:01.217365 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3"} err="failed to get container status \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\": rpc error: code = NotFound desc = an error occurred when try to find container \"275fe3e036d9f45612f9da53cdc5987af73bdd86709868038e3f41e2c67698b3\": not found"
May 27 17:16:01.217449 kubelet[2626]: I0527 17:16:01.217383 2626 scope.go:117] "RemoveContainer" containerID="a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1"
May 27 17:16:01.217660 containerd[1533]: time="2025-05-27T17:16:01.217615530Z" level=error msg="ContainerStatus for \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\": not found"
May 27 17:16:01.217792 kubelet[2626]: E0527 17:16:01.217770 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\": not found" containerID="a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1"
May 27 17:16:01.217824 kubelet[2626]: I0527 17:16:01.217799 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1"} err="failed to get container status \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\": rpc error: code = NotFound desc = an error occurred when try to find container \"a6a705686d63aa206f88fc7eba43099c744087d38369d1cad6710200b5318ad1\": not found"
May 27 17:16:01.217849 kubelet[2626]: I0527 17:16:01.217833 2626 scope.go:117] "RemoveContainer" containerID="daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e"
May 27 17:16:01.218048 containerd[1533]: time="2025-05-27T17:16:01.218019378Z" level=error msg="ContainerStatus for \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\": not found"
May 27 17:16:01.218199 kubelet[2626]: E0527 17:16:01.218178 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\": not found" containerID="daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e"
May 27 17:16:01.218234 kubelet[2626]: I0527 17:16:01.218200 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e"} err="failed to get container status \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\": rpc error: code = NotFound desc = an error occurred when try to find container \"daa483b8184d3a2dc9a5612cae595bd781b80948b809183ce0688942f88a6c5e\": not found"
May 27 17:16:01.218234 kubelet[2626]: I0527 17:16:01.218214 2626 scope.go:117] "RemoveContainer" containerID="6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b"
May 27 17:16:01.218508 containerd[1533]: time="2025-05-27T17:16:01.218383709Z" level=error msg="ContainerStatus for \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\": not found"
May 27 17:16:01.218581 kubelet[2626]: E0527 17:16:01.218559 2626 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\": not found" containerID="6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b"
May 27 17:16:01.218622 kubelet[2626]: I0527 17:16:01.218579 2626 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b"} err="failed to get container status \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"6e66826c8dc845dfe08e151df8c7f3e11fda984a805adc82b5b33848f37a1a0b\": not found"
May 27 17:16:01.245944 kubelet[2626]: I0527 17:16:01.245909 2626 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-bpf-maps\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.245944 kubelet[2626]: I0527 17:16:01.245935 2626 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.245944 kubelet[2626]: I0527 17:16:01.245947 2626 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e42f5e5f-fb1f-44ac-accf-95246ee7065b-hubble-tls\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246106 kubelet[2626]: I0527 17:16:01.245956 2626 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-598dd\" (UniqueName: \"kubernetes.io/projected/4363c10c-f35c-4acc-bc63-e743732cad1f-kube-api-access-598dd\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246106 kubelet[2626]: I0527 17:16:01.245965 2626 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246106 kubelet[2626]: I0527 17:16:01.245973 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246106 kubelet[2626]: I0527 17:16:01.245980 2626 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cni-path\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246106 kubelet[2626]: I0527 17:16:01.245988 2626 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tv7mx\" (UniqueName: \"kubernetes.io/projected/e42f5e5f-fb1f-44ac-accf-95246ee7065b-kube-api-access-tv7mx\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246106 kubelet[2626]: I0527 17:16:01.245995 2626 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-xtables-lock\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246106 kubelet[2626]: I0527 17:16:01.246003 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4363c10c-f35c-4acc-bc63-e743732cad1f-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246106 kubelet[2626]: I0527 17:16:01.246017 2626 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-lib-modules\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246263 kubelet[2626]: I0527 17:16:01.246025 2626 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-hostproc\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246263 kubelet[2626]: I0527 17:16:01.246036 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-run\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246263 kubelet[2626]: I0527 17:16:01.246044 2626 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e42f5e5f-fb1f-44ac-accf-95246ee7065b-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246263 kubelet[2626]: I0527 17:16:01.246051 2626 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e42f5e5f-fb1f-44ac-accf-95246ee7065b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.246263 kubelet[2626]: I0527 17:16:01.246079 2626 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e42f5e5f-fb1f-44ac-accf-95246ee7065b-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
May 27 17:16:01.472035 systemd[1]: Removed slice kubepods-besteffort-pod4363c10c_f35c_4acc_bc63_e743732cad1f.slice - libcontainer container kubepods-besteffort-pod4363c10c_f35c_4acc_bc63_e743732cad1f.slice.
May 27 17:16:01.934736 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ca2609f8588e06c53318d05f36ccb3cd9ad5c95b1cb4acc101024bd8497d69e4-shm.mount: Deactivated successfully.
May 27 17:16:01.934842 systemd[1]: var-lib-kubelet-pods-4363c10c\x2df35c\x2d4acc\x2dbc63\x2de743732cad1f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d598dd.mount: Deactivated successfully.
May 27 17:16:01.934893 systemd[1]: var-lib-kubelet-pods-e42f5e5f\x2dfb1f\x2d44ac\x2daccf\x2d95246ee7065b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtv7mx.mount: Deactivated successfully.
May 27 17:16:01.934943 systemd[1]: var-lib-kubelet-pods-e42f5e5f\x2dfb1f\x2d44ac\x2daccf\x2d95246ee7065b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
May 27 17:16:01.935014 systemd[1]: var-lib-kubelet-pods-e42f5e5f\x2dfb1f\x2d44ac\x2daccf\x2d95246ee7065b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
May 27 17:16:02.840332 sshd[4212]: Connection closed by 10.0.0.1 port 49468
May 27 17:16:02.840680 sshd-session[4210]: pam_unix(sshd:session): session closed for user core
May 27 17:16:02.851697 systemd[1]: sshd@21-10.0.0.109:22-10.0.0.1:49468.service: Deactivated successfully.
May 27 17:16:02.853442 systemd[1]: session-22.scope: Deactivated successfully.
May 27 17:16:02.853678 systemd[1]: session-22.scope: Consumed 1.448s CPU time, 24.9M memory peak.
May 27 17:16:02.854933 systemd-logind[1507]: Session 22 logged out. Waiting for processes to exit.
May 27 17:16:02.857112 systemd[1]: Started sshd@22-10.0.0.109:22-10.0.0.1:45174.service - OpenSSH per-connection server daemon (10.0.0.1:45174).
May 27 17:16:02.858994 systemd-logind[1507]: Removed session 22.
May 27 17:16:02.907793 sshd[4360]: Accepted publickey for core from 10.0.0.1 port 45174 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:16:02.909317 sshd-session[4360]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:16:02.914103 systemd-logind[1507]: New session 23 of user core. May 27 17:16:02.926234 systemd[1]: Started session-23.scope - Session 23 of User core. May 27 17:16:02.969945 kubelet[2626]: I0527 17:16:02.969899 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4363c10c-f35c-4acc-bc63-e743732cad1f" path="/var/lib/kubelet/pods/4363c10c-f35c-4acc-bc63-e743732cad1f/volumes" May 27 17:16:02.970302 kubelet[2626]: I0527 17:16:02.970281 2626 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e42f5e5f-fb1f-44ac-accf-95246ee7065b" path="/var/lib/kubelet/pods/e42f5e5f-fb1f-44ac-accf-95246ee7065b/volumes" May 27 17:16:04.021826 kubelet[2626]: E0527 17:16:04.021777 2626 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" May 27 17:16:04.395208 sshd[4362]: Connection closed by 10.0.0.1 port 45174 May 27 17:16:04.394469 sshd-session[4360]: pam_unix(sshd:session): session closed for user core May 27 17:16:04.408459 systemd[1]: sshd@22-10.0.0.109:22-10.0.0.1:45174.service: Deactivated successfully. May 27 17:16:04.414792 systemd[1]: session-23.scope: Deactivated successfully. May 27 17:16:04.415386 systemd[1]: session-23.scope: Consumed 1.348s CPU time, 26.3M memory peak. May 27 17:16:04.417295 systemd-logind[1507]: Session 23 logged out. Waiting for processes to exit. May 27 17:16:04.424374 systemd[1]: Started sshd@23-10.0.0.109:22-10.0.0.1:45186.service - OpenSSH per-connection server daemon (10.0.0.1:45186). May 27 17:16:04.426788 systemd-logind[1507]: Removed session 23. 
May 27 17:16:04.440460 systemd[1]: Created slice kubepods-burstable-poddb566636_0bd0_40c6_811e_3f06e9911414.slice - libcontainer container kubepods-burstable-poddb566636_0bd0_40c6_811e_3f06e9911414.slice. May 27 17:16:04.466401 kubelet[2626]: I0527 17:16:04.466281 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-lib-modules\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.466401 kubelet[2626]: I0527 17:16:04.466396 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmsms\" (UniqueName: \"kubernetes.io/projected/db566636-0bd0-40c6-811e-3f06e9911414-kube-api-access-dmsms\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.466922 kubelet[2626]: I0527 17:16:04.466420 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-etc-cni-netd\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.466922 kubelet[2626]: I0527 17:16:04.466436 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-host-proc-sys-net\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.466922 kubelet[2626]: I0527 17:16:04.466454 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-cilium-cgroup\") pod 
\"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.466922 kubelet[2626]: I0527 17:16:04.466469 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-cilium-run\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.466922 kubelet[2626]: I0527 17:16:04.466484 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-bpf-maps\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.466922 kubelet[2626]: I0527 17:16:04.466498 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-cni-path\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.467126 kubelet[2626]: I0527 17:16:04.466541 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/db566636-0bd0-40c6-811e-3f06e9911414-cilium-ipsec-secrets\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.467126 kubelet[2626]: I0527 17:16:04.466557 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-host-proc-sys-kernel\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.467126 kubelet[2626]: 
I0527 17:16:04.466645 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-hostproc\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.467126 kubelet[2626]: I0527 17:16:04.466684 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/db566636-0bd0-40c6-811e-3f06e9911414-xtables-lock\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.467126 kubelet[2626]: I0527 17:16:04.466704 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/db566636-0bd0-40c6-811e-3f06e9911414-clustermesh-secrets\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.467126 kubelet[2626]: I0527 17:16:04.466721 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/db566636-0bd0-40c6-811e-3f06e9911414-hubble-tls\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.467855 kubelet[2626]: I0527 17:16:04.466752 2626 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/db566636-0bd0-40c6-811e-3f06e9911414-cilium-config-path\") pod \"cilium-rhqck\" (UID: \"db566636-0bd0-40c6-811e-3f06e9911414\") " pod="kube-system/cilium-rhqck" May 27 17:16:04.483994 sshd[4374]: Accepted publickey for core from 10.0.0.1 port 45186 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:16:04.485307 
sshd-session[4374]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:16:04.489389 systemd-logind[1507]: New session 24 of user core. May 27 17:16:04.505251 systemd[1]: Started session-24.scope - Session 24 of User core. May 27 17:16:04.555493 sshd[4376]: Connection closed by 10.0.0.1 port 45186 May 27 17:16:04.556678 sshd-session[4374]: pam_unix(sshd:session): session closed for user core May 27 17:16:04.578802 systemd[1]: sshd@23-10.0.0.109:22-10.0.0.1:45186.service: Deactivated successfully. May 27 17:16:04.581227 systemd[1]: session-24.scope: Deactivated successfully. May 27 17:16:04.582001 systemd-logind[1507]: Session 24 logged out. Waiting for processes to exit. May 27 17:16:04.587944 systemd[1]: Started sshd@24-10.0.0.109:22-10.0.0.1:45188.service - OpenSSH per-connection server daemon (10.0.0.1:45188). May 27 17:16:04.588434 systemd-logind[1507]: Removed session 24. May 27 17:16:04.642216 sshd[4388]: Accepted publickey for core from 10.0.0.1 port 45188 ssh2: RSA SHA256:P4IJeIRssgXk4sLVjWVipI5XYGK59bekWX+Ak26Y4M8 May 27 17:16:04.643445 sshd-session[4388]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 27 17:16:04.647336 systemd-logind[1507]: New session 25 of user core. May 27 17:16:04.657899 systemd[1]: Started session-25.scope - Session 25 of User core. 
May 27 17:16:04.751652 kubelet[2626]: E0527 17:16:04.751520 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:04.753047 containerd[1533]: time="2025-05-27T17:16:04.753006557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhqck,Uid:db566636-0bd0-40c6-811e-3f06e9911414,Namespace:kube-system,Attempt:0,}" May 27 17:16:04.770634 containerd[1533]: time="2025-05-27T17:16:04.770580245Z" level=info msg="connecting to shim 8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b" address="unix:///run/containerd/s/a9fdcecf93900550179823220ca3ef0433aec91e40fec1c2efc29a0d6549a3f2" namespace=k8s.io protocol=ttrpc version=3 May 27 17:16:04.795254 systemd[1]: Started cri-containerd-8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b.scope - libcontainer container 8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b. 
May 27 17:16:04.817156 containerd[1533]: time="2025-05-27T17:16:04.817074917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-rhqck,Uid:db566636-0bd0-40c6-811e-3f06e9911414,Namespace:kube-system,Attempt:0,} returns sandbox id \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\"" May 27 17:16:04.818493 kubelet[2626]: E0527 17:16:04.818259 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:04.823710 containerd[1533]: time="2025-05-27T17:16:04.823553932Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" May 27 17:16:04.829081 containerd[1533]: time="2025-05-27T17:16:04.828909821Z" level=info msg="Container 7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:04.847927 containerd[1533]: time="2025-05-27T17:16:04.847877057Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b\"" May 27 17:16:04.848933 containerd[1533]: time="2025-05-27T17:16:04.848748920Z" level=info msg="StartContainer for \"7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b\"" May 27 17:16:04.849631 containerd[1533]: time="2025-05-27T17:16:04.849591545Z" level=info msg="connecting to shim 7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b" address="unix:///run/containerd/s/a9fdcecf93900550179823220ca3ef0433aec91e40fec1c2efc29a0d6549a3f2" protocol=ttrpc version=3 May 27 17:16:04.873246 systemd[1]: Started cri-containerd-7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b.scope - libcontainer 
container 7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b. May 27 17:16:04.917109 containerd[1533]: time="2025-05-27T17:16:04.916989006Z" level=info msg="StartContainer for \"7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b\" returns successfully" May 27 17:16:04.931622 systemd[1]: cri-containerd-7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b.scope: Deactivated successfully. May 27 17:16:04.942977 containerd[1533]: time="2025-05-27T17:16:04.942931305Z" level=info msg="received exit event container_id:\"7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b\" id:\"7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b\" pid:4453 exited_at:{seconds:1748366164 nanos:942609327}" May 27 17:16:04.943091 containerd[1533]: time="2025-05-27T17:16:04.943024979Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b\" id:\"7dde02e16ed4aaa6281e82954780cff979a731bb0dabf0b2b268771bb4be018b\" pid:4453 exited_at:{seconds:1748366164 nanos:942609327}" May 27 17:16:04.967993 kubelet[2626]: E0527 17:16:04.967961 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:05.186093 kubelet[2626]: E0527 17:16:05.184988 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:05.208883 containerd[1533]: time="2025-05-27T17:16:05.208838002Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" May 27 17:16:05.215646 containerd[1533]: time="2025-05-27T17:16:05.215603189Z" level=info msg="Container 
0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:05.220337 containerd[1533]: time="2025-05-27T17:16:05.220292262Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14\"" May 27 17:16:05.221069 containerd[1533]: time="2025-05-27T17:16:05.221034017Z" level=info msg="StartContainer for \"0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14\"" May 27 17:16:05.222022 containerd[1533]: time="2025-05-27T17:16:05.221930922Z" level=info msg="connecting to shim 0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14" address="unix:///run/containerd/s/a9fdcecf93900550179823220ca3ef0433aec91e40fec1c2efc29a0d6549a3f2" protocol=ttrpc version=3 May 27 17:16:05.247255 systemd[1]: Started cri-containerd-0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14.scope - libcontainer container 0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14. May 27 17:16:05.274126 containerd[1533]: time="2025-05-27T17:16:05.274053538Z" level=info msg="StartContainer for \"0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14\" returns successfully" May 27 17:16:05.278925 systemd[1]: cri-containerd-0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14.scope: Deactivated successfully. 
May 27 17:16:05.279733 containerd[1533]: time="2025-05-27T17:16:05.279282699Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14\" id:\"0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14\" pid:4497 exited_at:{seconds:1748366165 nanos:279041593}" May 27 17:16:05.279733 containerd[1533]: time="2025-05-27T17:16:05.279470967Z" level=info msg="received exit event container_id:\"0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14\" id:\"0cf8ab10bb75698c3f917e2486ea8ee8fb357bd301318d682ebc3ac56a2b6c14\" pid:4497 exited_at:{seconds:1748366165 nanos:279041593}" May 27 17:16:06.189539 kubelet[2626]: E0527 17:16:06.189505 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:06.194367 containerd[1533]: time="2025-05-27T17:16:06.194224247Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" May 27 17:16:06.204090 containerd[1533]: time="2025-05-27T17:16:06.201634346Z" level=info msg="Container 35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:06.210335 containerd[1533]: time="2025-05-27T17:16:06.210285535Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd\"" May 27 17:16:06.211264 containerd[1533]: time="2025-05-27T17:16:06.211228281Z" level=info msg="StartContainer for \"35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd\"" May 27 17:16:06.212853 containerd[1533]: time="2025-05-27T17:16:06.212824671Z" level=info 
msg="connecting to shim 35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd" address="unix:///run/containerd/s/a9fdcecf93900550179823220ca3ef0433aec91e40fec1c2efc29a0d6549a3f2" protocol=ttrpc version=3 May 27 17:16:06.235245 systemd[1]: Started cri-containerd-35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd.scope - libcontainer container 35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd. May 27 17:16:06.266789 systemd[1]: cri-containerd-35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd.scope: Deactivated successfully. May 27 17:16:06.269666 containerd[1533]: time="2025-05-27T17:16:06.269560691Z" level=info msg="StartContainer for \"35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd\" returns successfully" May 27 17:16:06.269842 containerd[1533]: time="2025-05-27T17:16:06.269792438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd\" id:\"35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd\" pid:4544 exited_at:{seconds:1748366166 nanos:269546772}" May 27 17:16:06.269842 containerd[1533]: time="2025-05-27T17:16:06.269806077Z" level=info msg="received exit event container_id:\"35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd\" id:\"35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd\" pid:4544 exited_at:{seconds:1748366166 nanos:269546772}" May 27 17:16:06.291342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35d8980971f7311d29ec435b3f47f7f3fc8808bc8cf44b4c56a707f88d4e89dd-rootfs.mount: Deactivated successfully. 
May 27 17:16:06.968449 kubelet[2626]: E0527 17:16:06.968348 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:07.195831 kubelet[2626]: E0527 17:16:07.195798 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:07.201787 containerd[1533]: time="2025-05-27T17:16:07.201751585Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" May 27 17:16:07.210920 containerd[1533]: time="2025-05-27T17:16:07.209605532Z" level=info msg="Container 5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:07.215543 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431716513.mount: Deactivated successfully. 
May 27 17:16:07.216831 containerd[1533]: time="2025-05-27T17:16:07.216776235Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df\"" May 27 17:16:07.217643 containerd[1533]: time="2025-05-27T17:16:07.217323567Z" level=info msg="StartContainer for \"5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df\"" May 27 17:16:07.218156 containerd[1533]: time="2025-05-27T17:16:07.218122125Z" level=info msg="connecting to shim 5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df" address="unix:///run/containerd/s/a9fdcecf93900550179823220ca3ef0433aec91e40fec1c2efc29a0d6549a3f2" protocol=ttrpc version=3 May 27 17:16:07.237295 systemd[1]: Started cri-containerd-5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df.scope - libcontainer container 5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df. May 27 17:16:07.260162 systemd[1]: cri-containerd-5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df.scope: Deactivated successfully. 
May 27 17:16:07.261205 containerd[1533]: time="2025-05-27T17:16:07.261025390Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df\" id:\"5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df\" pid:4584 exited_at:{seconds:1748366167 nanos:260344465}" May 27 17:16:07.261401 containerd[1533]: time="2025-05-27T17:16:07.261355932Z" level=info msg="received exit event container_id:\"5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df\" id:\"5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df\" pid:4584 exited_at:{seconds:1748366167 nanos:260344465}" May 27 17:16:07.269663 containerd[1533]: time="2025-05-27T17:16:07.269614418Z" level=info msg="StartContainer for \"5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df\" returns successfully" May 27 17:16:07.280595 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5015d8e1dc44eb63a893f019884afb6c8c5bdc5d4d1bc587bcda08311d06f0df-rootfs.mount: Deactivated successfully. 
May 27 17:16:08.201229 kubelet[2626]: E0527 17:16:08.201164 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:08.205960 containerd[1533]: time="2025-05-27T17:16:08.205439544Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" May 27 17:16:08.226485 containerd[1533]: time="2025-05-27T17:16:08.226441206Z" level=info msg="Container 032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3: CDI devices from CRI Config.CDIDevices: []" May 27 17:16:08.235658 containerd[1533]: time="2025-05-27T17:16:08.235610961Z" level=info msg="CreateContainer within sandbox \"8818423d0cc5165b4fe64240239c59384701c72b04fcc06ffc7c2a71a3a4189b\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3\"" May 27 17:16:08.237197 containerd[1533]: time="2025-05-27T17:16:08.237144846Z" level=info msg="StartContainer for \"032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3\"" May 27 17:16:08.237992 containerd[1533]: time="2025-05-27T17:16:08.237957127Z" level=info msg="connecting to shim 032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3" address="unix:///run/containerd/s/a9fdcecf93900550179823220ca3ef0433aec91e40fec1c2efc29a0d6549a3f2" protocol=ttrpc version=3 May 27 17:16:08.269353 systemd[1]: Started cri-containerd-032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3.scope - libcontainer container 032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3. 
May 27 17:16:08.298258 containerd[1533]: time="2025-05-27T17:16:08.298220565Z" level=info msg="StartContainer for \"032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3\" returns successfully" May 27 17:16:08.351431 containerd[1533]: time="2025-05-27T17:16:08.351357428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3\" id:\"5efc00b0c2e99fbbfdecde297ab1127508fc74adbaf37c3755aa78ebd721f1bd\" pid:4653 exited_at:{seconds:1748366168 nanos:350219483}" May 27 17:16:08.562085 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) May 27 17:16:09.206809 kubelet[2626]: E0527 17:16:09.206774 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:09.230155 kubelet[2626]: I0527 17:16:09.230086 2626 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-rhqck" podStartSLOduration=5.23004692 podStartE2EDuration="5.23004692s" podCreationTimestamp="2025-05-27 17:16:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-27 17:16:09.221243073 +0000 UTC m=+80.354895908" watchObservedRunningTime="2025-05-27 17:16:09.23004692 +0000 UTC m=+80.363699755" May 27 17:16:10.753268 kubelet[2626]: E0527 17:16:10.753239 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:11.092736 containerd[1533]: time="2025-05-27T17:16:11.092585753Z" level=info msg="TaskExit event in podsandbox handler container_id:\"032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3\" id:\"4912c6d53745c5dda75b64a78a70afc0a93b039d658d2d36d2a0ef0fe24d6bd1\" pid:5063 exit_status:1 
exited_at:{seconds:1748366171 nanos:92196328}" May 27 17:16:11.108299 kubelet[2626]: E0527 17:16:11.108261 2626 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:48656->127.0.0.1:44065: write tcp 127.0.0.1:48656->127.0.0.1:44065: write: broken pipe May 27 17:16:11.449446 systemd-networkd[1441]: lxc_health: Link UP May 27 17:16:11.452215 systemd-networkd[1441]: lxc_health: Gained carrier May 27 17:16:12.755906 kubelet[2626]: E0527 17:16:12.755870 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:13.221920 containerd[1533]: time="2025-05-27T17:16:13.221821968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3\" id:\"00100facbbaddc0748a7f6fa7bf92c2c30bc254f4d1f738fa31a89ad774b5f4a\" pid:5192 exited_at:{seconds:1748366173 nanos:221511738}" May 27 17:16:13.224946 kubelet[2626]: E0527 17:16:13.224850 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:13.495481 systemd-networkd[1441]: lxc_health: Gained IPv6LL May 27 17:16:13.967684 kubelet[2626]: E0527 17:16:13.967562 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:14.226104 kubelet[2626]: E0527 17:16:14.225987 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:14.968193 kubelet[2626]: E0527 17:16:14.968153 2626 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line 
is: 1.1.1.1 1.0.0.1 8.8.8.8" May 27 17:16:15.343345 containerd[1533]: time="2025-05-27T17:16:15.343307191Z" level=info msg="TaskExit event in podsandbox handler container_id:\"032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3\" id:\"c213b86136689ce1d05607401ab9b612d0fbc581fbd2f24dfde47041e19d7e28\" pid:5226 exited_at:{seconds:1748366175 nanos:342890521}" May 27 17:16:17.448351 containerd[1533]: time="2025-05-27T17:16:17.448306776Z" level=info msg="TaskExit event in podsandbox handler container_id:\"032c7f8f1752bc1dfe510247752e951d08fe26ac7373eb96529a1acf939f41b3\" id:\"941b6b34194d1f1fdf4c7a23fad8445766f832737437659057a92252707f24a5\" pid:5250 exited_at:{seconds:1748366177 nanos:447599908}" May 27 17:16:17.454037 sshd[4390]: Connection closed by 10.0.0.1 port 45188 May 27 17:16:17.454537 sshd-session[4388]: pam_unix(sshd:session): session closed for user core May 27 17:16:17.457562 systemd[1]: sshd@24-10.0.0.109:22-10.0.0.1:45188.service: Deactivated successfully. May 27 17:16:17.461809 systemd[1]: session-25.scope: Deactivated successfully. May 27 17:16:17.463642 systemd-logind[1507]: Session 25 logged out. Waiting for processes to exit. May 27 17:16:17.465887 systemd-logind[1507]: Removed session 25.