Sep 12 17:23:07.833377 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Sep 12 17:23:07.833399 kernel: Linux version 6.12.47-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Fri Sep 12 15:37:01 -00 2025
Sep 12 17:23:07.833408 kernel: KASLR enabled
Sep 12 17:23:07.833414 kernel: efi: EFI v2.7 by EDK II
Sep 12 17:23:07.833419 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb21fd18
Sep 12 17:23:07.833424 kernel: random: crng init done
Sep 12 17:23:07.833431 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Sep 12 17:23:07.833480 kernel: secureboot: Secure boot enabled
Sep 12 17:23:07.833487 kernel: ACPI: Early table checksum verification disabled
Sep 12 17:23:07.833495 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS )
Sep 12 17:23:07.833501 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Sep 12 17:23:07.833507 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:23:07.833512 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:23:07.833518 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:23:07.833525 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:23:07.833532 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:23:07.833538 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:23:07.833545 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:23:07.833551 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:23:07.833557 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Sep 12 17:23:07.833563 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Sep 12 17:23:07.833569 kernel: ACPI: Use ACPI SPCR as default console: No
Sep 12 17:23:07.833575 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:23:07.833581 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff]
Sep 12 17:23:07.833587 kernel: Zone ranges:
Sep 12 17:23:07.833594 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:23:07.833600 kernel: DMA32 empty
Sep 12 17:23:07.833606 kernel: Normal empty
Sep 12 17:23:07.833612 kernel: Device empty
Sep 12 17:23:07.833617 kernel: Movable zone start for each node
Sep 12 17:23:07.833623 kernel: Early memory node ranges
Sep 12 17:23:07.833629 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff]
Sep 12 17:23:07.833635 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff]
Sep 12 17:23:07.833641 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff]
Sep 12 17:23:07.833647 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff]
Sep 12 17:23:07.833653 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff]
Sep 12 17:23:07.833659 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff]
Sep 12 17:23:07.833666 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff]
Sep 12 17:23:07.833672 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff]
Sep 12 17:23:07.833678 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Sep 12 17:23:07.833687 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Sep 12 17:23:07.833694 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Sep 12 17:23:07.833700 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1
Sep 12 17:23:07.833706 kernel: psci: probing for conduit method from ACPI.
Sep 12 17:23:07.833714 kernel: psci: PSCIv1.1 detected in firmware.
Sep 12 17:23:07.833720 kernel: psci: Using standard PSCI v0.2 function IDs
Sep 12 17:23:07.833737 kernel: psci: Trusted OS migration not required
Sep 12 17:23:07.833743 kernel: psci: SMC Calling Convention v1.1
Sep 12 17:23:07.833750 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Sep 12 17:23:07.833756 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Sep 12 17:23:07.833763 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Sep 12 17:23:07.833769 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Sep 12 17:23:07.833776 kernel: Detected PIPT I-cache on CPU0
Sep 12 17:23:07.833784 kernel: CPU features: detected: GIC system register CPU interface
Sep 12 17:23:07.833791 kernel: CPU features: detected: Spectre-v4
Sep 12 17:23:07.833797 kernel: CPU features: detected: Spectre-BHB
Sep 12 17:23:07.833803 kernel: CPU features: kernel page table isolation forced ON by KASLR
Sep 12 17:23:07.833810 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Sep 12 17:23:07.833816 kernel: CPU features: detected: ARM erratum 1418040
Sep 12 17:23:07.833823 kernel: CPU features: detected: SSBS not fully self-synchronizing
Sep 12 17:23:07.833829 kernel: alternatives: applying boot alternatives
Sep 12 17:23:07.833837 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:23:07.833843 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Sep 12 17:23:07.833850 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Sep 12 17:23:07.833858 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Sep 12 17:23:07.833864 kernel: Fallback order for Node 0: 0
Sep 12 17:23:07.833871 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Sep 12 17:23:07.833877 kernel: Policy zone: DMA
Sep 12 17:23:07.833884 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Sep 12 17:23:07.833890 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Sep 12 17:23:07.833896 kernel: software IO TLB: area num 4.
Sep 12 17:23:07.833903 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Sep 12 17:23:07.833909 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB)
Sep 12 17:23:07.833916 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Sep 12 17:23:07.833922 kernel: rcu: Preemptible hierarchical RCU implementation.
Sep 12 17:23:07.833929 kernel: rcu: RCU event tracing is enabled.
Sep 12 17:23:07.833937 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Sep 12 17:23:07.833944 kernel: Trampoline variant of Tasks RCU enabled.
Sep 12 17:23:07.833950 kernel: Tracing variant of Tasks RCU enabled.
Sep 12 17:23:07.833957 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Sep 12 17:23:07.833963 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Sep 12 17:23:07.833970 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:23:07.833976 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Sep 12 17:23:07.833983 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Sep 12 17:23:07.833989 kernel: GICv3: 256 SPIs implemented
Sep 12 17:23:07.833995 kernel: GICv3: 0 Extended SPIs implemented
Sep 12 17:23:07.834002 kernel: Root IRQ handler: gic_handle_irq
Sep 12 17:23:07.834009 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Sep 12 17:23:07.834016 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Sep 12 17:23:07.834022 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Sep 12 17:23:07.834029 kernel: ITS [mem 0x08080000-0x0809ffff]
Sep 12 17:23:07.834035 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Sep 12 17:23:07.834042 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Sep 12 17:23:07.834048 kernel: GICv3: using LPI property table @0x0000000040130000
Sep 12 17:23:07.834055 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Sep 12 17:23:07.834061 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Sep 12 17:23:07.834068 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:23:07.834074 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Sep 12 17:23:07.834081 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Sep 12 17:23:07.834088 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Sep 12 17:23:07.834103 kernel: arm-pv: using stolen time PV
Sep 12 17:23:07.834111 kernel: Console: colour dummy device 80x25
Sep 12 17:23:07.834118 kernel: ACPI: Core revision 20240827
Sep 12 17:23:07.834125 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Sep 12 17:23:07.834131 kernel: pid_max: default: 32768 minimum: 301
Sep 12 17:23:07.834138 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Sep 12 17:23:07.834144 kernel: landlock: Up and running.
Sep 12 17:23:07.834151 kernel: SELinux: Initializing.
Sep 12 17:23:07.834159 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:23:07.834166 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Sep 12 17:23:07.834173 kernel: rcu: Hierarchical SRCU implementation.
Sep 12 17:23:07.834179 kernel: rcu: Max phase no-delay instances is 400.
Sep 12 17:23:07.834186 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Sep 12 17:23:07.834193 kernel: Remapping and enabling EFI services.
Sep 12 17:23:07.834199 kernel: smp: Bringing up secondary CPUs ...
Sep 12 17:23:07.834205 kernel: Detected PIPT I-cache on CPU1
Sep 12 17:23:07.834212 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Sep 12 17:23:07.834220 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Sep 12 17:23:07.834231 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:23:07.834238 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Sep 12 17:23:07.834246 kernel: Detected PIPT I-cache on CPU2
Sep 12 17:23:07.834253 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Sep 12 17:23:07.834260 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Sep 12 17:23:07.834267 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:23:07.834274 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Sep 12 17:23:07.834281 kernel: Detected PIPT I-cache on CPU3
Sep 12 17:23:07.834289 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Sep 12 17:23:07.834296 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Sep 12 17:23:07.834303 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Sep 12 17:23:07.834310 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Sep 12 17:23:07.834317 kernel: smp: Brought up 1 node, 4 CPUs
Sep 12 17:23:07.834323 kernel: SMP: Total of 4 processors activated.
Sep 12 17:23:07.834330 kernel: CPU: All CPU(s) started at EL1
Sep 12 17:23:07.834337 kernel: CPU features: detected: 32-bit EL0 Support
Sep 12 17:23:07.834344 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Sep 12 17:23:07.834352 kernel: CPU features: detected: Common not Private translations
Sep 12 17:23:07.834359 kernel: CPU features: detected: CRC32 instructions
Sep 12 17:23:07.834366 kernel: CPU features: detected: Enhanced Virtualization Traps
Sep 12 17:23:07.834373 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Sep 12 17:23:07.834380 kernel: CPU features: detected: LSE atomic instructions
Sep 12 17:23:07.834387 kernel: CPU features: detected: Privileged Access Never
Sep 12 17:23:07.834394 kernel: CPU features: detected: RAS Extension Support
Sep 12 17:23:07.834401 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Sep 12 17:23:07.834408 kernel: alternatives: applying system-wide alternatives
Sep 12 17:23:07.834416 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Sep 12 17:23:07.834424 kernel: Memory: 2422436K/2572288K available (11136K kernel code, 2440K rwdata, 9068K rodata, 38912K init, 1038K bss, 127516K reserved, 16384K cma-reserved)
Sep 12 17:23:07.834431 kernel: devtmpfs: initialized
Sep 12 17:23:07.834438 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Sep 12 17:23:07.834445 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Sep 12 17:23:07.834452 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Sep 12 17:23:07.834459 kernel: 0 pages in range for non-PLT usage
Sep 12 17:23:07.834466 kernel: 508576 pages in range for PLT usage
Sep 12 17:23:07.834473 kernel: pinctrl core: initialized pinctrl subsystem
Sep 12 17:23:07.834480 kernel: SMBIOS 3.0.0 present.
Sep 12 17:23:07.834487 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Sep 12 17:23:07.834494 kernel: DMI: Memory slots populated: 1/1
Sep 12 17:23:07.834501 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Sep 12 17:23:07.834508 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Sep 12 17:23:07.834515 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Sep 12 17:23:07.834522 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Sep 12 17:23:07.834529 kernel: audit: initializing netlink subsys (disabled)
Sep 12 17:23:07.834536 kernel: audit: type=2000 audit(0.023:1): state=initialized audit_enabled=0 res=1
Sep 12 17:23:07.834545 kernel: thermal_sys: Registered thermal governor 'step_wise'
Sep 12 17:23:07.834551 kernel: cpuidle: using governor menu
Sep 12 17:23:07.834558 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Sep 12 17:23:07.834565 kernel: ASID allocator initialised with 32768 entries
Sep 12 17:23:07.834572 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Sep 12 17:23:07.834579 kernel: Serial: AMBA PL011 UART driver
Sep 12 17:23:07.834586 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Sep 12 17:23:07.834593 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Sep 12 17:23:07.834600 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Sep 12 17:23:07.834608 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Sep 12 17:23:07.834615 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Sep 12 17:23:07.834622 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Sep 12 17:23:07.834629 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Sep 12 17:23:07.834636 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Sep 12 17:23:07.834643 kernel: ACPI: Added _OSI(Module Device)
Sep 12 17:23:07.834649 kernel: ACPI: Added _OSI(Processor Device)
Sep 12 17:23:07.834656 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Sep 12 17:23:07.834663 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Sep 12 17:23:07.834671 kernel: ACPI: Interpreter enabled
Sep 12 17:23:07.834678 kernel: ACPI: Using GIC for interrupt routing
Sep 12 17:23:07.834685 kernel: ACPI: MCFG table detected, 1 entries
Sep 12 17:23:07.834692 kernel: ACPI: CPU0 has been hot-added
Sep 12 17:23:07.834699 kernel: ACPI: CPU1 has been hot-added
Sep 12 17:23:07.834705 kernel: ACPI: CPU2 has been hot-added
Sep 12 17:23:07.834712 kernel: ACPI: CPU3 has been hot-added
Sep 12 17:23:07.834719 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Sep 12 17:23:07.834733 kernel: printk: legacy console [ttyAMA0] enabled
Sep 12 17:23:07.834743 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Sep 12 17:23:07.834876 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Sep 12 17:23:07.835483 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Sep 12 17:23:07.835554 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Sep 12 17:23:07.835611 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Sep 12 17:23:07.835667 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Sep 12 17:23:07.835676 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Sep 12 17:23:07.835688 kernel: PCI host bridge to bus 0000:00
Sep 12 17:23:07.835779 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Sep 12 17:23:07.835835 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Sep 12 17:23:07.835886 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Sep 12 17:23:07.835937 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Sep 12 17:23:07.836020 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Sep 12 17:23:07.836088 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Sep 12 17:23:07.836167 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Sep 12 17:23:07.836227 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Sep 12 17:23:07.836290 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Sep 12 17:23:07.836369 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Sep 12 17:23:07.836474 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Sep 12 17:23:07.836539 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Sep 12 17:23:07.836598 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Sep 12 17:23:07.836650 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Sep 12 17:23:07.836702 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Sep 12 17:23:07.836711 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Sep 12 17:23:07.836718 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Sep 12 17:23:07.836761 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Sep 12 17:23:07.836769 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Sep 12 17:23:07.836776 kernel: iommu: Default domain type: Translated
Sep 12 17:23:07.836783 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Sep 12 17:23:07.836793 kernel: efivars: Registered efivars operations
Sep 12 17:23:07.836800 kernel: vgaarb: loaded
Sep 12 17:23:07.836807 kernel: clocksource: Switched to clocksource arch_sys_counter
Sep 12 17:23:07.836814 kernel: VFS: Disk quotas dquot_6.6.0
Sep 12 17:23:07.836821 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Sep 12 17:23:07.836828 kernel: pnp: PnP ACPI init
Sep 12 17:23:07.836899 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Sep 12 17:23:07.836909 kernel: pnp: PnP ACPI: found 1 devices
Sep 12 17:23:07.836918 kernel: NET: Registered PF_INET protocol family
Sep 12 17:23:07.836925 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Sep 12 17:23:07.836932 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Sep 12 17:23:07.836940 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Sep 12 17:23:07.836947 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Sep 12 17:23:07.836954 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Sep 12 17:23:07.836961 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Sep 12 17:23:07.836968 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:23:07.836975 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Sep 12 17:23:07.836984 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Sep 12 17:23:07.836991 kernel: PCI: CLS 0 bytes, default 64
Sep 12 17:23:07.836998 kernel: kvm [1]: HYP mode not available
Sep 12 17:23:07.837005 kernel: Initialise system trusted keyrings
Sep 12 17:23:07.837013 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Sep 12 17:23:07.837020 kernel: Key type asymmetric registered
Sep 12 17:23:07.837027 kernel: Asymmetric key parser 'x509' registered
Sep 12 17:23:07.837034 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Sep 12 17:23:07.837041 kernel: io scheduler mq-deadline registered
Sep 12 17:23:07.837049 kernel: io scheduler kyber registered
Sep 12 17:23:07.837057 kernel: io scheduler bfq registered
Sep 12 17:23:07.837065 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Sep 12 17:23:07.837072 kernel: ACPI: button: Power Button [PWRB]
Sep 12 17:23:07.837082 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Sep 12 17:23:07.837159 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Sep 12 17:23:07.837169 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Sep 12 17:23:07.837176 kernel: thunder_xcv, ver 1.0
Sep 12 17:23:07.837184 kernel: thunder_bgx, ver 1.0
Sep 12 17:23:07.837192 kernel: nicpf, ver 1.0
Sep 12 17:23:07.837199 kernel: nicvf, ver 1.0
Sep 12 17:23:07.837270 kernel: rtc-efi rtc-efi.0: registered as rtc0
Sep 12 17:23:07.837325 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-12T17:23:07 UTC (1757697787)
Sep 12 17:23:07.837335 kernel: hid: raw HID events driver (C) Jiri Kosina
Sep 12 17:23:07.837342 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Sep 12 17:23:07.837349 kernel: watchdog: NMI not fully supported
Sep 12 17:23:07.837356 kernel: watchdog: Hard watchdog permanently disabled
Sep 12 17:23:07.837365 kernel: NET: Registered PF_INET6 protocol family
Sep 12 17:23:07.837372 kernel: Segment Routing with IPv6
Sep 12 17:23:07.837379 kernel: In-situ OAM (IOAM) with IPv6
Sep 12 17:23:07.837385 kernel: NET: Registered PF_PACKET protocol family
Sep 12 17:23:07.837392 kernel: Key type dns_resolver registered
Sep 12 17:23:07.837399 kernel: registered taskstats version 1
Sep 12 17:23:07.837406 kernel: Loading compiled-in X.509 certificates
Sep 12 17:23:07.837413 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.47-flatcar: 7675c1947f324bc6524fdc1ee0f8f5f343acfea7'
Sep 12 17:23:07.837420 kernel: Demotion targets for Node 0: null
Sep 12 17:23:07.837429 kernel: Key type .fscrypt registered
Sep 12 17:23:07.837436 kernel: Key type fscrypt-provisioning registered
Sep 12 17:23:07.837443 kernel: ima: No TPM chip found, activating TPM-bypass!
Sep 12 17:23:07.837449 kernel: ima: Allocated hash algorithm: sha1
Sep 12 17:23:07.837456 kernel: ima: No architecture policies found
Sep 12 17:23:07.837463 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Sep 12 17:23:07.837470 kernel: clk: Disabling unused clocks
Sep 12 17:23:07.837477 kernel: PM: genpd: Disabling unused power domains
Sep 12 17:23:07.837484 kernel: Warning: unable to open an initial console.
Sep 12 17:23:07.837493 kernel: Freeing unused kernel memory: 38912K
Sep 12 17:23:07.837500 kernel: Run /init as init process
Sep 12 17:23:07.837507 kernel: with arguments:
Sep 12 17:23:07.837513 kernel: /init
Sep 12 17:23:07.837520 kernel: with environment:
Sep 12 17:23:07.837527 kernel: HOME=/
Sep 12 17:23:07.837534 kernel: TERM=linux
Sep 12 17:23:07.837541 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Sep 12 17:23:07.837549 systemd[1]: Successfully made /usr/ read-only.
Sep 12 17:23:07.837561 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:23:07.837568 systemd[1]: Detected virtualization kvm.
Sep 12 17:23:07.837576 systemd[1]: Detected architecture arm64.
Sep 12 17:23:07.837583 systemd[1]: Running in initrd.
Sep 12 17:23:07.837590 systemd[1]: No hostname configured, using default hostname.
Sep 12 17:23:07.837598 systemd[1]: Hostname set to .
Sep 12 17:23:07.837605 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:23:07.837614 systemd[1]: Queued start job for default target initrd.target.
Sep 12 17:23:07.837621 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:23:07.837629 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:23:07.837636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Sep 12 17:23:07.837644 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:23:07.837651 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Sep 12 17:23:07.837660 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Sep 12 17:23:07.837669 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Sep 12 17:23:07.837677 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Sep 12 17:23:07.837684 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:23:07.837692 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:23:07.837699 systemd[1]: Reached target paths.target - Path Units.
Sep 12 17:23:07.837707 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:23:07.837714 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:23:07.837732 systemd[1]: Reached target timers.target - Timer Units.
Sep 12 17:23:07.837743 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Sep 12 17:23:07.837750 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Sep 12 17:23:07.837758 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Sep 12 17:23:07.837765 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Sep 12 17:23:07.837773 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:23:07.837780 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:23:07.837788 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:23:07.837795 systemd[1]: Reached target sockets.target - Socket Units.
Sep 12 17:23:07.837803 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Sep 12 17:23:07.837811 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:23:07.837819 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Sep 12 17:23:07.837827 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Sep 12 17:23:07.837834 systemd[1]: Starting systemd-fsck-usr.service...
Sep 12 17:23:07.837842 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:23:07.837849 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:23:07.837857 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Sep 12 17:23:07.837865 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Sep 12 17:23:07.837874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:23:07.837881 systemd[1]: Finished systemd-fsck-usr.service.
Sep 12 17:23:07.837889 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Sep 12 17:23:07.837912 systemd-journald[248]: Collecting audit messages is disabled.
Sep 12 17:23:07.837932 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Sep 12 17:23:07.837940 kernel: Bridge firewalling registered
Sep 12 17:23:07.837948 systemd-journald[248]: Journal started
Sep 12 17:23:07.837966 systemd-journald[248]: Runtime Journal (/run/log/journal/8b2420c1e4064066afc203abbb13767a) is 6M, max 48.5M, 42.4M free.
Sep 12 17:23:07.822300 systemd-modules-load[249]: Inserted module 'overlay'
Sep 12 17:23:07.837033 systemd-modules-load[249]: Inserted module 'br_netfilter'
Sep 12 17:23:07.843591 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:23:07.843610 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:23:07.844965 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:23:07.846298 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Sep 12 17:23:07.851503 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Sep 12 17:23:07.853323 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:23:07.855513 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:23:07.868562 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:23:07.877211 systemd-tmpfiles[270]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Sep 12 17:23:07.879087 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:23:07.881927 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:23:07.887757 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:23:07.890792 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:23:07.893417 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Sep 12 17:23:07.897217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:23:07.925449 dracut-cmdline[287]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=9b01894f6bb04aff3ec9b8554b3ae56a087d51961f1a01981bc4d4f54ccefc09
Sep 12 17:23:07.943403 systemd-resolved[288]: Positive Trust Anchors:
Sep 12 17:23:07.943421 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:23:07.943452 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:23:07.949215 systemd-resolved[288]: Defaulting to hostname 'linux'.
Sep 12 17:23:07.950279 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:23:07.953146 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:23:07.997749 kernel: SCSI subsystem initialized
Sep 12 17:23:08.002739 kernel: Loading iSCSI transport class v2.0-870.
Sep 12 17:23:08.011059 kernel: iscsi: registered transport (tcp)
Sep 12 17:23:08.023864 kernel: iscsi: registered transport (qla4xxx)
Sep 12 17:23:08.023930 kernel: QLogic iSCSI HBA Driver
Sep 12 17:23:08.040351 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:23:08.056148 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:23:08.057807 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:23:08.109118 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:23:08.112498 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Sep 12 17:23:08.182776 kernel: raid6: neonx8 gen() 15251 MB/s Sep 12 17:23:08.199766 kernel: raid6: neonx4 gen() 15476 MB/s Sep 12 17:23:08.216755 kernel: raid6: neonx2 gen() 13182 MB/s Sep 12 17:23:08.233765 kernel: raid6: neonx1 gen() 10167 MB/s Sep 12 17:23:08.250778 kernel: raid6: int64x8 gen() 6848 MB/s Sep 12 17:23:08.267778 kernel: raid6: int64x4 gen() 7287 MB/s Sep 12 17:23:08.284768 kernel: raid6: int64x2 gen() 6061 MB/s Sep 12 17:23:08.302018 kernel: raid6: int64x1 gen() 5025 MB/s Sep 12 17:23:08.302060 kernel: raid6: using algorithm neonx4 gen() 15476 MB/s Sep 12 17:23:08.319997 kernel: raid6: .... xor() 12338 MB/s, rmw enabled Sep 12 17:23:08.320021 kernel: raid6: using neon recovery algorithm Sep 12 17:23:08.325747 kernel: xor: measuring software checksum speed Sep 12 17:23:08.325778 kernel: 8regs : 20510 MB/sec Sep 12 17:23:08.327076 kernel: 32regs : 19057 MB/sec Sep 12 17:23:08.327103 kernel: arm64_neon : 27955 MB/sec Sep 12 17:23:08.327117 kernel: xor: using function: arm64_neon (27955 MB/sec) Sep 12 17:23:08.380784 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 12 17:23:08.386657 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Sep 12 17:23:08.389434 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 12 17:23:08.418136 systemd-udevd[500]: Using default interface naming scheme 'v255'. Sep 12 17:23:08.423850 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 12 17:23:08.426628 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 12 17:23:08.452546 dracut-pre-trigger[506]: rd.md=0: removing MD RAID activation Sep 12 17:23:08.475938 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:23:08.478492 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 12 17:23:08.531925 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. 
Sep 12 17:23:08.534404 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 12 17:23:08.591749 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 12 17:23:08.600170 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 12 17:23:08.602259 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 12 17:23:08.602388 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:23:08.609861 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 12 17:23:08.609894 kernel: GPT:9289727 != 19775487 Sep 12 17:23:08.609904 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 12 17:23:08.610782 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:23:08.615077 kernel: GPT:9289727 != 19775487 Sep 12 17:23:08.615108 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 12 17:23:08.615118 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:23:08.616802 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:23:08.633619 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 12 17:23:08.646540 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 12 17:23:08.652898 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 12 17:23:08.654276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:23:08.666538 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 12 17:23:08.667922 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 12 17:23:08.677711 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. 
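The GPT complaints above ("GPT:9289727 != 19775487") are the kernel noticing that the alternate-header location recorded in the primary GPT header no longer matches the disk's last sector, which is expected when a smaller disk image has been written to a larger block device. A hedged sketch of the same check against a synthetic image; the field offsets follow the UEFI GPT header layout, and the helper name is made up:

```python
import struct

SECTOR = 512

def gpt_alt_header_mismatch(image: bytes) -> tuple[int, int]:
    """Return (recorded_alt_lba, expected_last_lba) for a raw disk image.

    The primary GPT header sits at LBA 1; AlternateLBA is the 8-byte
    little-endian value at byte offset 32 of that header (UEFI spec).
    """
    hdr = image[SECTOR:2 * SECTOR]
    assert hdr[:8] == b"EFI PART", "no GPT signature at LBA 1"
    alt_lba = struct.unpack_from("<Q", hdr, 32)[0]
    last_lba = len(image) // SECTOR - 1
    return alt_lba, last_lba

# Build a toy 1 MiB image whose header claims the backup header is at LBA 1023,
# then "grow" the disk -- reproducing the 9289727 != 19775487 style of mismatch.
img = bytearray(1024 * SECTOR)
img[SECTOR:SECTOR + 8] = b"EFI PART"
struct.pack_into("<Q", img, SECTOR + 32, 1023)
img += bytes(1024 * SECTOR)  # device is now larger than the header expects
alt, last = gpt_alt_header_mismatch(bytes(img))
print(alt, last)  # 1023 2047
```

On a real system the fix is what the kernel message suggests: a GPT-aware tool (GNU Parted, or sgdisk's relocate option) rewrites the backup header at the true end of the disk.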
Sep 12 17:23:08.678929 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:23:08.681316 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:23:08.683931 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 12 17:23:08.686748 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 12 17:23:08.688832 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 12 17:23:08.707209 disk-uuid[592]: Primary Header is updated. Sep 12 17:23:08.707209 disk-uuid[592]: Secondary Entries is updated. Sep 12 17:23:08.707209 disk-uuid[592]: Secondary Header is updated. Sep 12 17:23:08.712752 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:23:08.712826 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:23:08.718755 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:23:09.718748 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 12 17:23:09.719203 disk-uuid[597]: The operation has completed successfully. Sep 12 17:23:09.740684 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 12 17:23:09.740893 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 12 17:23:09.770056 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 12 17:23:09.783582 sh[611]: Success Sep 12 17:23:09.796127 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 12 17:23:09.796171 kernel: device-mapper: uevent: version 1.0.3 Sep 12 17:23:09.797413 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 12 17:23:09.804458 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 12 17:23:09.828362 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. 
Sep 12 17:23:09.831238 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 12 17:23:09.847744 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 12 17:23:09.853943 kernel: BTRFS: device fsid 752cb955-bdfa-486a-ad02-b54d5e61d194 devid 1 transid 39 /dev/mapper/usr (253:0) scanned by mount (623) Sep 12 17:23:09.853976 kernel: BTRFS info (device dm-0): first mount of filesystem 752cb955-bdfa-486a-ad02-b54d5e61d194 Sep 12 17:23:09.855804 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:23:09.859740 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 12 17:23:09.859769 kernel: BTRFS info (device dm-0): enabling free space tree Sep 12 17:23:09.860601 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 12 17:23:09.861935 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:23:09.863450 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 12 17:23:09.864194 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 12 17:23:09.867851 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
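verity-setup above activates /dev/mapper/usr using the sha256 root hash passed as verity.usrhash= on the kernel command line: dm-verity hashes every data block into a Merkle tree and refuses to return any block whose hash path does not verify against that root. A toy single-level illustration of the idea (the real on-disk format built by veritysetup adds a superblock, a salt, and as many tree levels as the device size requires):

```python
import hashlib

BLOCK = 4096  # dm-verity's default data/hash block size

def toy_verity_root(data: bytes) -> str:
    # Illustration only: hash each zero-padded data block, then hash the
    # concatenated leaf digests into a single root value.
    leaves = b"".join(
        hashlib.sha256(data[i:i + BLOCK].ljust(BLOCK, b"\x00")).digest()
        for i in range(0, len(data), BLOCK)
    )
    return hashlib.sha256(leaves).hexdigest()

image = b"usr partition contents" * 1000
root = toy_verity_root(image)
print(root == toy_verity_root(image))             # unchanged data verifies
print(root == toy_verity_root(b"x" + image[1:]))  # any flipped byte changes the root
```

Because the root hash arrives on the (Secure Boot-verified) kernel command line, tampering with the read-only /usr partition is detectable at read time.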
Sep 12 17:23:09.890490 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (654)
Sep 12 17:23:09.890550 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:23:09.891560 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:23:09.894120 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 17:23:09.894156 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 17:23:09.898759 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:23:09.899588 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Sep 12 17:23:09.901839 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Sep 12 17:23:09.962789 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:23:09.967894 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:23:10.006361 ignition[702]: Ignition 2.21.0
Sep 12 17:23:10.006376 ignition[702]: Stage: fetch-offline
Sep 12 17:23:10.006409 ignition[702]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:23:10.006417 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:23:10.006623 ignition[702]: parsed url from cmdline: ""
Sep 12 17:23:10.006625 ignition[702]: no config URL provided
Sep 12 17:23:10.006630 ignition[702]: reading system config file "/usr/lib/ignition/user.ign"
Sep 12 17:23:10.006636 ignition[702]: no config at "/usr/lib/ignition/user.ign"
Sep 12 17:23:10.006654 ignition[702]: op(1): [started] loading QEMU firmware config module
Sep 12 17:23:10.006659 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg"
Sep 12 17:23:10.009575 systemd-networkd[798]: lo: Link UP
Sep 12 17:23:10.009579 systemd-networkd[798]: lo: Gained carrier
Sep 12 17:23:10.010318 systemd-networkd[798]: Enumeration completed
Sep 12 17:23:10.010400 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:23:10.011597 systemd[1]: Reached target network.target - Network.
Sep 12 17:23:10.011691 ignition[702]: op(1): [finished] loading QEMU firmware config module
Sep 12 17:23:10.013366 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:23:10.013369 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:23:10.014068 systemd-networkd[798]: eth0: Link UP
Sep 12 17:23:10.014223 systemd-networkd[798]: eth0: Gained carrier
Sep 12 17:23:10.014232 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:23:10.032784 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1
Sep 12 17:23:10.062865 ignition[702]: parsing config with SHA512: 56d11d3dffc14a562ea8c3024324f15ffb156494cf920d150207b1c5ee0914fa8af139e4e773d9520061da29e930ee3232fd939976e3ea1794f3132d2b26f9d2
Sep 12 17:23:10.068954 unknown[702]: fetched base config from "system"
Sep 12 17:23:10.068971 unknown[702]: fetched user config from "qemu"
Sep 12 17:23:10.069374 ignition[702]: fetch-offline: fetch-offline passed
Sep 12 17:23:10.069425 ignition[702]: Ignition finished successfully
Sep 12 17:23:10.070917 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Sep 12 17:23:10.072960 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Sep 12 17:23:10.073799 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
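Before parsing, Ignition logs a SHA-512 fingerprint of the raw config bytes it fetched (the 128-hex-digit value above), so one can later tell exactly which config a machine booted with. A small sketch with a hypothetical config:

```python
import hashlib
import json

# Hypothetical Ignition config for illustration; on this boot the real one
# arrived over the QEMU firmware-config (fw_cfg) channel.
config = json.dumps({"ignition": {"version": "3.4.0"}}, sort_keys=True).encode()

# SHA-512 over the raw config bytes yields the 128-hex-character fingerprint.
fingerprint = hashlib.sha512(config).hexdigest()
print(len(fingerprint))  # 128
```

Comparing this fingerprint against a hash of the config you intended to provision is a quick way to confirm the right config was applied.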
Sep 12 17:23:10.102757 ignition[811]: Ignition 2.21.0
Sep 12 17:23:10.102772 ignition[811]: Stage: kargs
Sep 12 17:23:10.102913 ignition[811]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:23:10.102922 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:23:10.105052 ignition[811]: kargs: kargs passed
Sep 12 17:23:10.105176 ignition[811]: Ignition finished successfully
Sep 12 17:23:10.108198 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Sep 12 17:23:10.110225 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Sep 12 17:23:10.135901 ignition[819]: Ignition 2.21.0
Sep 12 17:23:10.135917 ignition[819]: Stage: disks
Sep 12 17:23:10.136061 ignition[819]: no configs at "/usr/lib/ignition/base.d"
Sep 12 17:23:10.136069 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:23:10.137752 ignition[819]: disks: disks passed
Sep 12 17:23:10.137817 ignition[819]: Ignition finished successfully
Sep 12 17:23:10.140143 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Sep 12 17:23:10.141533 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Sep 12 17:23:10.143279 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Sep 12 17:23:10.145367 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:23:10.147287 systemd[1]: Reached target sysinit.target - System Initialization.
Sep 12 17:23:10.148942 systemd[1]: Reached target basic.target - Basic System.
Sep 12 17:23:10.151868 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Sep 12 17:23:10.181341 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Sep 12 17:23:10.185164 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Sep 12 17:23:10.189260 systemd[1]: Mounting sysroot.mount - /sysroot...
Sep 12 17:23:10.252767 kernel: EXT4-fs (vda9): mounted filesystem c902100c-52b7-422c-84ac-d834d4db2717 r/w with ordered data mode. Quota mode: none. Sep 12 17:23:10.252904 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 12 17:23:10.254143 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 12 17:23:10.256528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 12 17:23:10.258313 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 12 17:23:10.259439 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 12 17:23:10.259478 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 12 17:23:10.259501 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:23:10.270633 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 12 17:23:10.273851 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 12 17:23:10.278588 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (838) Sep 12 17:23:10.278610 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:23:10.278619 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 12 17:23:10.283039 kernel: BTRFS info (device vda6): turning on async discard Sep 12 17:23:10.283084 kernel: BTRFS info (device vda6): enabling free space tree Sep 12 17:23:10.284775 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Sep 12 17:23:10.313322 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory Sep 12 17:23:10.316659 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory Sep 12 17:23:10.321009 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory Sep 12 17:23:10.324680 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory Sep 12 17:23:10.398772 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Sep 12 17:23:10.401036 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 12 17:23:10.402788 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 12 17:23:10.417746 kernel: BTRFS info (device vda6): last unmount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7 Sep 12 17:23:10.431859 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 12 17:23:10.445562 ignition[951]: INFO : Ignition 2.21.0 Sep 12 17:23:10.445562 ignition[951]: INFO : Stage: mount Sep 12 17:23:10.447374 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:23:10.447374 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:23:10.447374 ignition[951]: INFO : mount: mount passed Sep 12 17:23:10.447374 ignition[951]: INFO : Ignition finished successfully Sep 12 17:23:10.447678 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 12 17:23:10.450744 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 12 17:23:10.853156 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 12 17:23:10.854572 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Sep 12 17:23:10.874467 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (964)
Sep 12 17:23:10.874501 kernel: BTRFS info (device vda6): first mount of filesystem 5f4a7913-42f7-487c-8331-8ab180fe9df7
Sep 12 17:23:10.875501 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Sep 12 17:23:10.878064 kernel: BTRFS info (device vda6): turning on async discard
Sep 12 17:23:10.878096 kernel: BTRFS info (device vda6): enabling free space tree
Sep 12 17:23:10.879440 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Sep 12 17:23:10.912813 ignition[981]: INFO : Ignition 2.21.0
Sep 12 17:23:10.913858 ignition[981]: INFO : Stage: files
Sep 12 17:23:10.914520 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d"
Sep 12 17:23:10.914520 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Sep 12 17:23:10.916715 ignition[981]: DEBUG : files: compiled without relabeling support, skipping
Sep 12 17:23:10.916715 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Sep 12 17:23:10.916715 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Sep 12 17:23:10.918447 unknown[981]: wrote ssh authorized keys file for user: core
Sep 12 17:23:10.920761 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Sep 12 17:23:10.920761 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Sep 12 17:23:10.920761 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Sep 12 17:23:10.920761 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:23:10.920761 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Sep 12 17:23:10.962656 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Sep 12 17:23:11.087859 systemd-networkd[798]: eth0: Gained IPv6LL
Sep 12 17:23:11.433665 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Sep 12 17:23:11.435763 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:23:11.435763 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1
Sep 12 17:23:11.612836 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Sep 12 17:23:11.672050 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Sep 12 17:23:11.672050 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Sep 12 17:23:11.676029 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:23:11.694197 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:23:11.694197 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:23:11.694197 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1
Sep 12 17:23:12.019468 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Sep 12 17:23:12.311689 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw"
Sep 12 17:23:12.311689 ignition[981]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Sep 12 17:23:12.315546 ignition[981]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:23:12.315546 ignition[981]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Sep 12 17:23:12.315546 ignition[981]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Sep 12 17:23:12.315546 ignition[981]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Sep 12 17:23:12.315546 ignition[981]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:23:12.315546 ignition[981]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Sep 12 17:23:12.315546 ignition[981]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Sep 12 17:23:12.315546 ignition[981]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:23:12.333291 ignition[981]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:23:12.336856 ignition[981]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Sep 12 17:23:12.339436 ignition[981]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Sep 12 17:23:12.339436 ignition[981]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Sep 12 17:23:12.339436 ignition[981]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Sep 12 17:23:12.339436 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:23:12.339436 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Sep 12 17:23:12.339436 ignition[981]: INFO : files: files passed
Sep 12 17:23:12.339436 ignition[981]: INFO : Ignition finished successfully
Sep 12 17:23:12.340905 systemd[1]: Finished ignition-files.service - Ignition (files).
Sep 12 17:23:12.343857 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Sep 12 17:23:12.349415 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Sep 12 17:23:12.366022 systemd[1]: ignition-quench.service: Deactivated successfully.
Sep 12 17:23:12.366150 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Sep 12 17:23:12.368471 initrd-setup-root-after-ignition[1009]: grep: /sysroot/oem/oem-release: No such file or directory
Sep 12 17:23:12.371702 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:23:12.371702 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:23:12.375029 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Sep 12 17:23:12.377650 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Sep 12 17:23:12.379130 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Sep 12 17:23:12.381900 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Sep 12 17:23:12.412304 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Sep 12 17:23:12.412411 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Sep 12 17:23:12.415003 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Sep 12 17:23:12.417575 systemd[1]: Reached target initrd.target - Initrd Default Target.
Sep 12 17:23:12.420322 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Sep 12 17:23:12.421256 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Sep 12 17:23:12.442749 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:23:12.445601 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 12 17:23:12.473040 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 12 17:23:12.474486 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 12 17:23:12.476952 systemd[1]: Stopped target timers.target - Timer Units. Sep 12 17:23:12.478913 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 12 17:23:12.479058 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 12 17:23:12.481784 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 12 17:23:12.483887 systemd[1]: Stopped target basic.target - Basic System. Sep 12 17:23:12.485596 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 12 17:23:12.487753 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 12 17:23:12.490004 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 12 17:23:12.492124 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 12 17:23:12.494248 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 12 17:23:12.496515 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 12 17:23:12.498670 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 12 17:23:12.500946 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 12 17:23:12.502853 systemd[1]: Stopped target swap.target - Swaps. Sep 12 17:23:12.504660 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 12 17:23:12.504806 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 12 17:23:12.507637 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Sep 12 17:23:12.509998 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 12 17:23:12.512217 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 12 17:23:12.512318 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 12 17:23:12.514667 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 12 17:23:12.514809 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 12 17:23:12.518093 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 12 17:23:12.518214 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 12 17:23:12.520811 systemd[1]: Stopped target paths.target - Path Units. Sep 12 17:23:12.522706 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 12 17:23:12.526790 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 12 17:23:12.530261 systemd[1]: Stopped target slices.target - Slice Units. Sep 12 17:23:12.533059 systemd[1]: Stopped target sockets.target - Socket Units. Sep 12 17:23:12.534825 systemd[1]: iscsid.socket: Deactivated successfully. Sep 12 17:23:12.534913 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 12 17:23:12.537057 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 12 17:23:12.537239 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 12 17:23:12.538962 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 12 17:23:12.539106 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 12 17:23:12.541373 systemd[1]: ignition-files.service: Deactivated successfully. Sep 12 17:23:12.541491 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 12 17:23:12.544154 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... 
Sep 12 17:23:12.546285 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 12 17:23:12.546409 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 12 17:23:12.570439 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 12 17:23:12.571474 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 12 17:23:12.571631 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 12 17:23:12.573847 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 12 17:23:12.573952 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 12 17:23:12.580206 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 12 17:23:12.580319 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 12 17:23:12.586456 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 12 17:23:12.588418 ignition[1037]: INFO : Ignition 2.21.0 Sep 12 17:23:12.588418 ignition[1037]: INFO : Stage: umount Sep 12 17:23:12.591213 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 12 17:23:12.591213 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 12 17:23:12.591213 ignition[1037]: INFO : umount: umount passed Sep 12 17:23:12.591213 ignition[1037]: INFO : Ignition finished successfully Sep 12 17:23:12.591958 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 12 17:23:12.593770 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 12 17:23:12.595886 systemd[1]: Stopped target network.target - Network. Sep 12 17:23:12.597533 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 12 17:23:12.598746 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 12 17:23:12.601915 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 12 17:23:12.601973 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Sep 12 17:23:12.603260 systemd[1]: ignition-setup.service: Deactivated successfully.
Sep 12 17:23:12.603326 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Sep 12 17:23:12.604428 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Sep 12 17:23:12.604471 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Sep 12 17:23:12.606588 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Sep 12 17:23:12.607785 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Sep 12 17:23:12.614345 systemd[1]: systemd-resolved.service: Deactivated successfully.
Sep 12 17:23:12.614454 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Sep 12 17:23:12.618793 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Sep 12 17:23:12.619072 systemd[1]: systemd-networkd.service: Deactivated successfully.
Sep 12 17:23:12.619204 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Sep 12 17:23:12.623843 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Sep 12 17:23:12.624440 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Sep 12 17:23:12.626088 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Sep 12 17:23:12.626132 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:23:12.629413 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Sep 12 17:23:12.630397 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Sep 12 17:23:12.630463 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Sep 12 17:23:12.633025 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:23:12.633081 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:23:12.636629 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Sep 12 17:23:12.636676 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:23:12.639009 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Sep 12 17:23:12.639060 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:23:12.642027 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:23:12.647307 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:23:12.647367 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:23:12.665352 systemd[1]: sysroot-boot.service: Deactivated successfully.
Sep 12 17:23:12.665860 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Sep 12 17:23:12.667085 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Sep 12 17:23:12.667129 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Sep 12 17:23:12.669271 systemd[1]: network-cleanup.service: Deactivated successfully.
Sep 12 17:23:12.669351 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Sep 12 17:23:12.671354 systemd[1]: systemd-udevd.service: Deactivated successfully.
Sep 12 17:23:12.672787 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:23:12.675786 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Sep 12 17:23:12.675852 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:23:12.677144 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Sep 12 17:23:12.677174 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:23:12.678905 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Sep 12 17:23:12.678954 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Sep 12 17:23:12.681949 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Sep 12 17:23:12.681996 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Sep 12 17:23:12.684909 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Sep 12 17:23:12.684963 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Sep 12 17:23:12.688861 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Sep 12 17:23:12.689956 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Sep 12 17:23:12.690012 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:23:12.693842 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Sep 12 17:23:12.693882 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:23:12.700805 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Sep 12 17:23:12.700853 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Sep 12 17:23:12.707516 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Sep 12 17:23:12.707565 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Sep 12 17:23:12.707597 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Sep 12 17:23:12.707909 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Sep 12 17:23:12.708006 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Sep 12 17:23:12.710128 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Sep 12 17:23:12.712851 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Sep 12 17:23:12.730846 systemd[1]: Switching root.
Sep 12 17:23:12.771115 systemd-journald[248]: Journal stopped
Sep 12 17:23:13.595747 systemd-journald[248]: Received SIGTERM from PID 1 (systemd).
Sep 12 17:23:13.595801 kernel: SELinux: policy capability network_peer_controls=1
Sep 12 17:23:13.595813 kernel: SELinux: policy capability open_perms=1
Sep 12 17:23:13.595822 kernel: SELinux: policy capability extended_socket_class=1
Sep 12 17:23:13.595834 kernel: SELinux: policy capability always_check_network=0
Sep 12 17:23:13.595849 kernel: SELinux: policy capability cgroup_seclabel=1
Sep 12 17:23:13.595858 kernel: SELinux: policy capability nnp_nosuid_transition=1
Sep 12 17:23:13.595869 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Sep 12 17:23:13.595879 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Sep 12 17:23:13.595888 kernel: SELinux: policy capability userspace_initial_context=0
Sep 12 17:23:13.595898 kernel: audit: type=1403 audit(1757697792.970:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Sep 12 17:23:13.595908 systemd[1]: Successfully loaded SELinux policy in 62.216ms.
Sep 12 17:23:13.595920 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.406ms.
Sep 12 17:23:13.595932 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Sep 12 17:23:13.595943 systemd[1]: Detected virtualization kvm.
Sep 12 17:23:13.595953 systemd[1]: Detected architecture arm64.
Sep 12 17:23:13.595963 systemd[1]: Detected first boot.
Sep 12 17:23:13.595974 systemd[1]: Initializing machine ID from VM UUID.
Sep 12 17:23:13.595984 zram_generator::config[1083]: No configuration found.
Sep 12 17:23:13.595995 kernel: NET: Registered PF_VSOCK protocol family
Sep 12 17:23:13.596004 systemd[1]: Populated /etc with preset unit settings.
Sep 12 17:23:13.596015 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Sep 12 17:23:13.596024 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Sep 12 17:23:13.596034 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Sep 12 17:23:13.596044 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:23:13.596055 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Sep 12 17:23:13.596076 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Sep 12 17:23:13.596087 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Sep 12 17:23:13.596096 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Sep 12 17:23:13.596106 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Sep 12 17:23:13.596116 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Sep 12 17:23:13.596126 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Sep 12 17:23:13.596136 systemd[1]: Created slice user.slice - User and Session Slice.
Sep 12 17:23:13.596147 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Sep 12 17:23:13.596157 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Sep 12 17:23:13.596167 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Sep 12 17:23:13.596177 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Sep 12 17:23:13.596190 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Sep 12 17:23:13.596200 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Sep 12 17:23:13.596210 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Sep 12 17:23:13.596220 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Sep 12 17:23:13.596231 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Sep 12 17:23:13.596241 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Sep 12 17:23:13.596251 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Sep 12 17:23:13.596260 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Sep 12 17:23:13.596270 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Sep 12 17:23:13.596280 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Sep 12 17:23:13.596293 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Sep 12 17:23:13.596302 systemd[1]: Reached target slices.target - Slice Units.
Sep 12 17:23:13.596312 systemd[1]: Reached target swap.target - Swaps.
Sep 12 17:23:13.596323 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Sep 12 17:23:13.596333 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Sep 12 17:23:13.596342 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Sep 12 17:23:13.596352 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Sep 12 17:23:13.596362 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Sep 12 17:23:13.596371 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Sep 12 17:23:13.596381 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Sep 12 17:23:13.596391 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Sep 12 17:23:13.596401 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Sep 12 17:23:13.596411 systemd[1]: Mounting media.mount - External Media Directory...
Sep 12 17:23:13.596423 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Sep 12 17:23:13.596432 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Sep 12 17:23:13.596442 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Sep 12 17:23:13.596452 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Sep 12 17:23:13.596462 systemd[1]: Reached target machines.target - Containers.
Sep 12 17:23:13.596471 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Sep 12 17:23:13.596481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:23:13.596490 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Sep 12 17:23:13.596501 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Sep 12 17:23:13.596511 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:23:13.596520 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:23:13.596530 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:23:13.596539 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Sep 12 17:23:13.596552 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:23:13.596562 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Sep 12 17:23:13.596572 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Sep 12 17:23:13.596583 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Sep 12 17:23:13.596593 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Sep 12 17:23:13.596602 systemd[1]: Stopped systemd-fsck-usr.service.
Sep 12 17:23:13.596612 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:23:13.596622 kernel: fuse: init (API version 7.41)
Sep 12 17:23:13.596631 kernel: ACPI: bus type drm_connector registered
Sep 12 17:23:13.596640 systemd[1]: Starting systemd-journald.service - Journal Service...
Sep 12 17:23:13.596650 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Sep 12 17:23:13.596659 kernel: loop: module loaded
Sep 12 17:23:13.596670 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Sep 12 17:23:13.596680 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Sep 12 17:23:13.596690 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Sep 12 17:23:13.596700 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Sep 12 17:23:13.596709 systemd[1]: verity-setup.service: Deactivated successfully.
Sep 12 17:23:13.596720 systemd[1]: Stopped verity-setup.service.
Sep 12 17:23:13.596756 systemd-journald[1158]: Collecting audit messages is disabled.
Sep 12 17:23:13.596777 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Sep 12 17:23:13.596787 systemd-journald[1158]: Journal started
Sep 12 17:23:13.596807 systemd-journald[1158]: Runtime Journal (/run/log/journal/8b2420c1e4064066afc203abbb13767a) is 6M, max 48.5M, 42.4M free.
Sep 12 17:23:13.603804 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Sep 12 17:23:13.603839 systemd[1]: Mounted media.mount - External Media Directory.
Sep 12 17:23:13.603851 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Sep 12 17:23:13.603862 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Sep 12 17:23:13.365276 systemd[1]: Queued start job for default target multi-user.target.
Sep 12 17:23:13.389756 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Sep 12 17:23:13.390149 systemd[1]: systemd-journald.service: Deactivated successfully.
Sep 12 17:23:13.607972 systemd[1]: Started systemd-journald.service - Journal Service.
Sep 12 17:23:13.609232 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Sep 12 17:23:13.610606 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Sep 12 17:23:13.613149 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Sep 12 17:23:13.614704 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Sep 12 17:23:13.614881 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Sep 12 17:23:13.618105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:23:13.618266 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:23:13.619653 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:23:13.619852 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:23:13.621236 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:23:13.621398 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:23:13.622888 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Sep 12 17:23:13.623041 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Sep 12 17:23:13.625069 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:23:13.625225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:23:13.627756 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Sep 12 17:23:13.629117 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Sep 12 17:23:13.630809 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Sep 12 17:23:13.632499 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Sep 12 17:23:13.644286 systemd[1]: Reached target network-pre.target - Preparation for Network.
Sep 12 17:23:13.646716 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Sep 12 17:23:13.649146 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Sep 12 17:23:13.650373 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Sep 12 17:23:13.650412 systemd[1]: Reached target local-fs.target - Local File Systems.
Sep 12 17:23:13.652412 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Sep 12 17:23:13.659567 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Sep 12 17:23:13.660903 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:23:13.661985 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Sep 12 17:23:13.664028 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Sep 12 17:23:13.665396 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:23:13.668872 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Sep 12 17:23:13.670150 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:23:13.673836 systemd-journald[1158]: Time spent on flushing to /var/log/journal/8b2420c1e4064066afc203abbb13767a is 16.205ms for 887 entries.
Sep 12 17:23:13.673836 systemd-journald[1158]: System Journal (/var/log/journal/8b2420c1e4064066afc203abbb13767a) is 8M, max 195.6M, 187.6M free.
Sep 12 17:23:13.702045 systemd-journald[1158]: Received client request to flush runtime journal.
Sep 12 17:23:13.671115 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:23:13.673341 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Sep 12 17:23:13.679018 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Sep 12 17:23:13.687891 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Sep 12 17:23:13.689443 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Sep 12 17:23:13.691568 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Sep 12 17:23:13.703193 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Sep 12 17:23:13.706861 kernel: loop0: detected capacity change from 0 to 211168
Sep 12 17:23:13.708447 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Sep 12 17:23:13.714428 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:23:13.718217 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Sep 12 17:23:13.720765 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Sep 12 17:23:13.721426 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Sep 12 17:23:13.731984 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Sep 12 17:23:13.735891 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Sep 12 17:23:13.744807 kernel: loop1: detected capacity change from 0 to 119320
Sep 12 17:23:13.747383 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Sep 12 17:23:13.760961 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 12 17:23:13.760980 systemd-tmpfiles[1215]: ACLs are not supported, ignoring.
Sep 12 17:23:13.764656 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Sep 12 17:23:13.771760 kernel: loop2: detected capacity change from 0 to 100608
Sep 12 17:23:13.793830 kernel: loop3: detected capacity change from 0 to 211168
Sep 12 17:23:13.800754 kernel: loop4: detected capacity change from 0 to 119320
Sep 12 17:23:13.805749 kernel: loop5: detected capacity change from 0 to 100608
Sep 12 17:23:13.810146 (sd-merge)[1222]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Sep 12 17:23:13.810521 (sd-merge)[1222]: Merged extensions into '/usr'.
Sep 12 17:23:13.814120 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)...
Sep 12 17:23:13.814136 systemd[1]: Reloading...
Sep 12 17:23:13.873824 zram_generator::config[1245]: No configuration found.
Sep 12 17:23:13.937292 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Sep 12 17:23:14.023399 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Sep 12 17:23:14.023709 systemd[1]: Reloading finished in 209 ms.
Sep 12 17:23:14.057496 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Sep 12 17:23:14.059034 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Sep 12 17:23:14.073020 systemd[1]: Starting ensure-sysext.service...
Sep 12 17:23:14.074909 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Sep 12 17:23:14.084690 systemd[1]: Reload requested from client PID 1282 ('systemctl') (unit ensure-sysext.service)...
Sep 12 17:23:14.084827 systemd[1]: Reloading...
Sep 12 17:23:14.089493 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Sep 12 17:23:14.089553 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Sep 12 17:23:14.089883 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Sep 12 17:23:14.090079 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Sep 12 17:23:14.090675 systemd-tmpfiles[1283]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Sep 12 17:23:14.090918 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 12 17:23:14.090971 systemd-tmpfiles[1283]: ACLs are not supported, ignoring.
Sep 12 17:23:14.093821 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:23:14.093836 systemd-tmpfiles[1283]: Skipping /boot
Sep 12 17:23:14.099684 systemd-tmpfiles[1283]: Detected autofs mount point /boot during canonicalization of boot.
Sep 12 17:23:14.099696 systemd-tmpfiles[1283]: Skipping /boot
Sep 12 17:23:14.131767 zram_generator::config[1313]: No configuration found.
Sep 12 17:23:14.255379 systemd[1]: Reloading finished in 170 ms.
Sep 12 17:23:14.275350 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Sep 12 17:23:14.282223 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Sep 12 17:23:14.290762 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:23:14.293204 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Sep 12 17:23:14.295596 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Sep 12 17:23:14.298972 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Sep 12 17:23:14.303049 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Sep 12 17:23:14.305634 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Sep 12 17:23:14.312850 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Sep 12 17:23:14.316640 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:23:14.320187 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:23:14.323165 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:23:14.326164 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:23:14.327477 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:23:14.327831 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:23:14.329458 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:23:14.329634 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:23:14.334427 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Sep 12 17:23:14.338548 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:23:14.338942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:23:14.340850 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:23:14.341088 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:23:14.347855 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:23:14.352079 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:23:14.353765 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:23:14.353971 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:23:14.354176 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:23:14.355614 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Sep 12 17:23:14.355675 systemd-udevd[1356]: Using default interface naming scheme 'v255'.
Sep 12 17:23:14.361737 augenrules[1381]: No rules
Sep 12 17:23:14.361946 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Sep 12 17:23:14.364818 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:23:14.364977 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:23:14.367021 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:23:14.367274 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:23:14.369360 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Sep 12 17:23:14.371378 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Sep 12 17:23:14.377856 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Sep 12 17:23:14.381453 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Sep 12 17:23:14.389560 systemd[1]: Finished ensure-sysext.service.
Sep 12 17:23:14.399232 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:23:14.400410 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Sep 12 17:23:14.401932 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Sep 12 17:23:14.403860 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Sep 12 17:23:14.415092 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Sep 12 17:23:14.418647 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Sep 12 17:23:14.419892 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Sep 12 17:23:14.419953 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Sep 12 17:23:14.424098 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Sep 12 17:23:14.426755 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Sep 12 17:23:14.429281 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Sep 12 17:23:14.429830 systemd[1]: modprobe@loop.service: Deactivated successfully.
Sep 12 17:23:14.430020 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Sep 12 17:23:14.436105 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Sep 12 17:23:14.436409 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Sep 12 17:23:14.439103 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Sep 12 17:23:14.439761 systemd[1]: modprobe@drm.service: Deactivated successfully.
Sep 12 17:23:14.439931 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Sep 12 17:23:14.448834 augenrules[1421]: /sbin/augenrules: No change
Sep 12 17:23:14.458916 augenrules[1452]: No rules
Sep 12 17:23:14.460269 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:23:14.460619 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:23:14.464825 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Sep 12 17:23:14.466169 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Sep 12 17:23:14.466623 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Sep 12 17:23:14.468967 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Sep 12 17:23:14.517613 systemd-resolved[1349]: Positive Trust Anchors:
Sep 12 17:23:14.517630 systemd-resolved[1349]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Sep 12 17:23:14.517662 systemd-resolved[1349]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Sep 12 17:23:14.522011 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Sep 12 17:23:14.524622 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Sep 12 17:23:14.526176 systemd-resolved[1349]: Defaulting to hostname 'linux'.
Sep 12 17:23:14.527385 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Sep 12 17:23:14.528814 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Sep 12 17:23:14.531706 systemd-networkd[1429]: lo: Link UP
Sep 12 17:23:14.531958 systemd-networkd[1429]: lo: Gained carrier
Sep 12 17:23:14.532789 systemd-networkd[1429]: Enumeration completed
Sep 12 17:23:14.533134 systemd[1]: Started systemd-networkd.service - Network Configuration.
Sep 12 17:23:14.535615 systemd[1]: Reached target network.target - Network.
Sep 12 17:23:14.537826 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Sep 12 17:23:14.539457 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Sep 12 17:23:14.540081 systemd-networkd[1429]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Sep 12 17:23:14.540918 systemd-networkd[1429]: eth0: Link UP Sep 12 17:23:14.541030 systemd-networkd[1429]: eth0: Gained carrier Sep 12 17:23:14.541048 systemd-networkd[1429]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 12 17:23:14.543117 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 12 17:23:14.546080 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 12 17:23:14.549368 systemd[1]: Reached target sysinit.target - System Initialization. Sep 12 17:23:14.551062 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 12 17:23:14.552661 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 12 17:23:14.554662 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 12 17:23:14.556367 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 12 17:23:14.556398 systemd[1]: Reached target paths.target - Path Units. Sep 12 17:23:14.557800 systemd[1]: Reached target time-set.target - System Time Set. Sep 12 17:23:14.559165 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 12 17:23:14.560536 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 12 17:23:14.560783 systemd-networkd[1429]: eth0: DHCPv4 address 10.0.0.105/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 12 17:23:14.562010 systemd-timesyncd[1434]: Network configuration changed, trying to establish connection. Sep 12 17:23:14.562013 systemd[1]: Reached target timers.target - Timer Units. Sep 12 17:23:14.565593 systemd-timesyncd[1434]: Contacted time server 10.0.0.1:123 (10.0.0.1). 
Sep 12 17:23:14.565804 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 12 17:23:14.565920 systemd-timesyncd[1434]: Initial clock synchronization to Fri 2025-09-12 17:23:14.405867 UTC. Sep 12 17:23:14.568510 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 12 17:23:14.571440 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 12 17:23:14.573139 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 12 17:23:14.574610 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 12 17:23:14.578017 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 12 17:23:14.579820 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 12 17:23:14.582214 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Sep 12 17:23:14.584048 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 12 17:23:14.585709 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 12 17:23:14.588302 systemd[1]: Reached target sockets.target - Socket Units. Sep 12 17:23:14.589488 systemd[1]: Reached target basic.target - Basic System. Sep 12 17:23:14.590825 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:23:14.590860 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 12 17:23:14.592866 systemd[1]: Starting containerd.service - containerd container runtime... Sep 12 17:23:14.595935 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 12 17:23:14.598797 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Sep 12 17:23:14.602941 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 12 17:23:14.607914 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 12 17:23:14.609801 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 12 17:23:14.625048 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 12 17:23:14.629741 jq[1485]: false Sep 12 17:23:14.628850 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 12 17:23:14.631654 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 12 17:23:14.634937 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 12 17:23:14.639299 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 12 17:23:14.641475 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 12 17:23:14.642187 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 12 17:23:14.643955 systemd[1]: Starting update-engine.service - Update Engine... Sep 12 17:23:14.647897 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Sep 12 17:23:14.661120 extend-filesystems[1486]: Found /dev/vda6 Sep 12 17:23:14.662144 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 12 17:23:14.664353 jq[1504]: true Sep 12 17:23:14.663855 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 12 17:23:14.664380 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 12 17:23:14.664754 systemd[1]: motdgen.service: Deactivated successfully. 
Sep 12 17:23:14.664951 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 12 17:23:14.668927 extend-filesystems[1486]: Found /dev/vda9 Sep 12 17:23:14.670141 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 12 17:23:14.670782 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 12 17:23:14.672352 extend-filesystems[1486]: Checking size of /dev/vda9 Sep 12 17:23:14.681238 (ntainerd)[1518]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 12 17:23:14.685535 jq[1517]: true Sep 12 17:23:14.689931 extend-filesystems[1486]: Resized partition /dev/vda9 Sep 12 17:23:14.692894 update_engine[1500]: I20250912 17:23:14.690224 1500 main.cc:92] Flatcar Update Engine starting Sep 12 17:23:14.698696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 12 17:23:14.700898 extend-filesystems[1537]: resize2fs 1.47.2 (1-Jan-2025) Sep 12 17:23:14.707969 tar[1514]: linux-arm64/LICENSE Sep 12 17:23:14.708186 tar[1514]: linux-arm64/helm Sep 12 17:23:14.730750 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 12 17:23:14.731544 bash[1549]: Updated "/home/core/.ssh/authorized_keys" Sep 12 17:23:14.734192 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 12 17:23:14.739210 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 12 17:23:14.748213 dbus-daemon[1477]: [system] SELinux support is enabled Sep 12 17:23:14.750814 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 12 17:23:14.754037 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Sep 12 17:23:14.754077 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 12 17:23:14.755299 update_engine[1500]: I20250912 17:23:14.755245 1500 update_check_scheduler.cc:74] Next update check in 10m14s Sep 12 17:23:14.755750 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 12 17:23:14.755792 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 12 17:23:14.757963 systemd[1]: Started update-engine.service - Update Engine. Sep 12 17:23:14.762023 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 12 17:23:14.791752 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 12 17:23:14.804463 extend-filesystems[1537]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 12 17:23:14.804463 extend-filesystems[1537]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 12 17:23:14.804463 extend-filesystems[1537]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Sep 12 17:23:14.812063 extend-filesystems[1486]: Resized filesystem in /dev/vda9 Sep 12 17:23:14.806362 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 12 17:23:14.806556 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 12 17:23:14.819257 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 12 17:23:14.834556 systemd-logind[1498]: Watching system buttons on /dev/input/event0 (Power Button) Sep 12 17:23:14.837871 systemd-logind[1498]: New seat seat0. Sep 12 17:23:14.838472 systemd[1]: Started systemd-logind.service - User Login Management. 
Sep 12 17:23:14.852111 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 12 17:23:14.875800 containerd[1518]: time="2025-09-12T17:23:14Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 12 17:23:14.877849 containerd[1518]: time="2025-09-12T17:23:14.877814720Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.892806880Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.8µs" Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.892847760Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.892866400Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.893024480Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.893041720Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.893075800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.893129080Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.893141120Z" level=info 
msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.893362840Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.893376080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.893386240Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 12 17:23:14.893736 containerd[1518]: time="2025-09-12T17:23:14.893394040Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 12 17:23:14.894001 containerd[1518]: time="2025-09-12T17:23:14.893468720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 12 17:23:14.894001 containerd[1518]: time="2025-09-12T17:23:14.893643880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:23:14.894001 containerd[1518]: time="2025-09-12T17:23:14.893671000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 12 17:23:14.894001 containerd[1518]: time="2025-09-12T17:23:14.893681440Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 12 17:23:14.894001 containerd[1518]: time="2025-09-12T17:23:14.893716080Z" level=info msg="loading plugin" 
id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 12 17:23:14.894001 containerd[1518]: time="2025-09-12T17:23:14.893962600Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 12 17:23:14.894106 containerd[1518]: time="2025-09-12T17:23:14.894027200Z" level=info msg="metadata content store policy set" policy=shared Sep 12 17:23:14.897806 containerd[1518]: time="2025-09-12T17:23:14.897768680Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 12 17:23:14.897877 containerd[1518]: time="2025-09-12T17:23:14.897830560Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 12 17:23:14.897877 containerd[1518]: time="2025-09-12T17:23:14.897846600Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 12 17:23:14.897877 containerd[1518]: time="2025-09-12T17:23:14.897858280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 12 17:23:14.897877 containerd[1518]: time="2025-09-12T17:23:14.897875520Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 12 17:23:14.897961 containerd[1518]: time="2025-09-12T17:23:14.897887480Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 12 17:23:14.897961 containerd[1518]: time="2025-09-12T17:23:14.897900000Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 12 17:23:14.897961 containerd[1518]: time="2025-09-12T17:23:14.897911480Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 12 17:23:14.897961 containerd[1518]: time="2025-09-12T17:23:14.897921760Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 12 17:23:14.897961 containerd[1518]: time="2025-09-12T17:23:14.897932040Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 12 17:23:14.897961 containerd[1518]: time="2025-09-12T17:23:14.897941920Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 12 17:23:14.897961 containerd[1518]: time="2025-09-12T17:23:14.897954800Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 12 17:23:14.898178 containerd[1518]: time="2025-09-12T17:23:14.898088160Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 12 17:23:14.898178 containerd[1518]: time="2025-09-12T17:23:14.898116080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 12 17:23:14.898178 containerd[1518]: time="2025-09-12T17:23:14.898135640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 12 17:23:14.898178 containerd[1518]: time="2025-09-12T17:23:14.898146520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 12 17:23:14.898178 containerd[1518]: time="2025-09-12T17:23:14.898158520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 12 17:23:14.898270 containerd[1518]: time="2025-09-12T17:23:14.898208640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 12 17:23:14.898270 containerd[1518]: time="2025-09-12T17:23:14.898222640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 12 17:23:14.898270 containerd[1518]: time="2025-09-12T17:23:14.898232520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 12 
17:23:14.898270 containerd[1518]: time="2025-09-12T17:23:14.898244200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 12 17:23:14.898270 containerd[1518]: time="2025-09-12T17:23:14.898254680Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 12 17:23:14.898270 containerd[1518]: time="2025-09-12T17:23:14.898265680Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 12 17:23:14.899238 containerd[1518]: time="2025-09-12T17:23:14.898546080Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 12 17:23:14.899238 containerd[1518]: time="2025-09-12T17:23:14.898578160Z" level=info msg="Start snapshots syncer" Sep 12 17:23:14.899238 containerd[1518]: time="2025-09-12T17:23:14.898602280Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 12 17:23:14.900184 containerd[1518]: time="2025-09-12T17:23:14.900084080Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 12 17:23:14.900278 containerd[1518]: time="2025-09-12T17:23:14.900211600Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 12 17:23:14.900380 containerd[1518]: time="2025-09-12T17:23:14.900358520Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 12 17:23:14.900601 containerd[1518]: time="2025-09-12T17:23:14.900576000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 12 17:23:14.900684 containerd[1518]: time="2025-09-12T17:23:14.900666600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 12 17:23:14.900716 containerd[1518]: time="2025-09-12T17:23:14.900689560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 12 17:23:14.900716 containerd[1518]: time="2025-09-12T17:23:14.900703120Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 12 17:23:14.900763 containerd[1518]: time="2025-09-12T17:23:14.900715760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 12 17:23:14.900797 containerd[1518]: time="2025-09-12T17:23:14.900739680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 12 17:23:14.900816 containerd[1518]: time="2025-09-12T17:23:14.900803200Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 12 17:23:14.900849 containerd[1518]: time="2025-09-12T17:23:14.900839120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 12 17:23:14.900868 containerd[1518]: time="2025-09-12T17:23:14.900855200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 12 17:23:14.900892 containerd[1518]: time="2025-09-12T17:23:14.900868160Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 12 17:23:14.900914 containerd[1518]: time="2025-09-12T17:23:14.900905880Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:23:14.900932 containerd[1518]: time="2025-09-12T17:23:14.900921720Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 12 17:23:14.900950 containerd[1518]: time="2025-09-12T17:23:14.900931120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:23:14.901008 containerd[1518]: time="2025-09-12T17:23:14.900989640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 12 17:23:14.901102 containerd[1518]: time="2025-09-12T17:23:14.901008720Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 12 17:23:14.901102 containerd[1518]: time="2025-09-12T17:23:14.901026040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 12 17:23:14.901102 containerd[1518]: time="2025-09-12T17:23:14.901037160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 12 17:23:14.901162 containerd[1518]: time="2025-09-12T17:23:14.901123920Z" level=info msg="runtime interface created" Sep 12 17:23:14.901162 containerd[1518]: time="2025-09-12T17:23:14.901131160Z" level=info msg="created NRI interface" Sep 12 17:23:14.901195 containerd[1518]: time="2025-09-12T17:23:14.901139560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 12 17:23:14.901212 containerd[1518]: time="2025-09-12T17:23:14.901197920Z" level=info msg="Connect containerd service" Sep 12 17:23:14.901247 containerd[1518]: time="2025-09-12T17:23:14.901229520Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 12 17:23:14.902328 
containerd[1518]: time="2025-09-12T17:23:14.902299320Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:23:14.970836 containerd[1518]: time="2025-09-12T17:23:14.970760120Z" level=info msg="Start subscribing containerd event" Sep 12 17:23:14.971006 containerd[1518]: time="2025-09-12T17:23:14.970974440Z" level=info msg="Start recovering state" Sep 12 17:23:14.971178 containerd[1518]: time="2025-09-12T17:23:14.971144480Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 12 17:23:14.971214 containerd[1518]: time="2025-09-12T17:23:14.971201240Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 12 17:23:14.971276 containerd[1518]: time="2025-09-12T17:23:14.971253160Z" level=info msg="Start event monitor" Sep 12 17:23:14.971324 containerd[1518]: time="2025-09-12T17:23:14.971313200Z" level=info msg="Start cni network conf syncer for default" Sep 12 17:23:14.971441 containerd[1518]: time="2025-09-12T17:23:14.971425680Z" level=info msg="Start streaming server" Sep 12 17:23:14.971492 containerd[1518]: time="2025-09-12T17:23:14.971480440Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 12 17:23:14.971547 containerd[1518]: time="2025-09-12T17:23:14.971536160Z" level=info msg="runtime interface starting up..." Sep 12 17:23:14.971745 containerd[1518]: time="2025-09-12T17:23:14.971578840Z" level=info msg="starting plugins..." Sep 12 17:23:14.971818 containerd[1518]: time="2025-09-12T17:23:14.971804440Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 12 17:23:14.972111 systemd[1]: Started containerd.service - containerd container runtime. 
Sep 12 17:23:14.973760 containerd[1518]: time="2025-09-12T17:23:14.973715960Z" level=info msg="containerd successfully booted in 0.098273s" Sep 12 17:23:15.034641 tar[1514]: linux-arm64/README.md Sep 12 17:23:15.051854 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 12 17:23:15.451577 sshd_keygen[1505]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 12 17:23:15.471285 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 12 17:23:15.474200 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 12 17:23:15.495311 systemd[1]: issuegen.service: Deactivated successfully. Sep 12 17:23:15.495535 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 12 17:23:15.498233 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 12 17:23:15.517677 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 12 17:23:15.521083 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 12 17:23:15.523350 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 12 17:23:15.524781 systemd[1]: Reached target getty.target - Login Prompts. Sep 12 17:23:16.207887 systemd-networkd[1429]: eth0: Gained IPv6LL Sep 12 17:23:16.211815 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 12 17:23:16.214954 systemd[1]: Reached target network-online.target - Network is Online. Sep 12 17:23:16.219569 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 12 17:23:16.236431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:16.240441 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 12 17:23:16.257117 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 12 17:23:16.258030 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. 
Sep 12 17:23:16.259575 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 12 17:23:16.262149 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 12 17:23:16.910580 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:16.914670 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 12 17:23:16.916905 systemd[1]: Startup finished in 2.077s (kernel) + 5.342s (initrd) + 4.008s (userspace) = 11.428s. Sep 12 17:23:16.931219 (kubelet)[1627]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 12 17:23:17.360737 kubelet[1627]: E0912 17:23:17.360625 1627 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 12 17:23:17.363651 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 12 17:23:17.363810 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 12 17:23:17.364867 systemd[1]: kubelet.service: Consumed 761ms CPU time, 259.6M memory peak. Sep 12 17:23:20.641332 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 12 17:23:20.647600 systemd[1]: Started sshd@0-10.0.0.105:22-10.0.0.1:44508.service - OpenSSH per-connection server daemon (10.0.0.1:44508). Sep 12 17:23:20.721615 sshd[1640]: Accepted publickey for core from 10.0.0.1 port 44508 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:23:20.727436 sshd-session[1640]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:23:20.734662 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
Sep 12 17:23:20.737971 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Sep 12 17:23:20.744949 systemd-logind[1498]: New session 1 of user core.
Sep 12 17:23:20.761226 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Sep 12 17:23:20.764354 systemd[1]: Starting user@500.service - User Manager for UID 500...
Sep 12 17:23:20.777996 (systemd)[1645]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Sep 12 17:23:20.780801 systemd-logind[1498]: New session c1 of user core.
Sep 12 17:23:20.918356 systemd[1645]: Queued start job for default target default.target.
Sep 12 17:23:20.935759 systemd[1645]: Created slice app.slice - User Application Slice.
Sep 12 17:23:20.935790 systemd[1645]: Reached target paths.target - Paths.
Sep 12 17:23:20.935826 systemd[1645]: Reached target timers.target - Timers.
Sep 12 17:23:20.936976 systemd[1645]: Starting dbus.socket - D-Bus User Message Bus Socket...
Sep 12 17:23:20.949066 systemd[1645]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Sep 12 17:23:20.949175 systemd[1645]: Reached target sockets.target - Sockets.
Sep 12 17:23:20.949211 systemd[1645]: Reached target basic.target - Basic System.
Sep 12 17:23:20.949242 systemd[1645]: Reached target default.target - Main User Target.
Sep 12 17:23:20.949269 systemd[1645]: Startup finished in 159ms.
Sep 12 17:23:20.949393 systemd[1]: Started user@500.service - User Manager for UID 500.
Sep 12 17:23:20.952329 systemd[1]: Started session-1.scope - Session 1 of User core.
Sep 12 17:23:21.021093 systemd[1]: Started sshd@1-10.0.0.105:22-10.0.0.1:44512.service - OpenSSH per-connection server daemon (10.0.0.1:44512).
Sep 12 17:23:21.126221 sshd[1656]: Accepted publickey for core from 10.0.0.1 port 44512 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:23:21.128782 sshd-session[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:23:21.134108 systemd-logind[1498]: New session 2 of user core.
Sep 12 17:23:21.152988 systemd[1]: Started session-2.scope - Session 2 of User core.
Sep 12 17:23:21.211351 sshd[1659]: Connection closed by 10.0.0.1 port 44512
Sep 12 17:23:21.213036 sshd-session[1656]: pam_unix(sshd:session): session closed for user core
Sep 12 17:23:21.224203 systemd[1]: sshd@1-10.0.0.105:22-10.0.0.1:44512.service: Deactivated successfully.
Sep 12 17:23:21.227244 systemd[1]: session-2.scope: Deactivated successfully.
Sep 12 17:23:21.229658 systemd-logind[1498]: Session 2 logged out. Waiting for processes to exit.
Sep 12 17:23:21.230841 systemd[1]: Started sshd@2-10.0.0.105:22-10.0.0.1:44524.service - OpenSSH per-connection server daemon (10.0.0.1:44524).
Sep 12 17:23:21.232503 systemd-logind[1498]: Removed session 2.
Sep 12 17:23:21.300224 sshd[1665]: Accepted publickey for core from 10.0.0.1 port 44524 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:23:21.302230 sshd-session[1665]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:23:21.311078 systemd-logind[1498]: New session 3 of user core.
Sep 12 17:23:21.321958 systemd[1]: Started session-3.scope - Session 3 of User core.
Sep 12 17:23:21.377131 sshd[1668]: Connection closed by 10.0.0.1 port 44524
Sep 12 17:23:21.377680 sshd-session[1665]: pam_unix(sshd:session): session closed for user core
Sep 12 17:23:21.395885 systemd[1]: sshd@2-10.0.0.105:22-10.0.0.1:44524.service: Deactivated successfully.
Sep 12 17:23:21.397517 systemd[1]: session-3.scope: Deactivated successfully.
Sep 12 17:23:21.400619 systemd-logind[1498]: Session 3 logged out. Waiting for processes to exit.
Sep 12 17:23:21.406746 systemd[1]: Started sshd@3-10.0.0.105:22-10.0.0.1:44532.service - OpenSSH per-connection server daemon (10.0.0.1:44532).
Sep 12 17:23:21.407354 systemd-logind[1498]: Removed session 3.
Sep 12 17:23:21.486394 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 44532 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:23:21.488483 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:23:21.497191 systemd-logind[1498]: New session 4 of user core.
Sep 12 17:23:21.507958 systemd[1]: Started session-4.scope - Session 4 of User core.
Sep 12 17:23:21.568098 sshd[1677]: Connection closed by 10.0.0.1 port 44532
Sep 12 17:23:21.568417 sshd-session[1674]: pam_unix(sshd:session): session closed for user core
Sep 12 17:23:21.576286 systemd[1]: sshd@3-10.0.0.105:22-10.0.0.1:44532.service: Deactivated successfully.
Sep 12 17:23:21.578138 systemd[1]: session-4.scope: Deactivated successfully.
Sep 12 17:23:21.580413 systemd-logind[1498]: Session 4 logged out. Waiting for processes to exit.
Sep 12 17:23:21.584047 systemd[1]: Started sshd@4-10.0.0.105:22-10.0.0.1:44542.service - OpenSSH per-connection server daemon (10.0.0.1:44542).
Sep 12 17:23:21.584803 systemd-logind[1498]: Removed session 4.
Sep 12 17:23:21.640221 sshd[1683]: Accepted publickey for core from 10.0.0.1 port 44542 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:23:21.642392 sshd-session[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:23:21.648337 systemd-logind[1498]: New session 5 of user core.
Sep 12 17:23:21.660086 systemd[1]: Started session-5.scope - Session 5 of User core.
Sep 12 17:23:21.719825 sudo[1687]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Sep 12 17:23:21.720097 sudo[1687]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:23:21.734599 sudo[1687]: pam_unix(sudo:session): session closed for user root
Sep 12 17:23:21.737601 sshd[1686]: Connection closed by 10.0.0.1 port 44542
Sep 12 17:23:21.737910 sshd-session[1683]: pam_unix(sshd:session): session closed for user core
Sep 12 17:23:21.760456 systemd[1]: sshd@4-10.0.0.105:22-10.0.0.1:44542.service: Deactivated successfully.
Sep 12 17:23:21.765234 systemd[1]: session-5.scope: Deactivated successfully.
Sep 12 17:23:21.771862 systemd-logind[1498]: Session 5 logged out. Waiting for processes to exit.
Sep 12 17:23:21.775040 systemd[1]: Started sshd@5-10.0.0.105:22-10.0.0.1:44544.service - OpenSSH per-connection server daemon (10.0.0.1:44544).
Sep 12 17:23:21.776290 systemd-logind[1498]: Removed session 5.
Sep 12 17:23:21.848054 sshd[1693]: Accepted publickey for core from 10.0.0.1 port 44544 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:23:21.849859 sshd-session[1693]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:23:21.856428 systemd-logind[1498]: New session 6 of user core.
Sep 12 17:23:21.868155 systemd[1]: Started session-6.scope - Session 6 of User core.
Sep 12 17:23:21.926512 sudo[1698]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Sep 12 17:23:21.927499 sudo[1698]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:23:22.006016 sudo[1698]: pam_unix(sudo:session): session closed for user root
Sep 12 17:23:22.015307 sudo[1697]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Sep 12 17:23:22.015593 sudo[1697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:23:22.032382 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Sep 12 17:23:22.098693 augenrules[1720]: No rules
Sep 12 17:23:22.100002 systemd[1]: audit-rules.service: Deactivated successfully.
Sep 12 17:23:22.100223 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Sep 12 17:23:22.101610 sudo[1697]: pam_unix(sudo:session): session closed for user root
Sep 12 17:23:22.103535 sshd[1696]: Connection closed by 10.0.0.1 port 44544
Sep 12 17:23:22.104089 sshd-session[1693]: pam_unix(sshd:session): session closed for user core
Sep 12 17:23:22.121689 systemd[1]: sshd@5-10.0.0.105:22-10.0.0.1:44544.service: Deactivated successfully.
Sep 12 17:23:22.123422 systemd[1]: session-6.scope: Deactivated successfully.
Sep 12 17:23:22.124279 systemd-logind[1498]: Session 6 logged out. Waiting for processes to exit.
Sep 12 17:23:22.130620 systemd[1]: Started sshd@6-10.0.0.105:22-10.0.0.1:44548.service - OpenSSH per-connection server daemon (10.0.0.1:44548).
Sep 12 17:23:22.131860 systemd-logind[1498]: Removed session 6.
Sep 12 17:23:22.201806 sshd[1729]: Accepted publickey for core from 10.0.0.1 port 44548 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:23:22.204329 sshd-session[1729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:23:22.208775 systemd-logind[1498]: New session 7 of user core.
Sep 12 17:23:22.222946 systemd[1]: Started session-7.scope - Session 7 of User core.
Sep 12 17:23:22.277475 sudo[1733]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Sep 12 17:23:22.278590 sudo[1733]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Sep 12 17:23:22.601804 systemd[1]: Starting docker.service - Docker Application Container Engine...
Sep 12 17:23:22.620117 (dockerd)[1754]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Sep 12 17:23:22.828962 dockerd[1754]: time="2025-09-12T17:23:22.828879098Z" level=info msg="Starting up"
Sep 12 17:23:22.830151 dockerd[1754]: time="2025-09-12T17:23:22.830111369Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Sep 12 17:23:22.849799 dockerd[1754]: time="2025-09-12T17:23:22.849752264Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s
Sep 12 17:23:22.943548 systemd[1]: var-lib-docker-metacopy\x2dcheck2126079682-merged.mount: Deactivated successfully.
Sep 12 17:23:22.971169 dockerd[1754]: time="2025-09-12T17:23:22.971110297Z" level=info msg="Loading containers: start."
Sep 12 17:23:22.981451 kernel: Initializing XFRM netlink socket
Sep 12 17:23:23.268395 systemd-networkd[1429]: docker0: Link UP
Sep 12 17:23:23.275284 dockerd[1754]: time="2025-09-12T17:23:23.275231187Z" level=info msg="Loading containers: done."
Sep 12 17:23:23.287485 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2233475538-merged.mount: Deactivated successfully.
Sep 12 17:23:23.290928 dockerd[1754]: time="2025-09-12T17:23:23.290864536Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Sep 12 17:23:23.291038 dockerd[1754]: time="2025-09-12T17:23:23.290956144Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4
Sep 12 17:23:23.291061 dockerd[1754]: time="2025-09-12T17:23:23.291042746Z" level=info msg="Initializing buildkit"
Sep 12 17:23:23.323633 dockerd[1754]: time="2025-09-12T17:23:23.323580114Z" level=info msg="Completed buildkit initialization"
Sep 12 17:23:23.330771 dockerd[1754]: time="2025-09-12T17:23:23.330712332Z" level=info msg="Daemon has completed initialization"
Sep 12 17:23:23.331021 dockerd[1754]: time="2025-09-12T17:23:23.330917834Z" level=info msg="API listen on /run/docker.sock"
Sep 12 17:23:23.331345 systemd[1]: Started docker.service - Docker Application Container Engine.
Sep 12 17:23:23.943975 containerd[1518]: time="2025-09-12T17:23:23.943678758Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\""
Sep 12 17:23:24.755870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3105974883.mount: Deactivated successfully.
Sep 12 17:23:25.894282 containerd[1518]: time="2025-09-12T17:23:25.894216835Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.5: active requests=0, bytes read=27390230"
Sep 12 17:23:25.894626 containerd[1518]: time="2025-09-12T17:23:25.894332306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:25.896652 containerd[1518]: time="2025-09-12T17:23:25.896610686Z" level=info msg="ImageCreate event name:\"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:25.897417 containerd[1518]: time="2025-09-12T17:23:25.897396620Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:25.899115 containerd[1518]: time="2025-09-12T17:23:25.899089092Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.5\" with image id \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.5\", repo digest \"registry.k8s.io/kube-apiserver@sha256:1b9c6c00bc1fe86860e72efb8e4148f9e436a132eba4ca636ca4f48d61d6dfb4\", size \"27386827\" in 1.955368925s"
Sep 12 17:23:25.899166 containerd[1518]: time="2025-09-12T17:23:25.899125182Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.5\" returns image reference \"sha256:6a7fd297b49102b08dc3d8d4fd7f1538bcf21d3131eae8bf62ba26ce3283237f\""
Sep 12 17:23:25.900446 containerd[1518]: time="2025-09-12T17:23:25.900385692Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\""
Sep 12 17:23:27.086512 containerd[1518]: time="2025-09-12T17:23:27.086001182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:27.087001 containerd[1518]: time="2025-09-12T17:23:27.086966688Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.5: active requests=0, bytes read=23547919"
Sep 12 17:23:27.088746 containerd[1518]: time="2025-09-12T17:23:27.088683442Z" level=info msg="ImageCreate event name:\"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:27.091668 containerd[1518]: time="2025-09-12T17:23:27.091627367Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:27.092967 containerd[1518]: time="2025-09-12T17:23:27.092914255Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.5\" with image id \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.5\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:1082a6ab67fb46397314dd36b36cb197ba4a4c5365033e9ad22bc7edaaaabd5c\", size \"25135832\" in 1.192492582s"
Sep 12 17:23:27.092967 containerd[1518]: time="2025-09-12T17:23:27.092945330Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.5\" returns image reference \"sha256:2dd4c25a937008b7b8a6cdca70d816403b5078b51550926721b7a7762139cd23\""
Sep 12 17:23:27.093568 containerd[1518]: time="2025-09-12T17:23:27.093541127Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\""
Sep 12 17:23:27.614172 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Sep 12 17:23:27.616891 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:23:27.795277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:23:27.810076 (kubelet)[2043]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:23:27.893419 kubelet[2043]: E0912 17:23:27.893133 2043 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:23:27.896602 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:23:27.896749 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:23:27.897044 systemd[1]: kubelet.service: Consumed 144ms CPU time, 109M memory peak.
Sep 12 17:23:28.330858 containerd[1518]: time="2025-09-12T17:23:28.329949106Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:28.335074 containerd[1518]: time="2025-09-12T17:23:28.335031377Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.5: active requests=0, bytes read=18295979"
Sep 12 17:23:28.337434 containerd[1518]: time="2025-09-12T17:23:28.336497803Z" level=info msg="ImageCreate event name:\"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:28.341196 containerd[1518]: time="2025-09-12T17:23:28.340286796Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:28.341403 containerd[1518]: time="2025-09-12T17:23:28.341358415Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.5\" with image id \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.5\", repo digest \"registry.k8s.io/kube-scheduler@sha256:3e7b57c9d9f06b77f0064e5be7f3df61e0151101160acd5fdecce911df28a189\", size \"19883910\" in 1.247779317s"
Sep 12 17:23:28.341403 containerd[1518]: time="2025-09-12T17:23:28.341396481Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.5\" returns image reference \"sha256:5e600beaed8620718e0650dd2721266869ce1d737488c004a869333273e6ec15\""
Sep 12 17:23:28.341871 containerd[1518]: time="2025-09-12T17:23:28.341821542Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\""
Sep 12 17:23:29.367571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1866719457.mount: Deactivated successfully.
Sep 12 17:23:29.657236 containerd[1518]: time="2025-09-12T17:23:29.657127682Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:29.659540 containerd[1518]: time="2025-09-12T17:23:29.659510490Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.5: active requests=0, bytes read=28240108"
Sep 12 17:23:29.660529 containerd[1518]: time="2025-09-12T17:23:29.660471844Z" level=info msg="ImageCreate event name:\"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:29.662535 containerd[1518]: time="2025-09-12T17:23:29.662338644Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:29.663400 containerd[1518]: time="2025-09-12T17:23:29.663360132Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.5\" with image id \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\", repo tag \"registry.k8s.io/kube-proxy:v1.33.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:71445ec84ad98bd52a7784865a9d31b1b50b56092d3f7699edc39eefd71befe1\", size \"28239125\" in 1.321495618s"
Sep 12 17:23:29.663400 containerd[1518]: time="2025-09-12T17:23:29.663396300Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.5\" returns image reference \"sha256:021a8d45ab0c346664e47d95595ff5180ce90a22a681ea27904c65ae90788e70\""
Sep 12 17:23:29.663975 containerd[1518]: time="2025-09-12T17:23:29.663975274Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\""
Sep 12 17:23:30.251437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1607991465.mount: Deactivated successfully.
Sep 12 17:23:31.439712 containerd[1518]: time="2025-09-12T17:23:31.438710346Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:31.439712 containerd[1518]: time="2025-09-12T17:23:31.439613134Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119"
Sep 12 17:23:31.447627 containerd[1518]: time="2025-09-12T17:23:31.447522543Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:31.463348 containerd[1518]: time="2025-09-12T17:23:31.462833519Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:31.464642 containerd[1518]: time="2025-09-12T17:23:31.464503776Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.80049472s"
Sep 12 17:23:31.464642 containerd[1518]: time="2025-09-12T17:23:31.464539851Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\""
Sep 12 17:23:31.464953 containerd[1518]: time="2025-09-12T17:23:31.464953833Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Sep 12 17:23:32.004277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount384003547.mount: Deactivated successfully.
Sep 12 17:23:32.017622 containerd[1518]: time="2025-09-12T17:23:32.017556681Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:23:32.019454 containerd[1518]: time="2025-09-12T17:23:32.019406222Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705"
Sep 12 17:23:32.020648 containerd[1518]: time="2025-09-12T17:23:32.020598440Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:23:32.023750 containerd[1518]: time="2025-09-12T17:23:32.023125582Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Sep 12 17:23:32.023874 containerd[1518]: time="2025-09-12T17:23:32.023853758Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 558.8668ms"
Sep 12 17:23:32.023937 containerd[1518]: time="2025-09-12T17:23:32.023925729Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\""
Sep 12 17:23:32.025181 containerd[1518]: time="2025-09-12T17:23:32.025160060Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\""
Sep 12 17:23:32.511169 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3200274916.mount: Deactivated successfully.
Sep 12 17:23:34.355023 containerd[1518]: time="2025-09-12T17:23:34.354975563Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:34.356895 containerd[1518]: time="2025-09-12T17:23:34.356858228Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465859"
Sep 12 17:23:34.357638 containerd[1518]: time="2025-09-12T17:23:34.357588194Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:34.361065 containerd[1518]: time="2025-09-12T17:23:34.361027759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:23:34.362061 containerd[1518]: time="2025-09-12T17:23:34.362033170Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.336844206s"
Sep 12 17:23:34.362112 containerd[1518]: time="2025-09-12T17:23:34.362066278Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\""
Sep 12 17:23:38.147212 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Sep 12 17:23:38.148659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:23:38.312598 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:23:38.316411 (kubelet)[2206]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Sep 12 17:23:38.348863 kubelet[2206]: E0912 17:23:38.348803 2206 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Sep 12 17:23:38.351263 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Sep 12 17:23:38.351416 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Sep 12 17:23:38.353003 systemd[1]: kubelet.service: Consumed 132ms CPU time, 106.7M memory peak.
Sep 12 17:23:39.417431 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:23:39.417573 systemd[1]: kubelet.service: Consumed 132ms CPU time, 106.7M memory peak.
Sep 12 17:23:39.419641 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:23:39.443216 systemd[1]: Reload requested from client PID 2222 ('systemctl') (unit session-7.scope)...
Sep 12 17:23:39.443230 systemd[1]: Reloading...
Sep 12 17:23:39.514859 zram_generator::config[2265]: No configuration found.
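Each of containerd's "Pulled image" entries above pairs a size in bytes with a wall-clock pull duration, so a rough transfer rate is easy to derive. A small sketch using the etcd pull's figures from the log (the regex and the MiB/s framing are mine; containerd itself reports only size and duration, and the size is the compressed on-registry figure):

```python
import re

# Abbreviated copy of the "Pulled image registry.k8s.io/etcd:3.5.21-0" entry above.
msg = 'Pulled image "registry.k8s.io/etcd:3.5.21-0" ... size "70026017" in 2.336844206s'

m = re.search(r'size "(\d+)" in ([\d.]+)s', msg)
size_bytes, seconds = int(m.group(1)), float(m.group(2))
print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")  # rough pull rate, ~28.6 MiB/s here
```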
Sep 12 17:23:39.676227 systemd[1]: Reloading finished in 232 ms.
Sep 12 17:23:39.743355 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Sep 12 17:23:39.743613 systemd[1]: kubelet.service: Failed with result 'signal'.
Sep 12 17:23:39.743969 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:23:39.744117 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95M memory peak.
Sep 12 17:23:39.745778 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Sep 12 17:23:39.895605 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Sep 12 17:23:39.913061 (kubelet)[2310]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Sep 12 17:23:39.944952 kubelet[2310]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:23:39.944952 kubelet[2310]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Sep 12 17:23:39.944952 kubelet[2310]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Sep 12 17:23:39.944952 kubelet[2310]: I0912 17:23:39.944891 2310 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Sep 12 17:23:41.046409 kubelet[2310]: I0912 17:23:41.046354 2310 server.go:530] "Kubelet version" kubeletVersion="v1.33.0"
Sep 12 17:23:41.046409 kubelet[2310]: I0912 17:23:41.046389 2310 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Sep 12 17:23:41.046762 kubelet[2310]: I0912 17:23:41.046617 2310 server.go:956] "Client rotation is on, will bootstrap in background"
Sep 12 17:23:41.072642 kubelet[2310]: E0912 17:23:41.072595 2310 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.105:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Sep 12 17:23:41.073222 kubelet[2310]: I0912 17:23:41.073194 2310 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Sep 12 17:23:41.082301 kubelet[2310]: I0912 17:23:41.082272 2310 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Sep 12 17:23:41.085034 kubelet[2310]: I0912 17:23:41.085009 2310 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Sep 12 17:23:41.086782 kubelet[2310]: I0912 17:23:41.086146 2310 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Sep 12 17:23:41.086782 kubelet[2310]: I0912 17:23:41.086187 2310 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Sep 12 17:23:41.086782 kubelet[2310]: I0912 17:23:41.086390 2310 topology_manager.go:138] "Creating topology manager with none policy"
Sep 12 17:23:41.086782 kubelet[2310]: I0912 17:23:41.086398 2310 container_manager_linux.go:303] "Creating device plugin manager"
Sep 12 17:23:41.086782 kubelet[2310]: I0912 17:23:41.086590 2310 state_mem.go:36] "Initialized new in-memory state store"
Sep 12 17:23:41.089176 kubelet[2310]: I0912 17:23:41.089138 2310 kubelet.go:480] "Attempting to sync node with API server"
Sep 12 17:23:41.089176 kubelet[2310]: I0912 17:23:41.089174 2310 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests"
Sep 12 17:23:41.089244 kubelet[2310]: I0912 17:23:41.089201 2310 kubelet.go:386] "Adding apiserver pod source"
Sep 12 17:23:41.089244 kubelet[2310]: I0912 17:23:41.089214 2310 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Sep 12 17:23:41.090271 kubelet[2310]: I0912 17:23:41.090253 2310 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1"
Sep 12 17:23:41.092437 kubelet[2310]: I0912 17:23:41.090921 2310 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Sep 12 17:23:41.092437 kubelet[2310]: W0912 17:23:41.091086 2310 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Sep 12 17:23:41.092437 kubelet[2310]: E0912 17:23:41.091991 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.105:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 12 17:23:41.092437 kubelet[2310]: E0912 17:23:41.092337 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:23:41.095687 kubelet[2310]: I0912 17:23:41.095665 2310 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:23:41.095767 kubelet[2310]: I0912 17:23:41.095750 2310 server.go:1289] "Started kubelet" Sep 12 17:23:41.096395 kubelet[2310]: I0912 17:23:41.096342 2310 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:23:41.096696 kubelet[2310]: I0912 17:23:41.096660 2310 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:23:41.098875 kubelet[2310]: I0912 17:23:41.098833 2310 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:23:41.099360 kubelet[2310]: I0912 17:23:41.099328 2310 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:23:41.100068 kubelet[2310]: I0912 17:23:41.100035 2310 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:23:41.100698 kubelet[2310]: I0912 17:23:41.100656 2310 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:23:41.101180 kubelet[2310]: 
I0912 17:23:41.101163 2310 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:23:41.101798 kubelet[2310]: E0912 17:23:41.101321 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:41.101911 kubelet[2310]: I0912 17:23:41.101887 2310 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:23:41.101964 kubelet[2310]: E0912 17:23:41.101932 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="200ms" Sep 12 17:23:41.101993 kubelet[2310]: I0912 17:23:41.101951 2310 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:23:41.102265 kubelet[2310]: E0912 17:23:41.102237 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.105:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 12 17:23:41.102265 kubelet[2310]: I0912 17:23:41.102256 2310 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:23:41.102330 kubelet[2310]: I0912 17:23:41.102315 2310 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:23:41.103359 kubelet[2310]: E0912 17:23:41.103337 2310 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:23:41.103452 kubelet[2310]: I0912 17:23:41.103438 2310 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:23:41.105422 kubelet[2310]: E0912 17:23:41.104029 2310 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.105:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.105:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.186498db08e3182a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-12 17:23:41.095696426 +0000 UTC m=+1.179468506,LastTimestamp:2025-09-12 17:23:41.095696426 +0000 UTC m=+1.179468506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 12 17:23:41.114462 kubelet[2310]: I0912 17:23:41.114440 2310 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:23:41.114462 kubelet[2310]: I0912 17:23:41.114455 2310 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:23:41.114557 kubelet[2310]: I0912 17:23:41.114470 2310 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:23:41.116364 kubelet[2310]: I0912 17:23:41.116322 2310 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:23:41.117394 kubelet[2310]: I0912 17:23:41.117370 2310 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:23:41.117394 kubelet[2310]: I0912 17:23:41.117390 2310 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:23:41.117475 kubelet[2310]: I0912 17:23:41.117413 2310 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:23:41.117475 kubelet[2310]: I0912 17:23:41.117421 2310 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:23:41.117475 kubelet[2310]: E0912 17:23:41.117458 2310 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:23:41.202271 kubelet[2310]: E0912 17:23:41.202229 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:41.218411 kubelet[2310]: E0912 17:23:41.218379 2310 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Sep 12 17:23:41.302693 kubelet[2310]: E0912 17:23:41.302598 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:41.303087 kubelet[2310]: E0912 17:23:41.303046 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="400ms" Sep 12 17:23:41.328491 kubelet[2310]: I0912 17:23:41.328451 2310 policy_none.go:49] "None policy: Start" Sep 12 17:23:41.328491 kubelet[2310]: I0912 17:23:41.328483 2310 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:23:41.328596 kubelet[2310]: I0912 17:23:41.328505 2310 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:23:41.328916 kubelet[2310]: E0912 17:23:41.328887 2310 reflector.go:200] "Failed to watch" err="failed 
to list *v1.RuntimeClass: Get \"https://10.0.0.105:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 12 17:23:41.334180 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 12 17:23:41.345719 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 12 17:23:41.348353 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 12 17:23:41.368084 kubelet[2310]: E0912 17:23:41.367557 2310 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:23:41.368153 kubelet[2310]: I0912 17:23:41.368120 2310 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:23:41.368153 kubelet[2310]: I0912 17:23:41.368132 2310 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:23:41.368512 kubelet[2310]: I0912 17:23:41.368479 2310 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:23:41.369685 kubelet[2310]: E0912 17:23:41.369643 2310 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:23:41.369775 kubelet[2310]: E0912 17:23:41.369691 2310 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 12 17:23:41.427753 systemd[1]: Created slice kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice - libcontainer container kubepods-burstable-podb678d5c6713e936e66aa5bb73166297e.slice. 
Sep 12 17:23:41.456178 kubelet[2310]: E0912 17:23:41.456112 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:23:41.458908 systemd[1]: Created slice kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice - libcontainer container kubepods-burstable-pod7b968cf906b2d9d713a362c43868bef2.slice. Sep 12 17:23:41.460930 kubelet[2310]: E0912 17:23:41.460827 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:23:41.469810 kubelet[2310]: I0912 17:23:41.469761 2310 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:23:41.470271 kubelet[2310]: E0912 17:23:41.470218 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Sep 12 17:23:41.478335 systemd[1]: Created slice kubepods-burstable-pod61ce0d458e7633c7bcf73838cc00c318.slice - libcontainer container kubepods-burstable-pod61ce0d458e7633c7bcf73838cc00c318.slice. 
Sep 12 17:23:41.480737 kubelet[2310]: E0912 17:23:41.480242 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:23:41.503585 kubelet[2310]: I0912 17:23:41.503554 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:41.503682 kubelet[2310]: I0912 17:23:41.503593 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:41.503682 kubelet[2310]: I0912 17:23:41.503616 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:41.503682 kubelet[2310]: I0912 17:23:41.503630 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:41.503682 kubelet[2310]: I0912 17:23:41.503645 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:23:41.503682 kubelet[2310]: I0912 17:23:41.503658 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ce0d458e7633c7bcf73838cc00c318-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61ce0d458e7633c7bcf73838cc00c318\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:41.503815 kubelet[2310]: I0912 17:23:41.503672 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ce0d458e7633c7bcf73838cc00c318-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61ce0d458e7633c7bcf73838cc00c318\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:41.503815 kubelet[2310]: I0912 17:23:41.503685 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ce0d458e7633c7bcf73838cc00c318-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61ce0d458e7633c7bcf73838cc00c318\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:41.503815 kubelet[2310]: I0912 17:23:41.503702 2310 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:41.671935 kubelet[2310]: I0912 17:23:41.671834 2310 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:23:41.672172 kubelet[2310]: E0912 
17:23:41.672148 2310 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.105:6443/api/v1/nodes\": dial tcp 10.0.0.105:6443: connect: connection refused" node="localhost" Sep 12 17:23:41.703781 kubelet[2310]: E0912 17:23:41.703713 2310 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.105:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.105:6443: connect: connection refused" interval="800ms" Sep 12 17:23:41.757105 kubelet[2310]: E0912 17:23:41.757067 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:41.757771 containerd[1518]: time="2025-09-12T17:23:41.757676827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:41.761964 kubelet[2310]: E0912 17:23:41.761931 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:41.762390 containerd[1518]: time="2025-09-12T17:23:41.762360323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:41.782225 kubelet[2310]: E0912 17:23:41.782156 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:41.782712 containerd[1518]: time="2025-09-12T17:23:41.782661497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61ce0d458e7633c7bcf73838cc00c318,Namespace:kube-system,Attempt:0,}" Sep 12 17:23:41.783443 
containerd[1518]: time="2025-09-12T17:23:41.783397641Z" level=info msg="connecting to shim e1db387e3c6e233f547c182d909e4ba4831c978c64fcbe030bd5940a72bf9776" address="unix:///run/containerd/s/e297b33791a24367b501fb505597e0a02e99055b0e7e5d0a03beb8396e290a5e" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:41.787280 containerd[1518]: time="2025-09-12T17:23:41.787229305Z" level=info msg="connecting to shim ab6de8613c119a94495227e2af62cd9365f4ea24e9d2eb3e5e518d3fb880cf57" address="unix:///run/containerd/s/ef232b81219f58f1546c3af7534adc5866982d14d34141319ff03bbe85d118f3" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:41.810331 containerd[1518]: time="2025-09-12T17:23:41.810013060Z" level=info msg="connecting to shim dc2a9c18bd452ef4db9ed628dfa9087149c82d2572c3d9be46a00cd6833e265c" address="unix:///run/containerd/s/38b40b80405e0295935dbaaac94e018817ce22522cc5ee3284294db35a5f8835" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:23:41.817912 systemd[1]: Started cri-containerd-ab6de8613c119a94495227e2af62cd9365f4ea24e9d2eb3e5e518d3fb880cf57.scope - libcontainer container ab6de8613c119a94495227e2af62cd9365f4ea24e9d2eb3e5e518d3fb880cf57. Sep 12 17:23:41.819769 systemd[1]: Started cri-containerd-e1db387e3c6e233f547c182d909e4ba4831c978c64fcbe030bd5940a72bf9776.scope - libcontainer container e1db387e3c6e233f547c182d909e4ba4831c978c64fcbe030bd5940a72bf9776. Sep 12 17:23:41.835914 systemd[1]: Started cri-containerd-dc2a9c18bd452ef4db9ed628dfa9087149c82d2572c3d9be46a00cd6833e265c.scope - libcontainer container dc2a9c18bd452ef4db9ed628dfa9087149c82d2572c3d9be46a00cd6833e265c. 
Sep 12 17:23:41.870169 containerd[1518]: time="2025-09-12T17:23:41.869744789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:7b968cf906b2d9d713a362c43868bef2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ab6de8613c119a94495227e2af62cd9365f4ea24e9d2eb3e5e518d3fb880cf57\"" Sep 12 17:23:41.870953 kubelet[2310]: E0912 17:23:41.870925 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:41.875152 containerd[1518]: time="2025-09-12T17:23:41.875112781Z" level=info msg="CreateContainer within sandbox \"ab6de8613c119a94495227e2af62cd9365f4ea24e9d2eb3e5e518d3fb880cf57\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 12 17:23:41.876315 containerd[1518]: time="2025-09-12T17:23:41.876270704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:b678d5c6713e936e66aa5bb73166297e,Namespace:kube-system,Attempt:0,} returns sandbox id \"e1db387e3c6e233f547c182d909e4ba4831c978c64fcbe030bd5940a72bf9776\"" Sep 12 17:23:41.877056 kubelet[2310]: E0912 17:23:41.877033 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:41.879929 containerd[1518]: time="2025-09-12T17:23:41.879884823Z" level=info msg="CreateContainer within sandbox \"e1db387e3c6e233f547c182d909e4ba4831c978c64fcbe030bd5940a72bf9776\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 12 17:23:41.883997 containerd[1518]: time="2025-09-12T17:23:41.883970490Z" level=info msg="Container da1e2931a4ec103891d38dfb8bf9612e70680106f1de95e5a2ce215e93cf7cbc: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:41.886062 containerd[1518]: time="2025-09-12T17:23:41.886014503Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:61ce0d458e7633c7bcf73838cc00c318,Namespace:kube-system,Attempt:0,} returns sandbox id \"dc2a9c18bd452ef4db9ed628dfa9087149c82d2572c3d9be46a00cd6833e265c\"" Sep 12 17:23:41.886623 kubelet[2310]: E0912 17:23:41.886603 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:41.889133 containerd[1518]: time="2025-09-12T17:23:41.889099590Z" level=info msg="Container c9dd25fdf6e33279b0fd9522d5e9c4073f98dd19ec6b8f5eda091d5f951138bd: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:41.890494 containerd[1518]: time="2025-09-12T17:23:41.890463545Z" level=info msg="CreateContainer within sandbox \"dc2a9c18bd452ef4db9ed628dfa9087149c82d2572c3d9be46a00cd6833e265c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 12 17:23:41.893417 containerd[1518]: time="2025-09-12T17:23:41.893215279Z" level=info msg="CreateContainer within sandbox \"ab6de8613c119a94495227e2af62cd9365f4ea24e9d2eb3e5e518d3fb880cf57\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"da1e2931a4ec103891d38dfb8bf9612e70680106f1de95e5a2ce215e93cf7cbc\"" Sep 12 17:23:41.894450 containerd[1518]: time="2025-09-12T17:23:41.894423570Z" level=info msg="StartContainer for \"da1e2931a4ec103891d38dfb8bf9612e70680106f1de95e5a2ce215e93cf7cbc\"" Sep 12 17:23:41.896086 containerd[1518]: time="2025-09-12T17:23:41.896058516Z" level=info msg="connecting to shim da1e2931a4ec103891d38dfb8bf9612e70680106f1de95e5a2ce215e93cf7cbc" address="unix:///run/containerd/s/ef232b81219f58f1546c3af7534adc5866982d14d34141319ff03bbe85d118f3" protocol=ttrpc version=3 Sep 12 17:23:41.896302 containerd[1518]: time="2025-09-12T17:23:41.896272543Z" level=info msg="CreateContainer within sandbox \"e1db387e3c6e233f547c182d909e4ba4831c978c64fcbe030bd5940a72bf9776\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c9dd25fdf6e33279b0fd9522d5e9c4073f98dd19ec6b8f5eda091d5f951138bd\"" Sep 12 17:23:41.896752 containerd[1518]: time="2025-09-12T17:23:41.896713470Z" level=info msg="StartContainer for \"c9dd25fdf6e33279b0fd9522d5e9c4073f98dd19ec6b8f5eda091d5f951138bd\"" Sep 12 17:23:41.897822 containerd[1518]: time="2025-09-12T17:23:41.897794999Z" level=info msg="connecting to shim c9dd25fdf6e33279b0fd9522d5e9c4073f98dd19ec6b8f5eda091d5f951138bd" address="unix:///run/containerd/s/e297b33791a24367b501fb505597e0a02e99055b0e7e5d0a03beb8396e290a5e" protocol=ttrpc version=3 Sep 12 17:23:41.899813 containerd[1518]: time="2025-09-12T17:23:41.899774013Z" level=info msg="Container 931c22ad792da7d10313ecd79e5575eb5a9f217cf32be4be2a4d99ce203b767a: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:23:41.906852 containerd[1518]: time="2025-09-12T17:23:41.906808092Z" level=info msg="CreateContainer within sandbox \"dc2a9c18bd452ef4db9ed628dfa9087149c82d2572c3d9be46a00cd6833e265c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"931c22ad792da7d10313ecd79e5575eb5a9f217cf32be4be2a4d99ce203b767a\"" Sep 12 17:23:41.907331 containerd[1518]: time="2025-09-12T17:23:41.907307822Z" level=info msg="StartContainer for \"931c22ad792da7d10313ecd79e5575eb5a9f217cf32be4be2a4d99ce203b767a\"" Sep 12 17:23:41.908960 containerd[1518]: time="2025-09-12T17:23:41.908925419Z" level=info msg="connecting to shim 931c22ad792da7d10313ecd79e5575eb5a9f217cf32be4be2a4d99ce203b767a" address="unix:///run/containerd/s/38b40b80405e0295935dbaaac94e018817ce22522cc5ee3284294db35a5f8835" protocol=ttrpc version=3 Sep 12 17:23:41.918044 systemd[1]: Started cri-containerd-c9dd25fdf6e33279b0fd9522d5e9c4073f98dd19ec6b8f5eda091d5f951138bd.scope - libcontainer container c9dd25fdf6e33279b0fd9522d5e9c4073f98dd19ec6b8f5eda091d5f951138bd. 
Sep 12 17:23:41.921302 systemd[1]: Started cri-containerd-da1e2931a4ec103891d38dfb8bf9612e70680106f1de95e5a2ce215e93cf7cbc.scope - libcontainer container da1e2931a4ec103891d38dfb8bf9612e70680106f1de95e5a2ce215e93cf7cbc. Sep 12 17:23:41.927239 systemd[1]: Started cri-containerd-931c22ad792da7d10313ecd79e5575eb5a9f217cf32be4be2a4d99ce203b767a.scope - libcontainer container 931c22ad792da7d10313ecd79e5575eb5a9f217cf32be4be2a4d99ce203b767a. Sep 12 17:23:41.928859 kubelet[2310]: E0912 17:23:41.928681 2310 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.105:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.105:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 12 17:23:41.973485 containerd[1518]: time="2025-09-12T17:23:41.973445419Z" level=info msg="StartContainer for \"da1e2931a4ec103891d38dfb8bf9612e70680106f1de95e5a2ce215e93cf7cbc\" returns successfully" Sep 12 17:23:41.975075 containerd[1518]: time="2025-09-12T17:23:41.975045308Z" level=info msg="StartContainer for \"c9dd25fdf6e33279b0fd9522d5e9c4073f98dd19ec6b8f5eda091d5f951138bd\" returns successfully" Sep 12 17:23:41.977490 containerd[1518]: time="2025-09-12T17:23:41.977365749Z" level=info msg="StartContainer for \"931c22ad792da7d10313ecd79e5575eb5a9f217cf32be4be2a4d99ce203b767a\" returns successfully" Sep 12 17:23:42.074996 kubelet[2310]: I0912 17:23:42.074938 2310 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:23:42.127696 kubelet[2310]: E0912 17:23:42.127063 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:23:42.127696 kubelet[2310]: E0912 17:23:42.127205 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied 
nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:42.128068 kubelet[2310]: E0912 17:23:42.128046 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:23:42.131370 kubelet[2310]: E0912 17:23:42.131342 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:42.135283 kubelet[2310]: E0912 17:23:42.135114 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:23:42.135283 kubelet[2310]: E0912 17:23:42.135230 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:43.137105 kubelet[2310]: E0912 17:23:43.137045 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:23:43.137768 kubelet[2310]: E0912 17:23:43.137531 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:43.138164 kubelet[2310]: E0912 17:23:43.138012 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:23:43.138164 kubelet[2310]: E0912 17:23:43.138118 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:43.208626 kubelet[2310]: E0912 17:23:43.208597 2310 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info 
from the cluster" err="node \"localhost\" not found" node="localhost" Sep 12 17:23:43.208803 kubelet[2310]: E0912 17:23:43.208758 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:44.075750 kubelet[2310]: E0912 17:23:44.073950 2310 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 12 17:23:44.165665 kubelet[2310]: I0912 17:23:44.165603 2310 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:23:44.165665 kubelet[2310]: E0912 17:23:44.165673 2310 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 12 17:23:44.179437 kubelet[2310]: E0912 17:23:44.179235 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:44.279390 kubelet[2310]: E0912 17:23:44.279353 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:44.380170 kubelet[2310]: E0912 17:23:44.380093 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:44.480964 kubelet[2310]: E0912 17:23:44.480897 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:44.581083 kubelet[2310]: E0912 17:23:44.581025 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:44.682307 kubelet[2310]: E0912 17:23:44.682151 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:44.782927 kubelet[2310]: E0912 17:23:44.782850 2310 kubelet_node_status.go:466] "Error getting 
the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:44.883488 kubelet[2310]: E0912 17:23:44.883417 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:44.983964 kubelet[2310]: E0912 17:23:44.983834 2310 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:45.091803 kubelet[2310]: I0912 17:23:45.091752 2310 apiserver.go:52] "Watching apiserver" Sep 12 17:23:45.102411 kubelet[2310]: I0912 17:23:45.102370 2310 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:45.102411 kubelet[2310]: I0912 17:23:45.102410 2310 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:23:45.115569 kubelet[2310]: I0912 17:23:45.115504 2310 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:23:45.116612 kubelet[2310]: E0912 17:23:45.116561 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:45.120770 kubelet[2310]: I0912 17:23:45.120716 2310 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:45.121453 kubelet[2310]: E0912 17:23:45.121431 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:45.130105 kubelet[2310]: E0912 17:23:45.129988 2310 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:46.084401 kubelet[2310]: E0912 17:23:46.084353 2310 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:46.343953 systemd[1]: Reload requested from client PID 2595 ('systemctl') (unit session-7.scope)... Sep 12 17:23:46.343970 systemd[1]: Reloading... Sep 12 17:23:46.412759 zram_generator::config[2638]: No configuration found. Sep 12 17:23:46.586765 systemd[1]: Reloading finished in 242 ms. Sep 12 17:23:46.614761 kubelet[2310]: I0912 17:23:46.614637 2310 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:23:46.614869 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:46.633613 systemd[1]: kubelet.service: Deactivated successfully. Sep 12 17:23:46.634830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:46.634895 systemd[1]: kubelet.service: Consumed 1.538s CPU time, 129.4M memory peak. Sep 12 17:23:46.638340 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 12 17:23:46.797472 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 12 17:23:46.813140 (kubelet)[2680]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 12 17:23:46.861043 kubelet[2680]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:23:46.861043 kubelet[2680]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 12 17:23:46.861043 kubelet[2680]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 12 17:23:46.861361 kubelet[2680]: I0912 17:23:46.861072 2680 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 12 17:23:46.867788 kubelet[2680]: I0912 17:23:46.867688 2680 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 12 17:23:46.868754 kubelet[2680]: I0912 17:23:46.867895 2680 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 12 17:23:46.868754 kubelet[2680]: I0912 17:23:46.868096 2680 server.go:956] "Client rotation is on, will bootstrap in background" Sep 12 17:23:46.869505 kubelet[2680]: I0912 17:23:46.869477 2680 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 12 17:23:46.872043 kubelet[2680]: I0912 17:23:46.872007 2680 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 12 17:23:46.875905 kubelet[2680]: I0912 17:23:46.875871 2680 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 12 17:23:46.878397 kubelet[2680]: I0912 17:23:46.878382 2680 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 12 17:23:46.878595 kubelet[2680]: I0912 17:23:46.878574 2680 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 12 17:23:46.878739 kubelet[2680]: I0912 17:23:46.878596 2680 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 12 17:23:46.878808 kubelet[2680]: I0912 17:23:46.878751 2680 topology_manager.go:138] "Creating topology manager with none policy" Sep 12 17:23:46.878808 
kubelet[2680]: I0912 17:23:46.878759 2680 container_manager_linux.go:303] "Creating device plugin manager" Sep 12 17:23:46.878808 kubelet[2680]: I0912 17:23:46.878799 2680 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:23:46.878938 kubelet[2680]: I0912 17:23:46.878925 2680 kubelet.go:480] "Attempting to sync node with API server" Sep 12 17:23:46.878938 kubelet[2680]: I0912 17:23:46.878938 2680 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 12 17:23:46.878983 kubelet[2680]: I0912 17:23:46.878958 2680 kubelet.go:386] "Adding apiserver pod source" Sep 12 17:23:46.878983 kubelet[2680]: I0912 17:23:46.878969 2680 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 12 17:23:46.879620 kubelet[2680]: I0912 17:23:46.879603 2680 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 12 17:23:46.880140 kubelet[2680]: I0912 17:23:46.880123 2680 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 12 17:23:46.883243 kubelet[2680]: I0912 17:23:46.883218 2680 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 12 17:23:46.883313 kubelet[2680]: I0912 17:23:46.883265 2680 server.go:1289] "Started kubelet" Sep 12 17:23:46.883826 kubelet[2680]: I0912 17:23:46.883782 2680 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 12 17:23:46.886622 kubelet[2680]: I0912 17:23:46.884043 2680 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 12 17:23:46.886622 kubelet[2680]: I0912 17:23:46.883638 2680 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 12 17:23:46.887703 kubelet[2680]: I0912 17:23:46.887680 2680 server.go:317] "Adding debug handlers to kubelet server" Sep 12 17:23:46.888050 
kubelet[2680]: I0912 17:23:46.888011 2680 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 12 17:23:46.889869 kubelet[2680]: I0912 17:23:46.889836 2680 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 12 17:23:46.890438 kubelet[2680]: I0912 17:23:46.890418 2680 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 12 17:23:46.890640 kubelet[2680]: E0912 17:23:46.890622 2680 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 12 17:23:46.897748 kubelet[2680]: I0912 17:23:46.897190 2680 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 12 17:23:46.899365 kubelet[2680]: I0912 17:23:46.898555 2680 reconciler.go:26] "Reconciler: start to sync state" Sep 12 17:23:46.910193 kubelet[2680]: E0912 17:23:46.910153 2680 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 12 17:23:46.910279 kubelet[2680]: I0912 17:23:46.910196 2680 factory.go:223] Registration of the containerd container factory successfully Sep 12 17:23:46.910279 kubelet[2680]: I0912 17:23:46.910213 2680 factory.go:223] Registration of the systemd container factory successfully Sep 12 17:23:46.910322 kubelet[2680]: I0912 17:23:46.910282 2680 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 12 17:23:46.910799 kubelet[2680]: I0912 17:23:46.910765 2680 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 12 17:23:46.913197 kubelet[2680]: I0912 17:23:46.913169 2680 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 12 17:23:46.913197 kubelet[2680]: I0912 17:23:46.913199 2680 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 12 17:23:46.913288 kubelet[2680]: I0912 17:23:46.913218 2680 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 12 17:23:46.913288 kubelet[2680]: I0912 17:23:46.913225 2680 kubelet.go:2436] "Starting kubelet main sync loop" Sep 12 17:23:46.913288 kubelet[2680]: E0912 17:23:46.913265 2680 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 12 17:23:46.947198 kubelet[2680]: I0912 17:23:46.947168 2680 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 12 17:23:46.947198 kubelet[2680]: I0912 17:23:46.947190 2680 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 12 17:23:46.947324 kubelet[2680]: I0912 17:23:46.947213 2680 state_mem.go:36] "Initialized new in-memory state store" Sep 12 17:23:46.947386 kubelet[2680]: I0912 17:23:46.947369 2680 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 12 17:23:46.947413 kubelet[2680]: I0912 17:23:46.947384 2680 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 12 17:23:46.947413 kubelet[2680]: I0912 17:23:46.947400 2680 policy_none.go:49] "None policy: Start" Sep 12 17:23:46.947413 kubelet[2680]: I0912 17:23:46.947409 2680 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 12 17:23:46.947486 kubelet[2680]: I0912 17:23:46.947418 2680 state_mem.go:35] "Initializing new in-memory state store" Sep 12 17:23:46.947520 kubelet[2680]: I0912 17:23:46.947507 2680 state_mem.go:75] "Updated machine memory state" Sep 12 17:23:46.951168 kubelet[2680]: E0912 17:23:46.951132 2680 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 12 17:23:46.951304 kubelet[2680]: I0912 
17:23:46.951283 2680 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 12 17:23:46.951543 kubelet[2680]: I0912 17:23:46.951305 2680 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 12 17:23:46.951763 kubelet[2680]: I0912 17:23:46.951741 2680 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 12 17:23:46.953050 kubelet[2680]: E0912 17:23:46.953024 2680 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Sep 12 17:23:47.014648 kubelet[2680]: I0912 17:23:47.014607 2680 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:47.014842 kubelet[2680]: I0912 17:23:47.014762 2680 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:47.015031 kubelet[2680]: I0912 17:23:47.015002 2680 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:23:47.028489 kubelet[2680]: E0912 17:23:47.028442 2680 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:47.029267 kubelet[2680]: E0912 17:23:47.029234 2680 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:23:47.029344 kubelet[2680]: E0912 17:23:47.029324 2680 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:47.056646 kubelet[2680]: I0912 17:23:47.056623 2680 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 12 17:23:47.091378 kubelet[2680]: I0912 17:23:47.091331 2680 kubelet_node_status.go:124] "Node was 
previously registered" node="localhost" Sep 12 17:23:47.091483 kubelet[2680]: I0912 17:23:47.091428 2680 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 12 17:23:47.099887 kubelet[2680]: I0912 17:23:47.099857 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/61ce0d458e7633c7bcf73838cc00c318-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"61ce0d458e7633c7bcf73838cc00c318\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:47.099887 kubelet[2680]: I0912 17:23:47.099891 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:47.100066 kubelet[2680]: I0912 17:23:47.099910 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:47.100066 kubelet[2680]: I0912 17:23:47.099927 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:47.100066 kubelet[2680]: I0912 17:23:47.099974 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/7b968cf906b2d9d713a362c43868bef2-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"7b968cf906b2d9d713a362c43868bef2\") " pod="kube-system/kube-scheduler-localhost" Sep 12 17:23:47.100066 kubelet[2680]: I0912 17:23:47.100011 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/61ce0d458e7633c7bcf73838cc00c318-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"61ce0d458e7633c7bcf73838cc00c318\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:47.100066 kubelet[2680]: I0912 17:23:47.100040 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/61ce0d458e7633c7bcf73838cc00c318-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"61ce0d458e7633c7bcf73838cc00c318\") " pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:47.100216 kubelet[2680]: I0912 17:23:47.100060 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:47.100216 kubelet[2680]: I0912 17:23:47.100081 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b678d5c6713e936e66aa5bb73166297e-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"b678d5c6713e936e66aa5bb73166297e\") " pod="kube-system/kube-controller-manager-localhost" Sep 12 17:23:47.329074 kubelet[2680]: E0912 17:23:47.328882 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:47.331598 kubelet[2680]: E0912 17:23:47.329889 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:47.331598 kubelet[2680]: E0912 17:23:47.329978 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:47.349507 sudo[2719]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 12 17:23:47.350228 sudo[2719]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 12 17:23:47.711591 sudo[2719]: pam_unix(sudo:session): session closed for user root Sep 12 17:23:47.879589 kubelet[2680]: I0912 17:23:47.879507 2680 apiserver.go:52] "Watching apiserver" Sep 12 17:23:47.899247 kubelet[2680]: I0912 17:23:47.899198 2680 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 12 17:23:47.929264 kubelet[2680]: I0912 17:23:47.929233 2680 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 12 17:23:47.929789 kubelet[2680]: I0912 17:23:47.929767 2680 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:47.930054 kubelet[2680]: E0912 17:23:47.930032 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:47.938855 kubelet[2680]: E0912 17:23:47.938816 2680 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 12 17:23:47.939371 kubelet[2680]: E0912 17:23:47.939344 2680 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:47.939555 kubelet[2680]: E0912 17:23:47.939523 2680 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 12 17:23:47.942582 kubelet[2680]: E0912 17:23:47.939663 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:47.958977 kubelet[2680]: I0912 17:23:47.958902 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.958887337 podStartE2EDuration="2.958887337s" podCreationTimestamp="2025-09-12 17:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:47.957858463 +0000 UTC m=+1.138566619" watchObservedRunningTime="2025-09-12 17:23:47.958887337 +0000 UTC m=+1.139595453" Sep 12 17:23:47.974234 kubelet[2680]: I0912 17:23:47.974100 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=2.974083218 podStartE2EDuration="2.974083218s" podCreationTimestamp="2025-09-12 17:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:47.966196482 +0000 UTC m=+1.146904598" watchObservedRunningTime="2025-09-12 17:23:47.974083218 +0000 UTC m=+1.154791334" Sep 12 17:23:47.974452 kubelet[2680]: I0912 17:23:47.974225 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.974220647 podStartE2EDuration="2.974220647s" podCreationTimestamp="2025-09-12 
17:23:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:47.974202728 +0000 UTC m=+1.154910884" watchObservedRunningTime="2025-09-12 17:23:47.974220647 +0000 UTC m=+1.154928763" Sep 12 17:23:48.933789 kubelet[2680]: E0912 17:23:48.933756 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:48.934887 kubelet[2680]: E0912 17:23:48.934860 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:23:49.787245 sudo[1733]: pam_unix(sudo:session): session closed for user root Sep 12 17:23:49.788980 sshd[1732]: Connection closed by 10.0.0.1 port 44548 Sep 12 17:23:49.789452 sshd-session[1729]: pam_unix(sshd:session): session closed for user core Sep 12 17:23:49.792891 systemd[1]: sshd@6-10.0.0.105:22-10.0.0.1:44548.service: Deactivated successfully. Sep 12 17:23:49.795489 systemd[1]: session-7.scope: Deactivated successfully. Sep 12 17:23:49.795717 systemd[1]: session-7.scope: Consumed 7.453s CPU time, 259.1M memory peak. Sep 12 17:23:49.797034 systemd-logind[1498]: Session 7 logged out. Waiting for processes to exit. Sep 12 17:23:49.798501 systemd-logind[1498]: Removed session 7. Sep 12 17:23:52.258299 kubelet[2680]: I0912 17:23:52.258266 2680 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 12 17:23:52.259042 containerd[1518]: time="2025-09-12T17:23:52.258992698Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Sep 12 17:23:52.259282 kubelet[2680]: I0912 17:23:52.259149 2680 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 12 17:23:53.369668 systemd[1]: Created slice kubepods-besteffort-pod3bc3148f_91dc_4d8a_9c70_c07ae292107e.slice - libcontainer container kubepods-besteffort-pod3bc3148f_91dc_4d8a_9c70_c07ae292107e.slice. Sep 12 17:23:53.385837 systemd[1]: Created slice kubepods-burstable-podde7575b0_5296_4950_86fa_e9171e9de4b5.slice - libcontainer container kubepods-burstable-podde7575b0_5296_4950_86fa_e9171e9de4b5.slice. Sep 12 17:23:53.436350 kubelet[2680]: I0912 17:23:53.436308 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3bc3148f-91dc-4d8a-9c70-c07ae292107e-lib-modules\") pod \"kube-proxy-sqvnv\" (UID: \"3bc3148f-91dc-4d8a-9c70-c07ae292107e\") " pod="kube-system/kube-proxy-sqvnv" Sep 12 17:23:53.436350 kubelet[2680]: I0912 17:23:53.436343 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-bpf-maps\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436350 kubelet[2680]: I0912 17:23:53.436362 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-xtables-lock\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436691 kubelet[2680]: I0912 17:23:53.436374 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de7575b0-5296-4950-86fa-e9171e9de4b5-clustermesh-secrets\") pod \"cilium-w9pkm\" (UID: 
\"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436691 kubelet[2680]: I0912 17:23:53.436415 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-config-path\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436691 kubelet[2680]: I0912 17:23:53.436482 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lddcj\" (UniqueName: \"kubernetes.io/projected/3bc3148f-91dc-4d8a-9c70-c07ae292107e-kube-api-access-lddcj\") pod \"kube-proxy-sqvnv\" (UID: \"3bc3148f-91dc-4d8a-9c70-c07ae292107e\") " pod="kube-system/kube-proxy-sqvnv" Sep 12 17:23:53.436691 kubelet[2680]: I0912 17:23:53.436500 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-hostproc\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436691 kubelet[2680]: I0912 17:23:53.436516 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3bc3148f-91dc-4d8a-9c70-c07ae292107e-xtables-lock\") pod \"kube-proxy-sqvnv\" (UID: \"3bc3148f-91dc-4d8a-9c70-c07ae292107e\") " pod="kube-system/kube-proxy-sqvnv" Sep 12 17:23:53.436852 kubelet[2680]: I0912 17:23:53.436533 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-run\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436852 
kubelet[2680]: I0912 17:23:53.436551 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-etc-cni-netd\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436852 kubelet[2680]: I0912 17:23:53.436564 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-lib-modules\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436852 kubelet[2680]: I0912 17:23:53.436578 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-host-proc-sys-kernel\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436852 kubelet[2680]: I0912 17:23:53.436592 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brdgx\" (UniqueName: \"kubernetes.io/projected/de7575b0-5296-4950-86fa-e9171e9de4b5-kube-api-access-brdgx\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436852 kubelet[2680]: I0912 17:23:53.436615 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3bc3148f-91dc-4d8a-9c70-c07ae292107e-kube-proxy\") pod \"kube-proxy-sqvnv\" (UID: \"3bc3148f-91dc-4d8a-9c70-c07ae292107e\") " pod="kube-system/kube-proxy-sqvnv" Sep 12 17:23:53.436975 kubelet[2680]: I0912 17:23:53.436639 2680 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-cgroup\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436975 kubelet[2680]: I0912 17:23:53.436654 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cni-path\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436975 kubelet[2680]: I0912 17:23:53.436673 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-host-proc-sys-net\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.436975 kubelet[2680]: I0912 17:23:53.436704 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de7575b0-5296-4950-86fa-e9171e9de4b5-hubble-tls\") pod \"cilium-w9pkm\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") " pod="kube-system/cilium-w9pkm" Sep 12 17:23:53.491091 systemd[1]: Created slice kubepods-besteffort-pod38658b31_5065_47eb_8cfd_99f24154568b.slice - libcontainer container kubepods-besteffort-pod38658b31_5065_47eb_8cfd_99f24154568b.slice. 
Sep 12 17:23:53.638616 kubelet[2680]: I0912 17:23:53.638476 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38658b31-5065-47eb-8cfd-99f24154568b-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-zmc2f\" (UID: \"38658b31-5065-47eb-8cfd-99f24154568b\") " pod="kube-system/cilium-operator-6c4d7847fc-zmc2f"
Sep 12 17:23:53.638827 kubelet[2680]: I0912 17:23:53.638524 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swbtt\" (UniqueName: \"kubernetes.io/projected/38658b31-5065-47eb-8cfd-99f24154568b-kube-api-access-swbtt\") pod \"cilium-operator-6c4d7847fc-zmc2f\" (UID: \"38658b31-5065-47eb-8cfd-99f24154568b\") " pod="kube-system/cilium-operator-6c4d7847fc-zmc2f"
Sep 12 17:23:53.683739 kubelet[2680]: E0912 17:23:53.683694 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:53.684935 containerd[1518]: time="2025-09-12T17:23:53.684870788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sqvnv,Uid:3bc3148f-91dc-4d8a-9c70-c07ae292107e,Namespace:kube-system,Attempt:0,}"
Sep 12 17:23:53.689395 kubelet[2680]: E0912 17:23:53.689367 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:53.689928 containerd[1518]: time="2025-09-12T17:23:53.689846928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9pkm,Uid:de7575b0-5296-4950-86fa-e9171e9de4b5,Namespace:kube-system,Attempt:0,}"
Sep 12 17:23:53.711472 containerd[1518]: time="2025-09-12T17:23:53.711421271Z" level=info msg="connecting to shim 75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7" address="unix:///run/containerd/s/9e52a1fe80b6f664e6b09e146d6c5aac6433131abb4edf9e5faafb6dade6be11" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:23:53.712661 containerd[1518]: time="2025-09-12T17:23:53.712622199Z" level=info msg="connecting to shim 4e4b962e1da50792bd8d8b791ee29210055674c312ac04af8c8204dbaec2a858" address="unix:///run/containerd/s/dd54dd33e846374b2b1dcf39d7a743eda07b68c2334ac2b9950909b7b932d299" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:23:53.733927 systemd[1]: Started cri-containerd-75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7.scope - libcontainer container 75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7.
Sep 12 17:23:53.738006 systemd[1]: Started cri-containerd-4e4b962e1da50792bd8d8b791ee29210055674c312ac04af8c8204dbaec2a858.scope - libcontainer container 4e4b962e1da50792bd8d8b791ee29210055674c312ac04af8c8204dbaec2a858.
Sep 12 17:23:53.770417 containerd[1518]: time="2025-09-12T17:23:53.770318651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-w9pkm,Uid:de7575b0-5296-4950-86fa-e9171e9de4b5,Namespace:kube-system,Attempt:0,} returns sandbox id \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\""
Sep 12 17:23:53.771479 kubelet[2680]: E0912 17:23:53.771426 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:53.772922 containerd[1518]: time="2025-09-12T17:23:53.772858978Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Sep 12 17:23:53.773456 containerd[1518]: time="2025-09-12T17:23:53.773424864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-sqvnv,Uid:3bc3148f-91dc-4d8a-9c70-c07ae292107e,Namespace:kube-system,Attempt:0,} returns sandbox id \"4e4b962e1da50792bd8d8b791ee29210055674c312ac04af8c8204dbaec2a858\""
Sep 12 17:23:53.774273 kubelet[2680]: E0912 17:23:53.774218 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:53.778133 containerd[1518]: time="2025-09-12T17:23:53.778086384Z" level=info msg="CreateContainer within sandbox \"4e4b962e1da50792bd8d8b791ee29210055674c312ac04af8c8204dbaec2a858\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Sep 12 17:23:53.787121 containerd[1518]: time="2025-09-12T17:23:53.787038926Z" level=info msg="Container ebcf7c6f82a44c497652a22ba3f3554f012f3c22fd7daad5bae2b6812684486c: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:23:53.794401 containerd[1518]: time="2025-09-12T17:23:53.794331967Z" level=info msg="CreateContainer within sandbox \"4e4b962e1da50792bd8d8b791ee29210055674c312ac04af8c8204dbaec2a858\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ebcf7c6f82a44c497652a22ba3f3554f012f3c22fd7daad5bae2b6812684486c\""
Sep 12 17:23:53.794815 kubelet[2680]: E0912 17:23:53.794715 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:53.795012 containerd[1518]: time="2025-09-12T17:23:53.794983328Z" level=info msg="StartContainer for \"ebcf7c6f82a44c497652a22ba3f3554f012f3c22fd7daad5bae2b6812684486c\""
Sep 12 17:23:53.796447 containerd[1518]: time="2025-09-12T17:23:53.796393643Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zmc2f,Uid:38658b31-5065-47eb-8cfd-99f24154568b,Namespace:kube-system,Attempt:0,}"
Sep 12 17:23:53.796946 containerd[1518]: time="2025-09-12T17:23:53.796912052Z" level=info msg="connecting to shim ebcf7c6f82a44c497652a22ba3f3554f012f3c22fd7daad5bae2b6812684486c" address="unix:///run/containerd/s/dd54dd33e846374b2b1dcf39d7a743eda07b68c2334ac2b9950909b7b932d299" protocol=ttrpc version=3
Sep 12 17:23:53.813056 containerd[1518]: time="2025-09-12T17:23:53.813008444Z" level=info msg="connecting to shim dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57" address="unix:///run/containerd/s/bad6575f21173f7cadd857c82164e78b116f82e3607634257daed48a151931db" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:23:53.816925 systemd[1]: Started cri-containerd-ebcf7c6f82a44c497652a22ba3f3554f012f3c22fd7daad5bae2b6812684486c.scope - libcontainer container ebcf7c6f82a44c497652a22ba3f3554f012f3c22fd7daad5bae2b6812684486c.
Sep 12 17:23:53.834923 systemd[1]: Started cri-containerd-dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57.scope - libcontainer container dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57.
Sep 12 17:23:53.866627 containerd[1518]: time="2025-09-12T17:23:53.866588423Z" level=info msg="StartContainer for \"ebcf7c6f82a44c497652a22ba3f3554f012f3c22fd7daad5bae2b6812684486c\" returns successfully"
Sep 12 17:23:53.873273 containerd[1518]: time="2025-09-12T17:23:53.873218065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-zmc2f,Uid:38658b31-5065-47eb-8cfd-99f24154568b,Namespace:kube-system,Attempt:0,} returns sandbox id \"dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57\""
Sep 12 17:23:53.873922 kubelet[2680]: E0912 17:23:53.873898 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:53.954241 kubelet[2680]: E0912 17:23:53.954111 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:53.971454 kubelet[2680]: I0912 17:23:53.970975 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-sqvnv" podStartSLOduration=0.970959509 podStartE2EDuration="970.959509ms" podCreationTimestamp="2025-09-12 17:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:23:53.966961189 +0000 UTC m=+7.147669305" watchObservedRunningTime="2025-09-12 17:23:53.970959509 +0000 UTC m=+7.151667625"
Sep 12 17:23:54.703052 kubelet[2680]: E0912 17:23:54.702993 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:54.956606 kubelet[2680]: E0912 17:23:54.956346 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:56.378446 kubelet[2680]: E0912 17:23:56.378259 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:56.960776 kubelet[2680]: E0912 17:23:56.960714 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:57.776663 kubelet[2680]: E0912 17:23:57.776615 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:57.963141 kubelet[2680]: E0912 17:23:57.963107 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:23:57.963390 kubelet[2680]: E0912 17:23:57.963328 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:00.494680 update_engine[1500]: I20250912 17:24:00.494616  1500 update_attempter.cc:509] Updating boot flags...
Sep 12 17:24:05.484032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2570511003.mount: Deactivated successfully.
Sep 12 17:24:07.063907 containerd[1518]: time="2025-09-12T17:24:07.063704751Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:07.065543 containerd[1518]: time="2025-09-12T17:24:07.065494858Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710"
Sep 12 17:24:07.069133 containerd[1518]: time="2025-09-12T17:24:07.068932995Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:07.083385 containerd[1518]: time="2025-09-12T17:24:07.083338084Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 13.31040939s"
Sep 12 17:24:07.083545 containerd[1518]: time="2025-09-12T17:24:07.083529958Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\""
Sep 12 17:24:07.084539 containerd[1518]: time="2025-09-12T17:24:07.084512689Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Sep 12 17:24:07.096084 containerd[1518]: time="2025-09-12T17:24:07.096034704Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:24:07.121337 containerd[1518]: time="2025-09-12T17:24:07.121276389Z" level=info msg="Container d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:24:07.127337 containerd[1518]: time="2025-09-12T17:24:07.127286809Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\""
Sep 12 17:24:07.128804 containerd[1518]: time="2025-09-12T17:24:07.128767765Z" level=info msg="StartContainer for \"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\""
Sep 12 17:24:07.129674 containerd[1518]: time="2025-09-12T17:24:07.129648418Z" level=info msg="connecting to shim d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac" address="unix:///run/containerd/s/9e52a1fe80b6f664e6b09e146d6c5aac6433131abb4edf9e5faafb6dade6be11" protocol=ttrpc version=3
Sep 12 17:24:07.175942 systemd[1]: Started cri-containerd-d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac.scope - libcontainer container d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac.
Sep 12 17:24:07.204636 containerd[1518]: time="2025-09-12T17:24:07.204529498Z" level=info msg="StartContainer for \"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\" returns successfully"
Sep 12 17:24:07.217764 systemd[1]: cri-containerd-d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac.scope: Deactivated successfully.
Sep 12 17:24:07.246128 containerd[1518]: time="2025-09-12T17:24:07.246015577Z" level=info msg="received exit event container_id:\"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\" id:\"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\" pid:3127 exited_at:{seconds:1757697847 nanos:236386545}"
Sep 12 17:24:07.246367 containerd[1518]: time="2025-09-12T17:24:07.246085655Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\" id:\"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\" pid:3127 exited_at:{seconds:1757697847 nanos:236386545}"
Sep 12 17:24:07.995894 kubelet[2680]: E0912 17:24:07.995430 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:08.002430 containerd[1518]: time="2025-09-12T17:24:08.001695172Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:24:08.023822 containerd[1518]: time="2025-09-12T17:24:08.023702582Z" level=info msg="Container 45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:24:08.057115 containerd[1518]: time="2025-09-12T17:24:08.057053188Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\""
Sep 12 17:24:08.057619 containerd[1518]: time="2025-09-12T17:24:08.057592772Z" level=info msg="StartContainer for \"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\""
Sep 12 17:24:08.058471 containerd[1518]: time="2025-09-12T17:24:08.058422508Z" level=info msg="connecting to shim 45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3" address="unix:///run/containerd/s/9e52a1fe80b6f664e6b09e146d6c5aac6433131abb4edf9e5faafb6dade6be11" protocol=ttrpc version=3
Sep 12 17:24:08.089107 systemd[1]: Started cri-containerd-45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3.scope - libcontainer container 45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3.
Sep 12 17:24:08.110208 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac-rootfs.mount: Deactivated successfully.
Sep 12 17:24:08.145524 containerd[1518]: time="2025-09-12T17:24:08.145472136Z" level=info msg="StartContainer for \"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\" returns successfully"
Sep 12 17:24:08.160030 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Sep 12 17:24:08.160282 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:24:08.160591 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:24:08.162610 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Sep 12 17:24:08.165592 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Sep 12 17:24:08.166110 systemd[1]: cri-containerd-45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3.scope: Deactivated successfully.
Sep 12 17:24:08.178464 containerd[1518]: time="2025-09-12T17:24:08.178415793Z" level=info msg="TaskExit event in podsandbox handler container_id:\"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\" id:\"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\" pid:3174 exited_at:{seconds:1757697848 nanos:177964046}"
Sep 12 17:24:08.178676 containerd[1518]: time="2025-09-12T17:24:08.178635227Z" level=info msg="received exit event container_id:\"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\" id:\"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\" pid:3174 exited_at:{seconds:1757697848 nanos:177964046}"
Sep 12 17:24:08.199776 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Sep 12 17:24:08.207116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3-rootfs.mount: Deactivated successfully.
Sep 12 17:24:08.998480 kubelet[2680]: E0912 17:24:08.998439 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:09.003264 containerd[1518]: time="2025-09-12T17:24:09.003218182Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:24:09.018810 containerd[1518]: time="2025-09-12T17:24:09.018266250Z" level=info msg="Container 46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:24:09.028324 containerd[1518]: time="2025-09-12T17:24:09.027272843Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\""
Sep 12 17:24:09.028324 containerd[1518]: time="2025-09-12T17:24:09.027937945Z" level=info msg="StartContainer for \"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\""
Sep 12 17:24:09.029713 containerd[1518]: time="2025-09-12T17:24:09.029673497Z" level=info msg="connecting to shim 46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6" address="unix:///run/containerd/s/9e52a1fe80b6f664e6b09e146d6c5aac6433131abb4edf9e5faafb6dade6be11" protocol=ttrpc version=3
Sep 12 17:24:09.061960 systemd[1]: Started cri-containerd-46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6.scope - libcontainer container 46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6.
Sep 12 17:24:09.097184 containerd[1518]: time="2025-09-12T17:24:09.097131367Z" level=info msg="StartContainer for \"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\" returns successfully"
Sep 12 17:24:09.099620 systemd[1]: cri-containerd-46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6.scope: Deactivated successfully.
Sep 12 17:24:09.101866 containerd[1518]: time="2025-09-12T17:24:09.101829838Z" level=info msg="received exit event container_id:\"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\" id:\"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\" pid:3225 exited_at:{seconds:1757697849 nanos:101531446}"
Sep 12 17:24:09.102060 containerd[1518]: time="2025-09-12T17:24:09.101937715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\" id:\"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\" pid:3225 exited_at:{seconds:1757697849 nanos:101531446}"
Sep 12 17:24:09.110714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223369221.mount: Deactivated successfully.
Sep 12 17:24:09.126508 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6-rootfs.mount: Deactivated successfully.
Sep 12 17:24:10.005114 kubelet[2680]: E0912 17:24:10.005043 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:10.013087 containerd[1518]: time="2025-09-12T17:24:10.012991664Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:24:10.036451 containerd[1518]: time="2025-09-12T17:24:10.036384369Z" level=info msg="Container f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:24:10.044105 containerd[1518]: time="2025-09-12T17:24:10.043966850Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\""
Sep 12 17:24:10.045560 containerd[1518]: time="2025-09-12T17:24:10.045424251Z" level=info msg="StartContainer for \"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\""
Sep 12 17:24:10.046678 containerd[1518]: time="2025-09-12T17:24:10.046557582Z" level=info msg="connecting to shim f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4" address="unix:///run/containerd/s/9e52a1fe80b6f664e6b09e146d6c5aac6433131abb4edf9e5faafb6dade6be11" protocol=ttrpc version=3
Sep 12 17:24:10.078970 systemd[1]: Started cri-containerd-f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4.scope - libcontainer container f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4.
Sep 12 17:24:10.130049 systemd[1]: cri-containerd-f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4.scope: Deactivated successfully.
Sep 12 17:24:10.131158 containerd[1518]: time="2025-09-12T17:24:10.131103478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\" id:\"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\" pid:3263 exited_at:{seconds:1757697850 nanos:130658850}"
Sep 12 17:24:10.134184 containerd[1518]: time="2025-09-12T17:24:10.131413030Z" level=info msg="received exit event container_id:\"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\" id:\"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\" pid:3263 exited_at:{seconds:1757697850 nanos:130658850}"
Sep 12 17:24:10.135859 containerd[1518]: time="2025-09-12T17:24:10.135800315Z" level=info msg="StartContainer for \"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\" returns successfully"
Sep 12 17:24:10.155308 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4-rootfs.mount: Deactivated successfully.
Sep 12 17:24:10.873939 containerd[1518]: time="2025-09-12T17:24:10.873886787Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:10.874446 containerd[1518]: time="2025-09-12T17:24:10.874417774Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Sep 12 17:24:10.875643 containerd[1518]: time="2025-09-12T17:24:10.875615382Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Sep 12 17:24:10.876984 containerd[1518]: time="2025-09-12T17:24:10.876825310Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.792184585s"
Sep 12 17:24:10.876984 containerd[1518]: time="2025-09-12T17:24:10.876857869Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Sep 12 17:24:10.881987 containerd[1518]: time="2025-09-12T17:24:10.881931576Z" level=info msg="CreateContainer within sandbox \"dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Sep 12 17:24:10.888140 containerd[1518]: time="2025-09-12T17:24:10.888089374Z" level=info msg="Container 9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:24:10.894515 containerd[1518]: time="2025-09-12T17:24:10.894411888Z" level=info msg="CreateContainer within sandbox \"dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\""
Sep 12 17:24:10.895016 containerd[1518]: time="2025-09-12T17:24:10.894992873Z" level=info msg="StartContainer for \"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\""
Sep 12 17:24:10.895993 containerd[1518]: time="2025-09-12T17:24:10.895909808Z" level=info msg="connecting to shim 9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250" address="unix:///run/containerd/s/bad6575f21173f7cadd857c82164e78b116f82e3607634257daed48a151931db" protocol=ttrpc version=3
Sep 12 17:24:10.917913 systemd[1]: Started cri-containerd-9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250.scope - libcontainer container 9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250.
Sep 12 17:24:10.948462 containerd[1518]: time="2025-09-12T17:24:10.948424708Z" level=info msg="StartContainer for \"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" returns successfully"
Sep 12 17:24:11.008552 kubelet[2680]: E0912 17:24:11.008521 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:11.014579 kubelet[2680]: E0912 17:24:11.014553 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:11.021551 containerd[1518]: time="2025-09-12T17:24:11.020634310Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:24:11.023764 kubelet[2680]: I0912 17:24:11.022889 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-zmc2f" podStartSLOduration=1.020451101 podStartE2EDuration="18.022870054s" podCreationTimestamp="2025-09-12 17:23:53 +0000 UTC" firstStartedPulling="2025-09-12 17:23:53.87546017 +0000 UTC m=+7.056168286" lastFinishedPulling="2025-09-12 17:24:10.877879163 +0000 UTC m=+24.058587239" observedRunningTime="2025-09-12 17:24:11.022019315 +0000 UTC m=+24.202727431" watchObservedRunningTime="2025-09-12 17:24:11.022870054 +0000 UTC m=+24.203578210"
Sep 12 17:24:11.041793 containerd[1518]: time="2025-09-12T17:24:11.041715018Z" level=info msg="Container 47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:24:11.055600 containerd[1518]: time="2025-09-12T17:24:11.055557669Z" level=info msg="CreateContainer within sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\""
Sep 12 17:24:11.056161 containerd[1518]: time="2025-09-12T17:24:11.056129894Z" level=info msg="StartContainer for \"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\""
Sep 12 17:24:11.057078 containerd[1518]: time="2025-09-12T17:24:11.057040591Z" level=info msg="connecting to shim 47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57" address="unix:///run/containerd/s/9e52a1fe80b6f664e6b09e146d6c5aac6433131abb4edf9e5faafb6dade6be11" protocol=ttrpc version=3
Sep 12 17:24:11.104999 systemd[1]: Started cri-containerd-47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57.scope - libcontainer container 47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57.
Sep 12 17:24:11.178810 containerd[1518]: time="2025-09-12T17:24:11.178013418Z" level=info msg="StartContainer for \"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" returns successfully"
Sep 12 17:24:11.268032 containerd[1518]: time="2025-09-12T17:24:11.267994788Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" id:\"c7dbd0348fd4a345e68b317fe9526a48fcfad5b6ebfe09aec82e67554616559f\" pid:3379 exited_at:{seconds:1757697851 nanos:267482201}"
Sep 12 17:24:11.357062 kubelet[2680]: I0912 17:24:11.357028 2680 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Sep 12 17:24:11.426718 systemd[1]: Created slice kubepods-burstable-pod2a6cce25_2549_4da2_ace1_0142e05517ea.slice - libcontainer container kubepods-burstable-pod2a6cce25_2549_4da2_ace1_0142e05517ea.slice.
Sep 12 17:24:11.451238 systemd[1]: Created slice kubepods-burstable-pod3c0ff26d_b4b9_49f7_87af_5f31561b40a7.slice - libcontainer container kubepods-burstable-pod3c0ff26d_b4b9_49f7_87af_5f31561b40a7.slice.
Sep 12 17:24:11.459999 kubelet[2680]: I0912 17:24:11.459951 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrlcb\" (UniqueName: \"kubernetes.io/projected/3c0ff26d-b4b9-49f7-87af-5f31561b40a7-kube-api-access-mrlcb\") pod \"coredns-674b8bbfcf-w9sbt\" (UID: \"3c0ff26d-b4b9-49f7-87af-5f31561b40a7\") " pod="kube-system/coredns-674b8bbfcf-w9sbt"
Sep 12 17:24:11.459999 kubelet[2680]: I0912 17:24:11.459992 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3c0ff26d-b4b9-49f7-87af-5f31561b40a7-config-volume\") pod \"coredns-674b8bbfcf-w9sbt\" (UID: \"3c0ff26d-b4b9-49f7-87af-5f31561b40a7\") " pod="kube-system/coredns-674b8bbfcf-w9sbt"
Sep 12 17:24:11.460170 kubelet[2680]: I0912 17:24:11.460017 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2a6cce25-2549-4da2-ace1-0142e05517ea-config-volume\") pod \"coredns-674b8bbfcf-6mswj\" (UID: \"2a6cce25-2549-4da2-ace1-0142e05517ea\") " pod="kube-system/coredns-674b8bbfcf-6mswj"
Sep 12 17:24:11.460170 kubelet[2680]: I0912 17:24:11.460032 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9gzg8\" (UniqueName: \"kubernetes.io/projected/2a6cce25-2549-4da2-ace1-0142e05517ea-kube-api-access-9gzg8\") pod \"coredns-674b8bbfcf-6mswj\" (UID: \"2a6cce25-2549-4da2-ace1-0142e05517ea\") " pod="kube-system/coredns-674b8bbfcf-6mswj"
Sep 12 17:24:11.730353 kubelet[2680]: E0912 17:24:11.729874 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:11.730963 containerd[1518]: time="2025-09-12T17:24:11.730906946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mswj,Uid:2a6cce25-2549-4da2-ace1-0142e05517ea,Namespace:kube-system,Attempt:0,}"
Sep 12 17:24:11.755696 kubelet[2680]: E0912 17:24:11.755666 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:11.758740 containerd[1518]: time="2025-09-12T17:24:11.758031142Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9sbt,Uid:3c0ff26d-b4b9-49f7-87af-5f31561b40a7,Namespace:kube-system,Attempt:0,}"
Sep 12 17:24:12.021893 kubelet[2680]: E0912 17:24:12.021425 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:12.021893 kubelet[2680]: E0912 17:24:12.021569 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:12.036767 kubelet[2680]: I0912 17:24:12.036539 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-w9pkm" podStartSLOduration=5.724724372 podStartE2EDuration="19.036524549s" podCreationTimestamp="2025-09-12 17:23:53 +0000 UTC" firstStartedPulling="2025-09-12 17:23:53.772535677 +0000 UTC m=+6.953243753" lastFinishedPulling="2025-09-12 17:24:07.084335814 +0000 UTC m=+20.265043930" observedRunningTime="2025-09-12 17:24:12.036196077 +0000 UTC m=+25.216904193" watchObservedRunningTime="2025-09-12 17:24:12.036524549 +0000 UTC m=+25.217232665"
Sep 12 17:24:13.027540 kubelet[2680]: E0912 17:24:13.027155 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:14.029213 kubelet[2680]: E0912 17:24:14.029171 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:15.032217 kubelet[2680]: E0912 17:24:15.032176 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:24:15.258630 systemd-networkd[1429]: cilium_host: Link UP
Sep 12 17:24:15.259787 systemd-networkd[1429]: cilium_net: Link UP
Sep 12 17:24:15.260671 systemd-networkd[1429]: cilium_net: Gained carrier
Sep 12 17:24:15.261301 systemd-networkd[1429]: cilium_host: Gained carrier
Sep 12 17:24:15.343827 systemd-networkd[1429]: cilium_vxlan: Link UP
Sep 12 17:24:15.343834 systemd-networkd[1429]: cilium_vxlan: Gained carrier
Sep 12 17:24:15.615773 kernel: NET: Registered PF_ALG protocol family
Sep 12 17:24:16.180132 systemd-networkd[1429]: lxc_health: Link UP
Sep 12 17:24:16.181410 systemd-networkd[1429]: cilium_net: Gained IPv6LL
Sep 12 17:24:16.181841 systemd-networkd[1429]: cilium_host: Gained IPv6LL
Sep 12 17:24:16.183122 systemd-networkd[1429]: lxc_health: Gained carrier
Sep 12 17:24:16.316010 systemd-networkd[1429]: lxc3dd6b456ea53: Link UP
Sep 12 17:24:16.321861 kernel: eth0: renamed from tmp2fd34
Sep 12 17:24:16.321947 kernel: eth0: renamed from tmp9aede
Sep 12 17:24:16.321284 systemd-networkd[1429]: lxc57f9b67470cb: Link UP
Sep 12 17:24:16.321473 systemd-networkd[1429]: lxc57f9b67470cb: Gained carrier
Sep 12 17:24:16.325019 systemd-networkd[1429]: lxc3dd6b456ea53: Gained carrier
Sep 12 17:24:16.368889 systemd-networkd[1429]: cilium_vxlan: Gained IPv6LL
Sep 12 17:24:16.725792 systemd[1]: Started sshd@7-10.0.0.105:22-10.0.0.1:45680.service - OpenSSH per-connection server daemon (10.0.0.1:45680).
Sep 12 17:24:16.792754 sshd[3855]: Accepted publickey for core from 10.0.0.1 port 45680 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:16.794452 sshd-session[3855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:16.798465 systemd-logind[1498]: New session 8 of user core. Sep 12 17:24:16.805878 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 12 17:24:16.947712 sshd[3858]: Connection closed by 10.0.0.1 port 45680 Sep 12 17:24:16.947627 sshd-session[3855]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:16.953127 systemd[1]: sshd@7-10.0.0.105:22-10.0.0.1:45680.service: Deactivated successfully. Sep 12 17:24:16.956372 systemd[1]: session-8.scope: Deactivated successfully. Sep 12 17:24:16.958989 systemd-logind[1498]: Session 8 logged out. Waiting for processes to exit. Sep 12 17:24:16.961375 systemd-logind[1498]: Removed session 8. Sep 12 17:24:17.455926 systemd-networkd[1429]: lxc_health: Gained IPv6LL Sep 12 17:24:17.696960 kubelet[2680]: E0912 17:24:17.696906 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:17.776923 systemd-networkd[1429]: lxc57f9b67470cb: Gained IPv6LL Sep 12 17:24:17.967902 systemd-networkd[1429]: lxc3dd6b456ea53: Gained IPv6LL Sep 12 17:24:18.040291 kubelet[2680]: E0912 17:24:18.037690 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:20.099710 containerd[1518]: time="2025-09-12T17:24:20.099646047Z" level=info msg="connecting to shim 2fd3429316446c749f6ff7ee63e55529216a09053878f1e1d16638ffce050ac7" address="unix:///run/containerd/s/29bc60c394b91a516e76a56d7c9ada2ca0a9b4c9d9b77fdb0d82c566157f0a51" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:20.108757 
containerd[1518]: time="2025-09-12T17:24:20.107900737Z" level=info msg="connecting to shim 9aede56941bfc177e9e12717289e3cb55517f5029728e50c0e6889042565acfd" address="unix:///run/containerd/s/e8b3c85a4ca69d1a7676a7ea0a7f3e9a64ebe73eea24c0717e93f2867bb734ab" namespace=k8s.io protocol=ttrpc version=3 Sep 12 17:24:20.135931 systemd[1]: Started cri-containerd-9aede56941bfc177e9e12717289e3cb55517f5029728e50c0e6889042565acfd.scope - libcontainer container 9aede56941bfc177e9e12717289e3cb55517f5029728e50c0e6889042565acfd. Sep 12 17:24:20.147438 systemd-resolved[1349]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:24:20.164942 systemd[1]: Started cri-containerd-2fd3429316446c749f6ff7ee63e55529216a09053878f1e1d16638ffce050ac7.scope - libcontainer container 2fd3429316446c749f6ff7ee63e55529216a09053878f1e1d16638ffce050ac7. Sep 12 17:24:20.170168 containerd[1518]: time="2025-09-12T17:24:20.170128922Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-w9sbt,Uid:3c0ff26d-b4b9-49f7-87af-5f31561b40a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"9aede56941bfc177e9e12717289e3cb55517f5029728e50c0e6889042565acfd\"" Sep 12 17:24:20.171427 kubelet[2680]: E0912 17:24:20.171401 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:20.175263 containerd[1518]: time="2025-09-12T17:24:20.175228349Z" level=info msg="CreateContainer within sandbox \"9aede56941bfc177e9e12717289e3cb55517f5029728e50c0e6889042565acfd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:24:20.186980 containerd[1518]: time="2025-09-12T17:24:20.186194389Z" level=info msg="Container da3cd5a01e447f972de49ce832e00e7aeb5ae28193bf504fbd2559c93010cded: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:20.188096 systemd-resolved[1349]: Failed to determine the local 
hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 12 17:24:20.196536 containerd[1518]: time="2025-09-12T17:24:20.196478762Z" level=info msg="CreateContainer within sandbox \"9aede56941bfc177e9e12717289e3cb55517f5029728e50c0e6889042565acfd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"da3cd5a01e447f972de49ce832e00e7aeb5ae28193bf504fbd2559c93010cded\"" Sep 12 17:24:20.197233 containerd[1518]: time="2025-09-12T17:24:20.197206228Z" level=info msg="StartContainer for \"da3cd5a01e447f972de49ce832e00e7aeb5ae28193bf504fbd2559c93010cded\"" Sep 12 17:24:20.198111 containerd[1518]: time="2025-09-12T17:24:20.198088332Z" level=info msg="connecting to shim da3cd5a01e447f972de49ce832e00e7aeb5ae28193bf504fbd2559c93010cded" address="unix:///run/containerd/s/e8b3c85a4ca69d1a7676a7ea0a7f3e9a64ebe73eea24c0717e93f2867bb734ab" protocol=ttrpc version=3 Sep 12 17:24:20.212558 containerd[1518]: time="2025-09-12T17:24:20.212278554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6mswj,Uid:2a6cce25-2549-4da2-ace1-0142e05517ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fd3429316446c749f6ff7ee63e55529216a09053878f1e1d16638ffce050ac7\"" Sep 12 17:24:20.213530 kubelet[2680]: E0912 17:24:20.213043 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:20.217222 containerd[1518]: time="2025-09-12T17:24:20.217186304Z" level=info msg="CreateContainer within sandbox \"2fd3429316446c749f6ff7ee63e55529216a09053878f1e1d16638ffce050ac7\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 12 17:24:20.217963 systemd[1]: Started cri-containerd-da3cd5a01e447f972de49ce832e00e7aeb5ae28193bf504fbd2559c93010cded.scope - libcontainer container da3cd5a01e447f972de49ce832e00e7aeb5ae28193bf504fbd2559c93010cded. 
Sep 12 17:24:20.226779 containerd[1518]: time="2025-09-12T17:24:20.226229859Z" level=info msg="Container a8879ae9fef9da02c1cc0db4632513831fb1d2b162b80246f543191303628726: CDI devices from CRI Config.CDIDevices: []" Sep 12 17:24:20.233633 containerd[1518]: time="2025-09-12T17:24:20.233447648Z" level=info msg="CreateContainer within sandbox \"2fd3429316446c749f6ff7ee63e55529216a09053878f1e1d16638ffce050ac7\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a8879ae9fef9da02c1cc0db4632513831fb1d2b162b80246f543191303628726\"" Sep 12 17:24:20.234064 containerd[1518]: time="2025-09-12T17:24:20.234010717Z" level=info msg="StartContainer for \"a8879ae9fef9da02c1cc0db4632513831fb1d2b162b80246f543191303628726\"" Sep 12 17:24:20.235191 containerd[1518]: time="2025-09-12T17:24:20.235160216Z" level=info msg="connecting to shim a8879ae9fef9da02c1cc0db4632513831fb1d2b162b80246f543191303628726" address="unix:///run/containerd/s/29bc60c394b91a516e76a56d7c9ada2ca0a9b4c9d9b77fdb0d82c566157f0a51" protocol=ttrpc version=3 Sep 12 17:24:20.248298 containerd[1518]: time="2025-09-12T17:24:20.248264857Z" level=info msg="StartContainer for \"da3cd5a01e447f972de49ce832e00e7aeb5ae28193bf504fbd2559c93010cded\" returns successfully" Sep 12 17:24:20.255925 systemd[1]: Started cri-containerd-a8879ae9fef9da02c1cc0db4632513831fb1d2b162b80246f543191303628726.scope - libcontainer container a8879ae9fef9da02c1cc0db4632513831fb1d2b162b80246f543191303628726. 
Sep 12 17:24:20.288457 containerd[1518]: time="2025-09-12T17:24:20.288421045Z" level=info msg="StartContainer for \"a8879ae9fef9da02c1cc0db4632513831fb1d2b162b80246f543191303628726\" returns successfully" Sep 12 17:24:21.050153 kubelet[2680]: E0912 17:24:21.050084 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:21.055643 kubelet[2680]: E0912 17:24:21.055614 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:21.071513 kubelet[2680]: I0912 17:24:21.070190 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-w9sbt" podStartSLOduration=28.070175348 podStartE2EDuration="28.070175348s" podCreationTimestamp="2025-09-12 17:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:21.068659135 +0000 UTC m=+34.249367251" watchObservedRunningTime="2025-09-12 17:24:21.070175348 +0000 UTC m=+34.250883464" Sep 12 17:24:21.115168 kubelet[2680]: I0912 17:24:21.115043 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6mswj" podStartSLOduration=28.115025275 podStartE2EDuration="28.115025275s" podCreationTimestamp="2025-09-12 17:23:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:24:21.114255969 +0000 UTC m=+34.294964085" watchObservedRunningTime="2025-09-12 17:24:21.115025275 +0000 UTC m=+34.295733391" Sep 12 17:24:21.970655 systemd[1]: Started sshd@8-10.0.0.105:22-10.0.0.1:50046.service - OpenSSH per-connection server daemon (10.0.0.1:50046). 
Sep 12 17:24:22.058038 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 50046 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:22.058493 kubelet[2680]: E0912 17:24:22.058249 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:22.059083 kubelet[2680]: E0912 17:24:22.058959 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:22.061898 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:22.071802 systemd-logind[1498]: New session 9 of user core. Sep 12 17:24:22.080976 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 12 17:24:22.232323 sshd[4064]: Connection closed by 10.0.0.1 port 50046 Sep 12 17:24:22.232677 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:22.238361 systemd[1]: sshd@8-10.0.0.105:22-10.0.0.1:50046.service: Deactivated successfully. Sep 12 17:24:22.240421 systemd[1]: session-9.scope: Deactivated successfully. Sep 12 17:24:22.241170 systemd-logind[1498]: Session 9 logged out. Waiting for processes to exit. Sep 12 17:24:22.242221 systemd-logind[1498]: Removed session 9. 
Sep 12 17:24:23.060665 kubelet[2680]: E0912 17:24:23.060606 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:23.061080 kubelet[2680]: E0912 17:24:23.060753 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Sep 12 17:24:27.244228 systemd[1]: Started sshd@9-10.0.0.105:22-10.0.0.1:50052.service - OpenSSH per-connection server daemon (10.0.0.1:50052). Sep 12 17:24:27.309061 sshd[4081]: Accepted publickey for core from 10.0.0.1 port 50052 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:27.310508 sshd-session[4081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:27.315607 systemd-logind[1498]: New session 10 of user core. Sep 12 17:24:27.326034 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 12 17:24:27.440074 sshd[4084]: Connection closed by 10.0.0.1 port 50052 Sep 12 17:24:27.440402 sshd-session[4081]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:27.443899 systemd[1]: sshd@9-10.0.0.105:22-10.0.0.1:50052.service: Deactivated successfully. Sep 12 17:24:27.445551 systemd[1]: session-10.scope: Deactivated successfully. Sep 12 17:24:27.446288 systemd-logind[1498]: Session 10 logged out. Waiting for processes to exit. Sep 12 17:24:27.447229 systemd-logind[1498]: Removed session 10. Sep 12 17:24:32.457080 systemd[1]: Started sshd@10-10.0.0.105:22-10.0.0.1:34576.service - OpenSSH per-connection server daemon (10.0.0.1:34576). 
Sep 12 17:24:32.512126 sshd[4098]: Accepted publickey for core from 10.0.0.1 port 34576 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:32.513513 sshd-session[4098]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:32.517823 systemd-logind[1498]: New session 11 of user core. Sep 12 17:24:32.524951 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 12 17:24:32.633766 sshd[4101]: Connection closed by 10.0.0.1 port 34576 Sep 12 17:24:32.633495 sshd-session[4098]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:32.647880 systemd[1]: sshd@10-10.0.0.105:22-10.0.0.1:34576.service: Deactivated successfully. Sep 12 17:24:32.650224 systemd[1]: session-11.scope: Deactivated successfully. Sep 12 17:24:32.650957 systemd-logind[1498]: Session 11 logged out. Waiting for processes to exit. Sep 12 17:24:32.653422 systemd[1]: Started sshd@11-10.0.0.105:22-10.0.0.1:34584.service - OpenSSH per-connection server daemon (10.0.0.1:34584). Sep 12 17:24:32.654352 systemd-logind[1498]: Removed session 11. Sep 12 17:24:32.722415 sshd[4115]: Accepted publickey for core from 10.0.0.1 port 34584 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:32.724001 sshd-session[4115]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:32.727931 systemd-logind[1498]: New session 12 of user core. Sep 12 17:24:32.734908 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 12 17:24:32.899381 sshd[4118]: Connection closed by 10.0.0.1 port 34584 Sep 12 17:24:32.900146 sshd-session[4115]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:32.911900 systemd[1]: sshd@11-10.0.0.105:22-10.0.0.1:34584.service: Deactivated successfully. Sep 12 17:24:32.915216 systemd[1]: session-12.scope: Deactivated successfully. Sep 12 17:24:32.917060 systemd-logind[1498]: Session 12 logged out. Waiting for processes to exit. 
Sep 12 17:24:32.921090 systemd[1]: Started sshd@12-10.0.0.105:22-10.0.0.1:34586.service - OpenSSH per-connection server daemon (10.0.0.1:34586). Sep 12 17:24:32.925156 systemd-logind[1498]: Removed session 12. Sep 12 17:24:32.975582 sshd[4130]: Accepted publickey for core from 10.0.0.1 port 34586 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:32.976951 sshd-session[4130]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:32.981665 systemd-logind[1498]: New session 13 of user core. Sep 12 17:24:32.991929 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 12 17:24:33.104277 sshd[4133]: Connection closed by 10.0.0.1 port 34586 Sep 12 17:24:33.104614 sshd-session[4130]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:33.107978 systemd-logind[1498]: Session 13 logged out. Waiting for processes to exit. Sep 12 17:24:33.108184 systemd[1]: sshd@12-10.0.0.105:22-10.0.0.1:34586.service: Deactivated successfully. Sep 12 17:24:33.110657 systemd[1]: session-13.scope: Deactivated successfully. Sep 12 17:24:33.112513 systemd-logind[1498]: Removed session 13. Sep 12 17:24:38.125264 systemd[1]: Started sshd@13-10.0.0.105:22-10.0.0.1:34598.service - OpenSSH per-connection server daemon (10.0.0.1:34598). Sep 12 17:24:38.207504 sshd[4146]: Accepted publickey for core from 10.0.0.1 port 34598 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:38.209634 sshd-session[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:38.216491 systemd-logind[1498]: New session 14 of user core. Sep 12 17:24:38.228950 systemd[1]: Started session-14.scope - Session 14 of User core. 
Sep 12 17:24:38.361955 sshd[4149]: Connection closed by 10.0.0.1 port 34598 Sep 12 17:24:38.362327 sshd-session[4146]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:38.367157 systemd[1]: sshd@13-10.0.0.105:22-10.0.0.1:34598.service: Deactivated successfully. Sep 12 17:24:38.368946 systemd[1]: session-14.scope: Deactivated successfully. Sep 12 17:24:38.370183 systemd-logind[1498]: Session 14 logged out. Waiting for processes to exit. Sep 12 17:24:38.371343 systemd-logind[1498]: Removed session 14. Sep 12 17:24:43.379642 systemd[1]: Started sshd@14-10.0.0.105:22-10.0.0.1:46458.service - OpenSSH per-connection server daemon (10.0.0.1:46458). Sep 12 17:24:43.454082 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 46458 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:43.457259 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:43.464816 systemd-logind[1498]: New session 15 of user core. Sep 12 17:24:43.484995 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 12 17:24:43.613358 sshd[4165]: Connection closed by 10.0.0.1 port 46458 Sep 12 17:24:43.613959 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:43.631652 systemd[1]: sshd@14-10.0.0.105:22-10.0.0.1:46458.service: Deactivated successfully. Sep 12 17:24:43.633424 systemd[1]: session-15.scope: Deactivated successfully. Sep 12 17:24:43.635479 systemd-logind[1498]: Session 15 logged out. Waiting for processes to exit. Sep 12 17:24:43.638143 systemd[1]: Started sshd@15-10.0.0.105:22-10.0.0.1:46470.service - OpenSSH per-connection server daemon (10.0.0.1:46470). Sep 12 17:24:43.640761 systemd-logind[1498]: Removed session 15. 
Sep 12 17:24:43.702019 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 46470 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:43.703469 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:43.709812 systemd-logind[1498]: New session 16 of user core. Sep 12 17:24:43.719931 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 12 17:24:43.898271 sshd[4182]: Connection closed by 10.0.0.1 port 46470 Sep 12 17:24:43.898683 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:43.911281 systemd[1]: sshd@15-10.0.0.105:22-10.0.0.1:46470.service: Deactivated successfully. Sep 12 17:24:43.913228 systemd[1]: session-16.scope: Deactivated successfully. Sep 12 17:24:43.914200 systemd-logind[1498]: Session 16 logged out. Waiting for processes to exit. Sep 12 17:24:43.917484 systemd[1]: Started sshd@16-10.0.0.105:22-10.0.0.1:46486.service - OpenSSH per-connection server daemon (10.0.0.1:46486). Sep 12 17:24:43.918045 systemd-logind[1498]: Removed session 16. Sep 12 17:24:43.976125 sshd[4193]: Accepted publickey for core from 10.0.0.1 port 46486 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:43.977497 sshd-session[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:43.981787 systemd-logind[1498]: New session 17 of user core. Sep 12 17:24:43.987900 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 12 17:24:44.673620 sshd[4197]: Connection closed by 10.0.0.1 port 46486 Sep 12 17:24:44.675982 sshd-session[4193]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:44.688932 systemd[1]: sshd@16-10.0.0.105:22-10.0.0.1:46486.service: Deactivated successfully. Sep 12 17:24:44.692272 systemd[1]: session-17.scope: Deactivated successfully. Sep 12 17:24:44.694109 systemd-logind[1498]: Session 17 logged out. Waiting for processes to exit. 
Sep 12 17:24:44.700043 systemd[1]: Started sshd@17-10.0.0.105:22-10.0.0.1:46502.service - OpenSSH per-connection server daemon (10.0.0.1:46502). Sep 12 17:24:44.704803 systemd-logind[1498]: Removed session 17. Sep 12 17:24:44.765630 sshd[4217]: Accepted publickey for core from 10.0.0.1 port 46502 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:44.767059 sshd-session[4217]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:44.771154 systemd-logind[1498]: New session 18 of user core. Sep 12 17:24:44.784925 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 12 17:24:45.014142 sshd[4221]: Connection closed by 10.0.0.1 port 46502 Sep 12 17:24:45.014631 sshd-session[4217]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:45.027520 systemd[1]: sshd@17-10.0.0.105:22-10.0.0.1:46502.service: Deactivated successfully. Sep 12 17:24:45.029135 systemd[1]: session-18.scope: Deactivated successfully. Sep 12 17:24:45.030406 systemd-logind[1498]: Session 18 logged out. Waiting for processes to exit. Sep 12 17:24:45.036141 systemd[1]: Started sshd@18-10.0.0.105:22-10.0.0.1:46510.service - OpenSSH per-connection server daemon (10.0.0.1:46510). Sep 12 17:24:45.038244 systemd-logind[1498]: Removed session 18. Sep 12 17:24:45.093749 sshd[4234]: Accepted publickey for core from 10.0.0.1 port 46510 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:45.095649 sshd-session[4234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:45.099835 systemd-logind[1498]: New session 19 of user core. Sep 12 17:24:45.105904 systemd[1]: Started session-19.scope - Session 19 of User core. 
Sep 12 17:24:45.218761 sshd[4237]: Connection closed by 10.0.0.1 port 46510 Sep 12 17:24:45.219079 sshd-session[4234]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:45.222488 systemd[1]: sshd@18-10.0.0.105:22-10.0.0.1:46510.service: Deactivated successfully. Sep 12 17:24:45.224382 systemd[1]: session-19.scope: Deactivated successfully. Sep 12 17:24:45.225251 systemd-logind[1498]: Session 19 logged out. Waiting for processes to exit. Sep 12 17:24:45.226396 systemd-logind[1498]: Removed session 19. Sep 12 17:24:50.235790 systemd[1]: Started sshd@19-10.0.0.105:22-10.0.0.1:51388.service - OpenSSH per-connection server daemon (10.0.0.1:51388). Sep 12 17:24:50.297399 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 51388 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:50.299211 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:50.303970 systemd-logind[1498]: New session 20 of user core. Sep 12 17:24:50.313980 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 12 17:24:50.442002 sshd[4260]: Connection closed by 10.0.0.1 port 51388 Sep 12 17:24:50.442802 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:50.447202 systemd[1]: sshd@19-10.0.0.105:22-10.0.0.1:51388.service: Deactivated successfully. Sep 12 17:24:50.453174 systemd[1]: session-20.scope: Deactivated successfully. Sep 12 17:24:50.455077 systemd-logind[1498]: Session 20 logged out. Waiting for processes to exit. Sep 12 17:24:50.456516 systemd-logind[1498]: Removed session 20. Sep 12 17:24:55.457013 systemd[1]: Started sshd@20-10.0.0.105:22-10.0.0.1:51398.service - OpenSSH per-connection server daemon (10.0.0.1:51398). 
Sep 12 17:24:55.523070 sshd[4275]: Accepted publickey for core from 10.0.0.1 port 51398 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:55.524474 sshd-session[4275]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:55.530024 systemd-logind[1498]: New session 21 of user core. Sep 12 17:24:55.541891 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 12 17:24:55.658165 sshd[4278]: Connection closed by 10.0.0.1 port 51398 Sep 12 17:24:55.658668 sshd-session[4275]: pam_unix(sshd:session): session closed for user core Sep 12 17:24:55.667049 systemd[1]: sshd@20-10.0.0.105:22-10.0.0.1:51398.service: Deactivated successfully. Sep 12 17:24:55.668614 systemd[1]: session-21.scope: Deactivated successfully. Sep 12 17:24:55.669308 systemd-logind[1498]: Session 21 logged out. Waiting for processes to exit. Sep 12 17:24:55.671626 systemd[1]: Started sshd@21-10.0.0.105:22-10.0.0.1:51412.service - OpenSSH per-connection server daemon (10.0.0.1:51412). Sep 12 17:24:55.673417 systemd-logind[1498]: Removed session 21. Sep 12 17:24:55.743839 sshd[4291]: Accepted publickey for core from 10.0.0.1 port 51412 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI Sep 12 17:24:55.745383 sshd-session[4291]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 12 17:24:55.752620 systemd-logind[1498]: New session 22 of user core. Sep 12 17:24:55.768976 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 12 17:24:58.008678 containerd[1518]: time="2025-09-12T17:24:58.007975465Z" level=info msg="StopContainer for \"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" with timeout 30 (s)" Sep 12 17:24:58.012443 containerd[1518]: time="2025-09-12T17:24:58.012081156Z" level=info msg="Stop container \"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" with signal terminated" Sep 12 17:24:58.024545 containerd[1518]: time="2025-09-12T17:24:58.024505592Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" id:\"baa0700c2ab8a155433ba4be0b306d0d6cb13c9ccf49ae2bcb85b15087619cf2\" pid:4314 exited_at:{seconds:1757697898 nanos:24152184}" Sep 12 17:24:58.026929 systemd[1]: cri-containerd-9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250.scope: Deactivated successfully. Sep 12 17:24:58.028595 containerd[1518]: time="2025-09-12T17:24:58.027896867Z" level=info msg="StopContainer for \"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" with timeout 2 (s)" Sep 12 17:24:58.028595 containerd[1518]: time="2025-09-12T17:24:58.028371118Z" level=info msg="Stop container \"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" with signal terminated" Sep 12 17:24:58.031109 containerd[1518]: time="2025-09-12T17:24:58.030972455Z" level=info msg="received exit event container_id:\"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" id:\"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" pid:3312 exited_at:{seconds:1757697898 nanos:30604967}" Sep 12 17:24:58.031428 containerd[1518]: time="2025-09-12T17:24:58.031007136Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" id:\"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" pid:3312 exited_at:{seconds:1757697898 nanos:30604967}" Sep 12 17:24:58.035266 
containerd[1518]: time="2025-09-12T17:24:58.035226590Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 12 17:24:58.039083 systemd-networkd[1429]: lxc_health: Link DOWN Sep 12 17:24:58.039090 systemd-networkd[1429]: lxc_health: Lost carrier Sep 12 17:24:58.057745 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250-rootfs.mount: Deactivated successfully. Sep 12 17:24:58.058457 systemd[1]: cri-containerd-47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57.scope: Deactivated successfully. Sep 12 17:24:58.059851 systemd[1]: cri-containerd-47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57.scope: Consumed 6.298s CPU time, 125.2M memory peak, 148K read from disk, 12.9M written to disk. 
Sep 12 17:24:58.061466 containerd[1518]: time="2025-09-12T17:24:58.061429371Z" level=info msg="received exit event container_id:\"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" id:\"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" pid:3347 exited_at:{seconds:1757697898 nanos:61241527}" Sep 12 17:24:58.061629 containerd[1518]: time="2025-09-12T17:24:58.061533694Z" level=info msg="TaskExit event in podsandbox handler container_id:\"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" id:\"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" pid:3347 exited_at:{seconds:1757697898 nanos:61241527}" Sep 12 17:24:58.076216 containerd[1518]: time="2025-09-12T17:24:58.076170858Z" level=info msg="StopContainer for \"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" returns successfully" Sep 12 17:24:58.077273 containerd[1518]: time="2025-09-12T17:24:58.077246842Z" level=info msg="StopPodSandbox for \"dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57\"" Sep 12 17:24:58.080611 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57-rootfs.mount: Deactivated successfully. 
Sep 12 17:24:58.088700 containerd[1518]: time="2025-09-12T17:24:58.088653255Z" level=info msg="StopContainer for \"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" returns successfully"
Sep 12 17:24:58.089394 containerd[1518]: time="2025-09-12T17:24:58.089181147Z" level=info msg="StopPodSandbox for \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\""
Sep 12 17:24:58.093339 containerd[1518]: time="2025-09-12T17:24:58.093302679Z" level=info msg="Container to stop \"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:24:58.094241 containerd[1518]: time="2025-09-12T17:24:58.094212899Z" level=info msg="Container to stop \"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:24:58.094715 containerd[1518]: time="2025-09-12T17:24:58.094563787Z" level=info msg="Container to stop \"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:24:58.094715 containerd[1518]: time="2025-09-12T17:24:58.094585987Z" level=info msg="Container to stop \"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:24:58.094715 containerd[1518]: time="2025-09-12T17:24:58.094595027Z" level=info msg="Container to stop \"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:24:58.094715 containerd[1518]: time="2025-09-12T17:24:58.094603427Z" level=info msg="Container to stop \"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Sep 12 17:24:58.099615 systemd[1]: cri-containerd-75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7.scope: Deactivated successfully.
Sep 12 17:24:58.100913 systemd[1]: cri-containerd-dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57.scope: Deactivated successfully.
Sep 12 17:24:58.102069 containerd[1518]: time="2025-09-12T17:24:58.102037312Z" level=info msg="TaskExit event in podsandbox handler container_id:\"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" id:\"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" pid:2830 exit_status:137 exited_at:{seconds:1757697898 nanos:101133452}"
Sep 12 17:24:58.122129 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57-rootfs.mount: Deactivated successfully.
Sep 12 17:24:58.126833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7-rootfs.mount: Deactivated successfully.
Sep 12 17:24:58.132892 containerd[1518]: time="2025-09-12T17:24:58.132810315Z" level=info msg="shim disconnected" id=dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57 namespace=k8s.io
Sep 12 17:24:58.137419 containerd[1518]: time="2025-09-12T17:24:58.132889637Z" level=warning msg="cleaning up after shim disconnected" id=dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57 namespace=k8s.io
Sep 12 17:24:58.137499 containerd[1518]: time="2025-09-12T17:24:58.137432338Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:24:58.137499 containerd[1518]: time="2025-09-12T17:24:58.132847756Z" level=info msg="shim disconnected" id=75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7 namespace=k8s.io
Sep 12 17:24:58.137545 containerd[1518]: time="2025-09-12T17:24:58.137508980Z" level=warning msg="cleaning up after shim disconnected" id=75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7 namespace=k8s.io
Sep 12 17:24:58.137545 containerd[1518]: time="2025-09-12T17:24:58.137529980Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Sep 12 17:24:58.168196 containerd[1518]: time="2025-09-12T17:24:58.167895814Z" level=info msg="TearDown network for sandbox \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" successfully"
Sep 12 17:24:58.168196 containerd[1518]: time="2025-09-12T17:24:58.167936935Z" level=info msg="StopPodSandbox for \"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" returns successfully"
Sep 12 17:24:58.168196 containerd[1518]: time="2025-09-12T17:24:58.168039657Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57\" id:\"dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57\" pid:2909 exit_status:137 exited_at:{seconds:1757697898 nanos:101756066}"
Sep 12 17:24:58.169745 containerd[1518]: time="2025-09-12T17:24:58.168531148Z" level=info msg="TearDown network for sandbox \"dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57\" successfully"
Sep 12 17:24:58.169745 containerd[1518]: time="2025-09-12T17:24:58.168565109Z" level=info msg="StopPodSandbox for \"dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57\" returns successfully"
Sep 12 17:24:58.169157 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7-shm.mount: Deactivated successfully.
Sep 12 17:24:58.172839 containerd[1518]: time="2025-09-12T17:24:58.172613839Z" level=info msg="received exit event sandbox_id:\"75da0b6899656bb5d4017bc9eaf404af7950025ca0bbac3c89dd5235779abdd7\" exit_status:137 exited_at:{seconds:1757697898 nanos:101133452}"
Sep 12 17:24:58.172839 containerd[1518]: time="2025-09-12T17:24:58.172749082Z" level=info msg="received exit event sandbox_id:\"dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57\" exit_status:137 exited_at:{seconds:1757697898 nanos:101756066}"
Sep 12 17:24:58.287047 kubelet[2680]: I0912 17:24:58.286823 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brdgx\" (UniqueName: \"kubernetes.io/projected/de7575b0-5296-4950-86fa-e9171e9de4b5-kube-api-access-brdgx\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.287047 kubelet[2680]: I0912 17:24:58.286863 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-run\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.287047 kubelet[2680]: I0912 17:24:58.286906 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38658b31-5065-47eb-8cfd-99f24154568b-cilium-config-path\") pod \"38658b31-5065-47eb-8cfd-99f24154568b\" (UID: \"38658b31-5065-47eb-8cfd-99f24154568b\") "
Sep 12 17:24:58.287047 kubelet[2680]: I0912 17:24:58.286926 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-swbtt\" (UniqueName: \"kubernetes.io/projected/38658b31-5065-47eb-8cfd-99f24154568b-kube-api-access-swbtt\") pod \"38658b31-5065-47eb-8cfd-99f24154568b\" (UID: \"38658b31-5065-47eb-8cfd-99f24154568b\") "
Sep 12 17:24:58.287047 kubelet[2680]: I0912 17:24:58.287048 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-config-path\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.287877 kubelet[2680]: I0912 17:24:58.287067 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-bpf-maps\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.287877 kubelet[2680]: I0912 17:24:58.287083 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-host-proc-sys-kernel\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.287877 kubelet[2680]: I0912 17:24:58.287099 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-lib-modules\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.287877 kubelet[2680]: I0912 17:24:58.287114 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-cgroup\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.287877 kubelet[2680]: I0912 17:24:58.287127 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-host-proc-sys-net\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.287877 kubelet[2680]: I0912 17:24:58.287144 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de7575b0-5296-4950-86fa-e9171e9de4b5-clustermesh-secrets\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.288379 kubelet[2680]: I0912 17:24:58.287158 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-xtables-lock\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.288379 kubelet[2680]: I0912 17:24:58.287174 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cni-path\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.288379 kubelet[2680]: I0912 17:24:58.287187 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-hostproc\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.288379 kubelet[2680]: I0912 17:24:58.287202 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-etc-cni-netd\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.288379 kubelet[2680]: I0912 17:24:58.287220 2680 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de7575b0-5296-4950-86fa-e9171e9de4b5-hubble-tls\") pod \"de7575b0-5296-4950-86fa-e9171e9de4b5\" (UID: \"de7575b0-5296-4950-86fa-e9171e9de4b5\") "
Sep 12 17:24:58.291032 kubelet[2680]: I0912 17:24:58.290984 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/38658b31-5065-47eb-8cfd-99f24154568b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "38658b31-5065-47eb-8cfd-99f24154568b" (UID: "38658b31-5065-47eb-8cfd-99f24154568b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 17:24:58.291106 kubelet[2680]: I0912 17:24:58.291054 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.291226 kubelet[2680]: I0912 17:24:58.291188 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.291883 kubelet[2680]: I0912 17:24:58.291812 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cni-path" (OuterVolumeSpecName: "cni-path") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.291883 kubelet[2680]: I0912 17:24:58.291850 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.291883 kubelet[2680]: I0912 17:24:58.291864 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-hostproc" (OuterVolumeSpecName: "hostproc") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.291883 kubelet[2680]: I0912 17:24:58.291878 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.292004 kubelet[2680]: I0912 17:24:58.291895 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.292004 kubelet[2680]: I0912 17:24:58.291909 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.292004 kubelet[2680]: I0912 17:24:58.291922 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.292004 kubelet[2680]: I0912 17:24:58.291936 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue ""
Sep 12 17:24:58.292206 kubelet[2680]: I0912 17:24:58.292182 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7575b0-5296-4950-86fa-e9171e9de4b5-kube-api-access-brdgx" (OuterVolumeSpecName: "kube-api-access-brdgx") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "kube-api-access-brdgx". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 17:24:58.292669 kubelet[2680]: I0912 17:24:58.292633 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Sep 12 17:24:58.292774 kubelet[2680]: I0912 17:24:58.292753 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de7575b0-5296-4950-86fa-e9171e9de4b5-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 17:24:58.293289 kubelet[2680]: I0912 17:24:58.293250 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/38658b31-5065-47eb-8cfd-99f24154568b-kube-api-access-swbtt" (OuterVolumeSpecName: "kube-api-access-swbtt") pod "38658b31-5065-47eb-8cfd-99f24154568b" (UID: "38658b31-5065-47eb-8cfd-99f24154568b"). InnerVolumeSpecName "kube-api-access-swbtt". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Sep 12 17:24:58.294124 kubelet[2680]: I0912 17:24:58.294097 2680 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de7575b0-5296-4950-86fa-e9171e9de4b5-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "de7575b0-5296-4950-86fa-e9171e9de4b5" (UID: "de7575b0-5296-4950-86fa-e9171e9de4b5"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Sep 12 17:24:58.387805 kubelet[2680]: I0912 17:24:58.387767 2680 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-swbtt\" (UniqueName: \"kubernetes.io/projected/38658b31-5065-47eb-8cfd-99f24154568b-kube-api-access-swbtt\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.387957 kubelet[2680]: I0912 17:24:58.387947 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388017 kubelet[2680]: I0912 17:24:58.388007 2680 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-bpf-maps\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388073 kubelet[2680]: I0912 17:24:58.388062 2680 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388122 kubelet[2680]: I0912 17:24:58.388114 2680 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-lib-modules\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388174 kubelet[2680]: I0912 17:24:58.388165 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388224 kubelet[2680]: I0912 17:24:58.388216 2680 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-host-proc-sys-net\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388280 kubelet[2680]: I0912 17:24:58.388270 2680 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/de7575b0-5296-4950-86fa-e9171e9de4b5-clustermesh-secrets\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388336 kubelet[2680]: I0912 17:24:58.388327 2680 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-xtables-lock\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388403 kubelet[2680]: I0912 17:24:58.388378 2680 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cni-path\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388508 kubelet[2680]: I0912 17:24:58.388447 2680 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-hostproc\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388508 kubelet[2680]: I0912 17:24:58.388460 2680 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388508 kubelet[2680]: I0912 17:24:58.388468 2680 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/de7575b0-5296-4950-86fa-e9171e9de4b5-hubble-tls\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388508 kubelet[2680]: I0912 17:24:58.388475 2680 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-brdgx\" (UniqueName: \"kubernetes.io/projected/de7575b0-5296-4950-86fa-e9171e9de4b5-kube-api-access-brdgx\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388508 kubelet[2680]: I0912 17:24:58.388484 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/de7575b0-5296-4950-86fa-e9171e9de4b5-cilium-run\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.388508 kubelet[2680]: I0912 17:24:58.388493 2680 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/38658b31-5065-47eb-8cfd-99f24154568b-cilium-config-path\") on node \"localhost\" DevicePath \"\""
Sep 12 17:24:58.921792 systemd[1]: Removed slice kubepods-burstable-podde7575b0_5296_4950_86fa_e9171e9de4b5.slice - libcontainer container kubepods-burstable-podde7575b0_5296_4950_86fa_e9171e9de4b5.slice.
Sep 12 17:24:58.921892 systemd[1]: kubepods-burstable-podde7575b0_5296_4950_86fa_e9171e9de4b5.slice: Consumed 6.391s CPU time, 125.6M memory peak, 160K read from disk, 12.9M written to disk.
Sep 12 17:24:58.922896 systemd[1]: Removed slice kubepods-besteffort-pod38658b31_5065_47eb_8cfd_99f24154568b.slice - libcontainer container kubepods-besteffort-pod38658b31_5065_47eb_8cfd_99f24154568b.slice.
Sep 12 17:24:59.057719 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dbc5b0c5e1a177ae957a8ebf067674bf3231ff56a13f62c13671248d99dfab57-shm.mount: Deactivated successfully.
Sep 12 17:24:59.057828 systemd[1]: var-lib-kubelet-pods-38658b31\x2d5065\x2d47eb\x2d8cfd\x2d99f24154568b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dswbtt.mount: Deactivated successfully.
Sep 12 17:24:59.057882 systemd[1]: var-lib-kubelet-pods-de7575b0\x2d5296\x2d4950\x2d86fa\x2de9171e9de4b5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbrdgx.mount: Deactivated successfully.
Sep 12 17:24:59.057927 systemd[1]: var-lib-kubelet-pods-de7575b0\x2d5296\x2d4950\x2d86fa\x2de9171e9de4b5-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Sep 12 17:24:59.057971 systemd[1]: var-lib-kubelet-pods-de7575b0\x2d5296\x2d4950\x2d86fa\x2de9171e9de4b5-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Sep 12 17:24:59.175153 kubelet[2680]: I0912 17:24:59.174867 2680 scope.go:117] "RemoveContainer" containerID="9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250"
Sep 12 17:24:59.179414 containerd[1518]: time="2025-09-12T17:24:59.179365076Z" level=info msg="RemoveContainer for \"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\""
Sep 12 17:24:59.186646 containerd[1518]: time="2025-09-12T17:24:59.186611311Z" level=info msg="RemoveContainer for \"9473901c5bb031ed257f22813112bfa356fda480baa102269360e458b20d1250\" returns successfully"
Sep 12 17:24:59.186912 kubelet[2680]: I0912 17:24:59.186884 2680 scope.go:117] "RemoveContainer" containerID="47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57"
Sep 12 17:24:59.191755 containerd[1518]: time="2025-09-12T17:24:59.191527896Z" level=info msg="RemoveContainer for \"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\""
Sep 12 17:24:59.196164 containerd[1518]: time="2025-09-12T17:24:59.196128594Z" level=info msg="RemoveContainer for \"47379d17b411fc0e47dc14ed5a6159e56d146651e292c5ce6342ebd1b7217e57\" returns successfully"
Sep 12 17:24:59.196407 kubelet[2680]: I0912 17:24:59.196374 2680 scope.go:117] "RemoveContainer" containerID="f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4"
Sep 12 17:24:59.198784 containerd[1518]: time="2025-09-12T17:24:59.198701009Z" level=info msg="RemoveContainer for \"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\""
Sep 12 17:24:59.202853 containerd[1518]: time="2025-09-12T17:24:59.202749535Z" level=info msg="RemoveContainer for \"f38f8e6fab4bc612434278ac3519a62a525509f507825acd96c73db2b0f7c9d4\" returns successfully"
Sep 12 17:24:59.202975 kubelet[2680]: I0912 17:24:59.202955 2680 scope.go:117] "RemoveContainer" containerID="46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6"
Sep 12 17:24:59.208607 containerd[1518]: time="2025-09-12T17:24:59.208578620Z" level=info msg="RemoveContainer for \"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\""
Sep 12 17:24:59.212390 containerd[1518]: time="2025-09-12T17:24:59.212350340Z" level=info msg="RemoveContainer for \"46f5a205e2ff98fd8095e6a877dd6903514c39f4aef80495d7af3746674226e6\" returns successfully"
Sep 12 17:24:59.212552 kubelet[2680]: I0912 17:24:59.212536 2680 scope.go:117] "RemoveContainer" containerID="45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3"
Sep 12 17:24:59.214130 containerd[1518]: time="2025-09-12T17:24:59.214099697Z" level=info msg="RemoveContainer for \"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\""
Sep 12 17:24:59.218138 containerd[1518]: time="2025-09-12T17:24:59.218101943Z" level=info msg="RemoveContainer for \"45b94f8f5f963000392c4a9acaf73f51871df6ad4a4125634a4edf371efedad3\" returns successfully"
Sep 12 17:24:59.218479 kubelet[2680]: I0912 17:24:59.218371 2680 scope.go:117] "RemoveContainer" containerID="d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac"
Sep 12 17:24:59.219942 containerd[1518]: time="2025-09-12T17:24:59.219907621Z" level=info msg="RemoveContainer for \"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\""
Sep 12 17:24:59.222707 containerd[1518]: time="2025-09-12T17:24:59.222679601Z" level=info msg="RemoveContainer for \"d68e4f563979a4b97258fc1a920bb0365d331b8b6e4c1dbeef71db6d69140fac\" returns successfully"
Sep 12 17:24:59.946597 sshd[4294]: Connection closed by 10.0.0.1 port 51412
Sep 12 17:24:59.948103 sshd-session[4291]: pam_unix(sshd:session): session closed for user core
Sep 12 17:24:59.955827 systemd[1]: sshd@21-10.0.0.105:22-10.0.0.1:51412.service: Deactivated successfully.
Sep 12 17:24:59.958293 systemd[1]: session-22.scope: Deactivated successfully.
Sep 12 17:24:59.958667 systemd[1]: session-22.scope: Consumed 1.515s CPU time, 24M memory peak.
Sep 12 17:24:59.959417 systemd-logind[1498]: Session 22 logged out. Waiting for processes to exit.
Sep 12 17:24:59.961221 systemd-logind[1498]: Removed session 22.
Sep 12 17:24:59.962715 systemd[1]: Started sshd@22-10.0.0.105:22-10.0.0.1:46692.service - OpenSSH per-connection server daemon (10.0.0.1:46692).
Sep 12 17:25:00.029140 sshd[4445]: Accepted publickey for core from 10.0.0.1 port 46692 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:25:00.030434 sshd-session[4445]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:00.034778 systemd-logind[1498]: New session 23 of user core.
Sep 12 17:25:00.046888 systemd[1]: Started session-23.scope - Session 23 of User core.
Sep 12 17:25:00.847621 sshd[4448]: Connection closed by 10.0.0.1 port 46692
Sep 12 17:25:00.846948 sshd-session[4445]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:00.858264 systemd[1]: sshd@22-10.0.0.105:22-10.0.0.1:46692.service: Deactivated successfully.
Sep 12 17:25:00.861036 systemd[1]: session-23.scope: Deactivated successfully.
Sep 12 17:25:00.864483 systemd-logind[1498]: Session 23 logged out. Waiting for processes to exit.
Sep 12 17:25:00.869649 systemd[1]: Started sshd@23-10.0.0.105:22-10.0.0.1:46700.service - OpenSSH per-connection server daemon (10.0.0.1:46700).
Sep 12 17:25:00.873796 systemd-logind[1498]: Removed session 23.
Sep 12 17:25:00.893974 systemd[1]: Created slice kubepods-burstable-pod61ff294d_0ed8_4988_a7e3_54144fd5c3e4.slice - libcontainer container kubepods-burstable-pod61ff294d_0ed8_4988_a7e3_54144fd5c3e4.slice.
Sep 12 17:25:00.916105 kubelet[2680]: I0912 17:25:00.916068 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="38658b31-5065-47eb-8cfd-99f24154568b" path="/var/lib/kubelet/pods/38658b31-5065-47eb-8cfd-99f24154568b/volumes"
Sep 12 17:25:00.917212 kubelet[2680]: I0912 17:25:00.916813 2680 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="de7575b0-5296-4950-86fa-e9171e9de4b5" path="/var/lib/kubelet/pods/de7575b0-5296-4950-86fa-e9171e9de4b5/volumes"
Sep 12 17:25:00.939907 sshd[4461]: Accepted publickey for core from 10.0.0.1 port 46700 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:25:00.941230 sshd-session[4461]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:00.944960 systemd-logind[1498]: New session 24 of user core.
Sep 12 17:25:00.954910 systemd[1]: Started session-24.scope - Session 24 of User core.
Sep 12 17:25:01.003235 kubelet[2680]: I0912 17:25:01.003184 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-bpf-maps\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003235 kubelet[2680]: I0912 17:25:01.003230 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-cni-path\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003393 kubelet[2680]: I0912 17:25:01.003249 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-hostproc\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003393 kubelet[2680]: I0912 17:25:01.003265 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-cilium-cgroup\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003393 kubelet[2680]: I0912 17:25:01.003281 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-lib-modules\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003393 kubelet[2680]: I0912 17:25:01.003296 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-clustermesh-secrets\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003489 kubelet[2680]: I0912 17:25:01.003433 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-host-proc-sys-kernel\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003515 kubelet[2680]: I0912 17:25:01.003492 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-etc-cni-netd\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003555 kubelet[2680]: I0912 17:25:01.003528 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-xtables-lock\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003584 kubelet[2680]: I0912 17:25:01.003557 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-cilium-ipsec-secrets\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003611 kubelet[2680]: I0912 17:25:01.003588 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-cilium-run\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003611 kubelet[2680]: I0912 17:25:01.003608 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-host-proc-sys-net\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003655 kubelet[2680]: I0912 17:25:01.003634 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-cilium-config-path\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003675 kubelet[2680]: I0912 17:25:01.003653 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-hubble-tls\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.003696 kubelet[2680]: I0912 17:25:01.003672 2680 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjsn2\" (UniqueName: \"kubernetes.io/projected/61ff294d-0ed8-4988-a7e3-54144fd5c3e4-kube-api-access-tjsn2\") pod \"cilium-bfrk7\" (UID: \"61ff294d-0ed8-4988-a7e3-54144fd5c3e4\") " pod="kube-system/cilium-bfrk7"
Sep 12 17:25:01.004801 sshd[4464]: Connection closed by 10.0.0.1 port 46700
Sep 12 17:25:01.004475 sshd-session[4461]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:01.016225 systemd[1]: sshd@23-10.0.0.105:22-10.0.0.1:46700.service: Deactivated successfully.
Sep 12 17:25:01.017873 systemd[1]: session-24.scope: Deactivated successfully.
Sep 12 17:25:01.018538 systemd-logind[1498]: Session 24 logged out. Waiting for processes to exit.
Sep 12 17:25:01.022015 systemd[1]: Started sshd@24-10.0.0.105:22-10.0.0.1:46706.service - OpenSSH per-connection server daemon (10.0.0.1:46706).
Sep 12 17:25:01.022814 systemd-logind[1498]: Removed session 24.
Sep 12 17:25:01.086450 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 46706 ssh2: RSA SHA256:UT5jL9R+kNVMu55HRewvy3KiK11NkEv9jWcPEawXfBI
Sep 12 17:25:01.087812 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Sep 12 17:25:01.095452 systemd-logind[1498]: New session 25 of user core.
Sep 12 17:25:01.104967 systemd[1]: Started session-25.scope - Session 25 of User core.
Sep 12 17:25:01.200250 kubelet[2680]: E0912 17:25:01.200202 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:01.201501 containerd[1518]: time="2025-09-12T17:25:01.201453077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bfrk7,Uid:61ff294d-0ed8-4988-a7e3-54144fd5c3e4,Namespace:kube-system,Attempt:0,}"
Sep 12 17:25:01.223344 containerd[1518]: time="2025-09-12T17:25:01.223296068Z" level=info msg="connecting to shim fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024" address="unix:///run/containerd/s/e490ab30d46edeb4ddf0530c09bec950da2533096357d03a410c946b2713f549" namespace=k8s.io protocol=ttrpc version=3
Sep 12 17:25:01.253952 systemd[1]: Started cri-containerd-fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024.scope - libcontainer container fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024.
Sep 12 17:25:01.274989 containerd[1518]: time="2025-09-12T17:25:01.274934086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bfrk7,Uid:61ff294d-0ed8-4988-a7e3-54144fd5c3e4,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\""
Sep 12 17:25:01.275793 kubelet[2680]: E0912 17:25:01.275719 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:01.280913 containerd[1518]: time="2025-09-12T17:25:01.280836442Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Sep 12 17:25:01.287312 containerd[1518]: time="2025-09-12T17:25:01.287280089Z" level=info msg="Container 0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:25:01.292907 containerd[1518]: time="2025-09-12T17:25:01.292868760Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39\""
Sep 12 17:25:01.293877 containerd[1518]: time="2025-09-12T17:25:01.293855419Z" level=info msg="StartContainer for \"0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39\""
Sep 12 17:25:01.294822 containerd[1518]: time="2025-09-12T17:25:01.294795397Z" level=info msg="connecting to shim 0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39" address="unix:///run/containerd/s/e490ab30d46edeb4ddf0530c09bec950da2533096357d03a410c946b2713f549" protocol=ttrpc version=3
Sep 12 17:25:01.321908 systemd[1]: Started cri-containerd-0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39.scope - libcontainer container 0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39.
Sep 12 17:25:01.346397 containerd[1518]: time="2025-09-12T17:25:01.346354094Z" level=info msg="StartContainer for \"0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39\" returns successfully"
Sep 12 17:25:01.354343 systemd[1]: cri-containerd-0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39.scope: Deactivated successfully.
Sep 12 17:25:01.356659 containerd[1518]: time="2025-09-12T17:25:01.356566695Z" level=info msg="received exit event container_id:\"0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39\" id:\"0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39\" pid:4544 exited_at:{seconds:1757697901 nanos:356231049}"
Sep 12 17:25:01.356887 containerd[1518]: time="2025-09-12T17:25:01.356648297Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39\" id:\"0d1168979c7dad82f36da9ca8fa94334743210656317ac95b3e49e9d3a255b39\" pid:4544 exited_at:{seconds:1757697901 nanos:356231049}"
Sep 12 17:25:01.973496 kubelet[2680]: E0912 17:25:01.973453 2680 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Sep 12 17:25:02.193194 kubelet[2680]: E0912 17:25:02.193163 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:02.201095 containerd[1518]: time="2025-09-12T17:25:02.201056829Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Sep 12 17:25:02.206521 containerd[1518]: time="2025-09-12T17:25:02.206214007Z" level=info msg="Container 163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:25:02.212499 containerd[1518]: time="2025-09-12T17:25:02.212462885Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c\""
Sep 12 17:25:02.213248 containerd[1518]: time="2025-09-12T17:25:02.213172859Z" level=info msg="StartContainer for \"163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c\""
Sep 12 17:25:02.214461 containerd[1518]: time="2025-09-12T17:25:02.214398682Z" level=info msg="connecting to shim 163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c" address="unix:///run/containerd/s/e490ab30d46edeb4ddf0530c09bec950da2533096357d03a410c946b2713f549" protocol=ttrpc version=3
Sep 12 17:25:02.231879 systemd[1]: Started cri-containerd-163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c.scope - libcontainer container 163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c.
Sep 12 17:25:02.256108 containerd[1518]: time="2025-09-12T17:25:02.255997270Z" level=info msg="StartContainer for \"163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c\" returns successfully"
Sep 12 17:25:02.261275 systemd[1]: cri-containerd-163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c.scope: Deactivated successfully.
Sep 12 17:25:02.264571 containerd[1518]: time="2025-09-12T17:25:02.264434990Z" level=info msg="received exit event container_id:\"163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c\" id:\"163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c\" pid:4589 exited_at:{seconds:1757697902 nanos:264176425}"
Sep 12 17:25:02.264571 containerd[1518]: time="2025-09-12T17:25:02.264533232Z" level=info msg="TaskExit event in podsandbox handler container_id:\"163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c\" id:\"163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c\" pid:4589 exited_at:{seconds:1757697902 nanos:264176425}"
Sep 12 17:25:02.280902 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-163d56ab2ff26eac0cf1258238cf58e35e5f5e50127aca8f74d0dc2433517d6c-rootfs.mount: Deactivated successfully.
Sep 12 17:25:02.914303 kubelet[2680]: E0912 17:25:02.914273 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:03.197675 kubelet[2680]: E0912 17:25:03.197562 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:03.204053 containerd[1518]: time="2025-09-12T17:25:03.204014872Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Sep 12 17:25:03.211753 containerd[1518]: time="2025-09-12T17:25:03.211651811Z" level=info msg="Container 6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:25:03.220796 containerd[1518]: time="2025-09-12T17:25:03.220718056Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28\""
Sep 12 17:25:03.221448 containerd[1518]: time="2025-09-12T17:25:03.221417669Z" level=info msg="StartContainer for \"6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28\""
Sep 12 17:25:03.222892 containerd[1518]: time="2025-09-12T17:25:03.222845015Z" level=info msg="connecting to shim 6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28" address="unix:///run/containerd/s/e490ab30d46edeb4ddf0530c09bec950da2533096357d03a410c946b2713f549" protocol=ttrpc version=3
Sep 12 17:25:03.244874 systemd[1]: Started cri-containerd-6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28.scope - libcontainer container 6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28.
Sep 12 17:25:03.282293 systemd[1]: cri-containerd-6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28.scope: Deactivated successfully.
Sep 12 17:25:03.283600 containerd[1518]: time="2025-09-12T17:25:03.283564959Z" level=info msg="StartContainer for \"6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28\" returns successfully"
Sep 12 17:25:03.287741 containerd[1518]: time="2025-09-12T17:25:03.287598993Z" level=info msg="received exit event container_id:\"6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28\" id:\"6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28\" pid:4634 exited_at:{seconds:1757697903 nanos:287298867}"
Sep 12 17:25:03.287971 containerd[1518]: time="2025-09-12T17:25:03.287947119Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28\" id:\"6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28\" pid:4634 exited_at:{seconds:1757697903 nanos:287298867}"
Sep 12 17:25:03.304623 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6aca5c518e47a790e0f20c6feb2adfcf9c49b05a934d8e1bbf7dffe596667f28-rootfs.mount: Deactivated successfully.
Sep 12 17:25:04.208792 kubelet[2680]: E0912 17:25:04.207699 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:04.220274 containerd[1518]: time="2025-09-12T17:25:04.219635226Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Sep 12 17:25:04.242372 containerd[1518]: time="2025-09-12T17:25:04.242323303Z" level=info msg="Container d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:25:04.244925 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2823856688.mount: Deactivated successfully.
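The recurring kubelet `dns.go:153` "Nameserver limits exceeded" errors in this log are noisy but explainable: glibc's resolver honors at most three `nameserver` entries in resolv.conf (MAXNS = 3), so when the node lists more, kubelet truncates the list and reports the applied line (here `1.1.1.1 1.0.0.1 8.8.8.8`) with the rest omitted. A minimal sketch of that truncation, for illustration only (this is not kubelet's actual Go implementation, and `truncate_nameservers` is a hypothetical helper):

```python
# Illustrative sketch of why kubelet logs "Nameserver limits exceeded":
# glibc's resolver uses at most MAXNS = 3 "nameserver" lines, so any
# additional servers in the node's resolv.conf are silently dropped.
MAX_NAMESERVERS = 3  # glibc MAXNS

def truncate_nameservers(resolv_conf: str):
    """Return (applied, omitted) nameserver lists from resolv.conf text."""
    servers = [
        line.split()[1]
        for line in resolv_conf.splitlines()
        if line.strip().startswith("nameserver") and len(line.split()) >= 2
    ]
    return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

# A resolv.conf with four servers triggers the truncation
# (8.8.4.4 here is a made-up fourth entry; the log only shows the applied three).
conf = """nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4
"""
applied, omitted = truncate_nameservers(conf)
print("applied:", " ".join(applied))  # matches the "applied nameserver line" in the log
print("omitted:", " ".join(omitted))
```

The fix on a real node is to trim resolv.conf (or the kubelet's `--resolv-conf` target) to three servers so nothing is silently dropped.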
Sep 12 17:25:04.256271 containerd[1518]: time="2025-09-12T17:25:04.256164344Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08\""
Sep 12 17:25:04.256896 containerd[1518]: time="2025-09-12T17:25:04.256851596Z" level=info msg="StartContainer for \"d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08\""
Sep 12 17:25:04.258061 containerd[1518]: time="2025-09-12T17:25:04.258027257Z" level=info msg="connecting to shim d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08" address="unix:///run/containerd/s/e490ab30d46edeb4ddf0530c09bec950da2533096357d03a410c946b2713f549" protocol=ttrpc version=3
Sep 12 17:25:04.283972 systemd[1]: Started cri-containerd-d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08.scope - libcontainer container d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08.
Sep 12 17:25:04.309937 systemd[1]: cri-containerd-d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08.scope: Deactivated successfully.
Sep 12 17:25:04.312268 containerd[1518]: time="2025-09-12T17:25:04.312181602Z" level=info msg="received exit event container_id:\"d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08\" id:\"d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08\" pid:4673 exited_at:{seconds:1757697904 nanos:311991839}"
Sep 12 17:25:04.312626 containerd[1518]: time="2025-09-12T17:25:04.312591810Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08\" id:\"d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08\" pid:4673 exited_at:{seconds:1757697904 nanos:311991839}"
Sep 12 17:25:04.314160 containerd[1518]: time="2025-09-12T17:25:04.314092516Z" level=info msg="StartContainer for \"d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08\" returns successfully"
Sep 12 17:25:04.338191 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d4a4c0beb838705a62006e389a9a5eddc0b343697075c961a93f6a4917e3ce08-rootfs.mount: Deactivated successfully.
Sep 12 17:25:04.916621 kubelet[2680]: E0912 17:25:04.915106 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:05.212703 kubelet[2680]: E0912 17:25:05.212585 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:05.220183 containerd[1518]: time="2025-09-12T17:25:05.218106548Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Sep 12 17:25:05.229430 containerd[1518]: time="2025-09-12T17:25:05.229393858Z" level=info msg="Container 2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29: CDI devices from CRI Config.CDIDevices: []"
Sep 12 17:25:05.239320 containerd[1518]: time="2025-09-12T17:25:05.239239062Z" level=info msg="CreateContainer within sandbox \"fd6e80000574e5e633e4015369d7cd4e281e8f6a6579bb3616ff6b8503b29024\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29\""
Sep 12 17:25:05.240820 containerd[1518]: time="2025-09-12T17:25:05.240628766Z" level=info msg="StartContainer for \"2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29\""
Sep 12 17:25:05.242662 containerd[1518]: time="2025-09-12T17:25:05.241589422Z" level=info msg="connecting to shim 2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29" address="unix:///run/containerd/s/e490ab30d46edeb4ddf0530c09bec950da2533096357d03a410c946b2713f549" protocol=ttrpc version=3
Sep 12 17:25:05.271018 systemd[1]: Started cri-containerd-2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29.scope - libcontainer container 2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29.
Sep 12 17:25:05.321415 containerd[1518]: time="2025-09-12T17:25:05.321365319Z" level=info msg="StartContainer for \"2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29\" returns successfully"
Sep 12 17:25:05.382303 containerd[1518]: time="2025-09-12T17:25:05.382253579Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29\" id:\"0da86941317d83e486080d85b0da4979c9057432378b9f80fffb72cbf6e750a2\" pid:4740 exited_at:{seconds:1757697905 nanos:381992215}"
Sep 12 17:25:05.618748 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Sep 12 17:25:06.224260 kubelet[2680]: E0912 17:25:06.224069 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:06.250743 kubelet[2680]: I0912 17:25:06.250283 2680 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-bfrk7" podStartSLOduration=6.250265794 podStartE2EDuration="6.250265794s" podCreationTimestamp="2025-09-12 17:25:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-12 17:25:06.248216521 +0000 UTC m=+79.428924637" watchObservedRunningTime="2025-09-12 17:25:06.250265794 +0000 UTC m=+79.430973910"
Sep 12 17:25:07.225273 kubelet[2680]: E0912 17:25:07.225229 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:07.531425 containerd[1518]: time="2025-09-12T17:25:07.531320715Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29\" id:\"e65b7a8255358dc16c27a82b1acee89df6e4beb24ad0eb905618aea77cfb0e4a\" pid:4907 exit_status:1 exited_at:{seconds:1757697907 nanos:530840508}"
Sep 12 17:25:08.705648 systemd-networkd[1429]: lxc_health: Link UP
Sep 12 17:25:08.705926 systemd-networkd[1429]: lxc_health: Gained carrier
Sep 12 17:25:09.202743 kubelet[2680]: E0912 17:25:09.202605 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:09.232421 kubelet[2680]: E0912 17:25:09.232146 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:09.657843 containerd[1518]: time="2025-09-12T17:25:09.657786182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29\" id:\"7eedb14acdc3e16fa677b23dc36f496d8b8e883399021eeb3a21dd79549384c2\" pid:5278 exited_at:{seconds:1757697909 nanos:657415336}"
Sep 12 17:25:10.232596 kubelet[2680]: E0912 17:25:10.232475 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Sep 12 17:25:10.255946 systemd-networkd[1429]: lxc_health: Gained IPv6LL
Sep 12 17:25:11.775322 containerd[1518]: time="2025-09-12T17:25:11.775278030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29\" id:\"bbc33323a42c4b68f9a355e9be7fe30345d1bb7092d0142de0ee0172617c3a41\" pid:5314 exited_at:{seconds:1757697911 nanos:774013934}"
Sep 12 17:25:13.901895 containerd[1518]: time="2025-09-12T17:25:13.901848682Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2529850769363522493d3c245ef2ca1d3d6b4be15eb81bd99fe4a6036e1d3b29\" id:\"5d93cb7f780005800b567faa9efae9d5e092423232a342cfc84dd1c29ddd04a0\" pid:5347 exited_at:{seconds:1757697913 nanos:901538599}"
Sep 12 17:25:13.906627 sshd[4479]: Connection closed by 10.0.0.1 port 46706
Sep 12 17:25:13.907160 sshd-session[4471]: pam_unix(sshd:session): session closed for user core
Sep 12 17:25:13.910898 systemd[1]: sshd@24-10.0.0.105:22-10.0.0.1:46706.service: Deactivated successfully.
Sep 12 17:25:13.912865 systemd[1]: session-25.scope: Deactivated successfully.
Sep 12 17:25:13.914349 systemd-logind[1498]: Session 25 logged out. Waiting for processes to exit.
Sep 12 17:25:13.915543 systemd-logind[1498]: Removed session 25.
Sep 12 17:25:14.914779 kubelet[2680]: E0912 17:25:14.914666 2680 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
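The containerd records in this log trace pod cilium-bfrk7 through its init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state), each running the same create, start, exit, rootfs-unmount cycle, before the long-lived cilium-agent container comes up and the lxc_health link gains carrier. The start order can be recovered mechanically from the logs, since containerd's first CreateContainer message for each container embeds the name in `&ContainerMetadata{Name:...,Attempt:...}`. A sketch of that extraction (an illustrative helper, not an official containerd tool; the sample lines are abbreviated copies of records above, with `...` standing for the elided sandbox id):

```python
import re

# Pull container names from containerd "CreateContainer within sandbox ...
# for container &ContainerMetadata{Name:...,Attempt:...}" log messages.
CREATE_RE = re.compile(r"for container &ContainerMetadata\{Name:([^,]+),")

lines = [
    'msg="CreateContainer within sandbox ... for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"',
    'msg="CreateContainer within sandbox ... for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"',
    'msg="CreateContainer within sandbox ... for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"',
    'msg="CreateContainer within sandbox ... for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"',
    'msg="CreateContainer within sandbox ... for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"',
]

# The names appear in log order, i.e. the order the containers were created.
order = [m.group(1) for line in lines if (m := CREATE_RE.search(line))]
print(order)
```

Against real journald output, the same regex applied to `journalctl -u containerd` lines yields the per-pod container sequence without needing CRI API access.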