Sep 9 23:59:44.767537 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Sep 9 23:59:44.767557 kernel: Linux version 6.12.45-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Tue Sep 9 22:10:22 -00 2025 Sep 9 23:59:44.767582 kernel: KASLR enabled Sep 9 23:59:44.767588 kernel: efi: EFI v2.7 by EDK II Sep 9 23:59:44.767594 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb228018 ACPI 2.0=0xdb9b8018 RNG=0xdb9b8a18 MEMRESERVE=0xdb221f18 Sep 9 23:59:44.767599 kernel: random: crng init done Sep 9 23:59:44.767606 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7 Sep 9 23:59:44.767612 kernel: secureboot: Secure boot enabled Sep 9 23:59:44.767617 kernel: ACPI: Early table checksum verification disabled Sep 9 23:59:44.767625 kernel: ACPI: RSDP 0x00000000DB9B8018 000024 (v02 BOCHS ) Sep 9 23:59:44.767631 kernel: ACPI: XSDT 0x00000000DB9B8F18 000064 (v01 BOCHS BXPC 00000001 01000013) Sep 9 23:59:44.767637 kernel: ACPI: FACP 0x00000000DB9B8B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 23:59:44.767643 kernel: ACPI: DSDT 0x00000000DB904018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 23:59:44.767649 kernel: ACPI: APIC 0x00000000DB9B8C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 23:59:44.767664 kernel: ACPI: PPTT 0x00000000DB9B8098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 23:59:44.767672 kernel: ACPI: GTDT 0x00000000DB9B8818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 23:59:44.767678 kernel: ACPI: MCFG 0x00000000DB9B8A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 23:59:44.767684 kernel: ACPI: SPCR 0x00000000DB9B8918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 23:59:44.767690 kernel: ACPI: DBG2 0x00000000DB9B8998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 23:59:44.767697 kernel: ACPI: IORT 0x00000000DB9B8198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Sep 9 23:59:44.767703 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Sep 9 23:59:44.767709 kernel: ACPI: Use ACPI SPCR as default console: No Sep 9 23:59:44.767715 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 23:59:44.767721 kernel: NODE_DATA(0) allocated [mem 0xdc737a00-0xdc73efff] Sep 9 23:59:44.767727 kernel: Zone ranges: Sep 9 23:59:44.767734 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 23:59:44.767740 kernel: DMA32 empty Sep 9 23:59:44.767746 kernel: Normal empty Sep 9 23:59:44.767752 kernel: Device empty Sep 9 23:59:44.767758 kernel: Movable zone start for each node Sep 9 23:59:44.767763 kernel: Early memory node ranges Sep 9 23:59:44.767770 kernel: node 0: [mem 0x0000000040000000-0x00000000dbb4ffff] Sep 9 23:59:44.767776 kernel: node 0: [mem 0x00000000dbb50000-0x00000000dbe7ffff] Sep 9 23:59:44.767782 kernel: node 0: [mem 0x00000000dbe80000-0x00000000dbe9ffff] Sep 9 23:59:44.767788 kernel: node 0: [mem 0x00000000dbea0000-0x00000000dbedffff] Sep 9 23:59:44.767794 kernel: node 0: [mem 0x00000000dbee0000-0x00000000dbf1ffff] Sep 9 23:59:44.767800 kernel: node 0: [mem 0x00000000dbf20000-0x00000000dbf6ffff] Sep 9 23:59:44.767807 kernel: node 0: [mem 0x00000000dbf70000-0x00000000dcbfffff] Sep 9 23:59:44.767813 kernel: node 0: [mem 0x00000000dcc00000-0x00000000dcfdffff] Sep 9 23:59:44.767819 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Sep 9 23:59:44.767828 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Sep 9 23:59:44.767835 kernel: On 
node 0, zone DMA: 12288 pages in unavailable ranges Sep 9 23:59:44.767841 kernel: cma: Reserved 16 MiB at 0x00000000d7a00000 on node -1 Sep 9 23:59:44.767848 kernel: psci: probing for conduit method from ACPI. Sep 9 23:59:44.767855 kernel: psci: PSCIv1.1 detected in firmware. Sep 9 23:59:44.767862 kernel: psci: Using standard PSCI v0.2 function IDs Sep 9 23:59:44.767868 kernel: psci: Trusted OS migration not required Sep 9 23:59:44.767874 kernel: psci: SMC Calling Convention v1.1 Sep 9 23:59:44.767881 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Sep 9 23:59:44.767887 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Sep 9 23:59:44.767894 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Sep 9 23:59:44.767901 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Sep 9 23:59:44.767907 kernel: Detected PIPT I-cache on CPU0 Sep 9 23:59:44.767915 kernel: CPU features: detected: GIC system register CPU interface Sep 9 23:59:44.767922 kernel: CPU features: detected: Spectre-v4 Sep 9 23:59:44.767928 kernel: CPU features: detected: Spectre-BHB Sep 9 23:59:44.767935 kernel: CPU features: kernel page table isolation forced ON by KASLR Sep 9 23:59:44.767942 kernel: CPU features: detected: Kernel page table isolation (KPTI) Sep 9 23:59:44.767948 kernel: CPU features: detected: ARM erratum 1418040 Sep 9 23:59:44.767955 kernel: CPU features: detected: SSBS not fully self-synchronizing Sep 9 23:59:44.767961 kernel: alternatives: applying boot alternatives Sep 9 23:59:44.767969 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db Sep 9 23:59:44.767976 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Sep 9 23:59:44.767982 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Sep 9 23:59:44.767990 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Sep 9 23:59:44.767997 kernel: Fallback order for Node 0: 0 Sep 9 23:59:44.768003 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Sep 9 23:59:44.768010 kernel: Policy zone: DMA Sep 9 23:59:44.768017 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Sep 9 23:59:44.768023 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Sep 9 23:59:44.768030 kernel: software IO TLB: area num 4. Sep 9 23:59:44.768036 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Sep 9 23:59:44.768043 kernel: software IO TLB: mapped [mem 0x00000000db504000-0x00000000db904000] (4MB) Sep 9 23:59:44.768050 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Sep 9 23:59:44.768056 kernel: rcu: Preemptible hierarchical RCU implementation. Sep 9 23:59:44.768064 kernel: rcu: RCU event tracing is enabled. Sep 9 23:59:44.768072 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Sep 9 23:59:44.768079 kernel: Trampoline variant of Tasks RCU enabled. Sep 9 23:59:44.768086 kernel: Tracing variant of Tasks RCU enabled. Sep 9 23:59:44.768092 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Sep 9 23:59:44.768102 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Sep 9 23:59:44.768109 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 23:59:44.768116 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Sep 9 23:59:44.768122 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Sep 9 23:59:44.768128 kernel: GICv3: 256 SPIs implemented Sep 9 23:59:44.768135 kernel: GICv3: 0 Extended SPIs implemented Sep 9 23:59:44.768142 kernel: Root IRQ handler: gic_handle_irq Sep 9 23:59:44.768150 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Sep 9 23:59:44.768156 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Sep 9 23:59:44.768163 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Sep 9 23:59:44.768169 kernel: ITS [mem 0x08080000-0x0809ffff] Sep 9 23:59:44.768176 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Sep 9 23:59:44.768183 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Sep 9 23:59:44.768191 kernel: GICv3: using LPI property table @0x0000000040130000 Sep 9 23:59:44.768198 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Sep 9 23:59:44.768205 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Sep 9 23:59:44.768211 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 23:59:44.768218 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Sep 9 23:59:44.768226 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Sep 9 23:59:44.768236 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Sep 9 23:59:44.768245 kernel: arm-pv: using stolen time PV Sep 9 23:59:44.768252 kernel: Console: colour dummy device 80x25 Sep 9 23:59:44.768259 kernel: ACPI: Core revision 20240827 Sep 9 23:59:44.768266 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Sep 9 23:59:44.768273 kernel: pid_max: default: 32768 minimum: 301 Sep 9 23:59:44.768280 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Sep 9 23:59:44.768287 kernel: landlock: Up and running. Sep 9 23:59:44.768293 kernel: SELinux: Initializing. Sep 9 23:59:44.768304 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 23:59:44.768310 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Sep 9 23:59:44.768317 kernel: rcu: Hierarchical SRCU implementation. Sep 9 23:59:44.768325 kernel: rcu: Max phase no-delay instances is 400. Sep 9 23:59:44.768334 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Sep 9 23:59:44.768341 kernel: Remapping and enabling EFI services. Sep 9 23:59:44.768348 kernel: smp: Bringing up secondary CPUs ... 
Sep 9 23:59:44.768355 kernel: Detected PIPT I-cache on CPU1 Sep 9 23:59:44.768362 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Sep 9 23:59:44.768377 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Sep 9 23:59:44.768389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 23:59:44.768395 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Sep 9 23:59:44.768404 kernel: Detected PIPT I-cache on CPU2 Sep 9 23:59:44.768411 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Sep 9 23:59:44.768418 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Sep 9 23:59:44.768425 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 23:59:44.768432 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Sep 9 23:59:44.768439 kernel: Detected PIPT I-cache on CPU3 Sep 9 23:59:44.768448 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Sep 9 23:59:44.768455 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Sep 9 23:59:44.768462 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Sep 9 23:59:44.768468 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Sep 9 23:59:44.768475 kernel: smp: Brought up 1 node, 4 CPUs Sep 9 23:59:44.768482 kernel: SMP: Total of 4 processors activated. Sep 9 23:59:44.768489 kernel: CPU: All CPU(s) started at EL1 Sep 9 23:59:44.768496 kernel: CPU features: detected: 32-bit EL0 Support Sep 9 23:59:44.768503 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Sep 9 23:59:44.768511 kernel: CPU features: detected: Common not Private translations Sep 9 23:59:44.768518 kernel: CPU features: detected: CRC32 instructions Sep 9 23:59:44.768525 kernel: CPU features: detected: Enhanced Virtualization Traps Sep 9 23:59:44.768532 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Sep 9 23:59:44.768540 kernel: CPU features: detected: LSE atomic instructions Sep 9 23:59:44.768547 kernel: CPU features: detected: Privileged Access Never Sep 9 23:59:44.768554 kernel: CPU features: detected: RAS Extension Support Sep 9 23:59:44.768562 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Sep 9 23:59:44.768581 kernel: alternatives: applying system-wide alternatives Sep 9 23:59:44.768590 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Sep 9 23:59:44.768598 kernel: Memory: 2422436K/2572288K available (11136K kernel code, 2436K rwdata, 9060K rodata, 38912K init, 1038K bss, 127516K reserved, 16384K cma-reserved) Sep 9 23:59:44.768605 kernel: devtmpfs: initialized Sep 9 23:59:44.768612 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Sep 9 23:59:44.768619 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Sep 9 23:59:44.768626 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Sep 9 23:59:44.768633 kernel: 0 pages in range for non-PLT usage Sep 9 23:59:44.768639 kernel: 508576 pages in range for PLT usage Sep 9 23:59:44.768646 kernel: pinctrl core: initialized pinctrl subsystem Sep 9 23:59:44.768659 kernel: SMBIOS 3.0.0 present. 
Sep 9 23:59:44.768667 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Sep 9 23:59:44.768674 kernel: DMI: Memory slots populated: 1/1 Sep 9 23:59:44.768681 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Sep 9 23:59:44.768688 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Sep 9 23:59:44.768695 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Sep 9 23:59:44.768702 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Sep 9 23:59:44.768709 kernel: audit: initializing netlink subsys (disabled) Sep 9 23:59:44.768716 kernel: audit: type=2000 audit(0.025:1): state=initialized audit_enabled=0 res=1 Sep 9 23:59:44.768725 kernel: thermal_sys: Registered thermal governor 'step_wise' Sep 9 23:59:44.768732 kernel: cpuidle: using governor menu Sep 9 23:59:44.768739 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Sep 9 23:59:44.768746 kernel: ASID allocator initialised with 32768 entries Sep 9 23:59:44.768754 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Sep 9 23:59:44.768761 kernel: Serial: AMBA PL011 UART driver Sep 9 23:59:44.768768 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Sep 9 23:59:44.768775 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Sep 9 23:59:44.768782 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Sep 9 23:59:44.768790 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Sep 9 23:59:44.768798 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Sep 9 23:59:44.768805 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Sep 9 23:59:44.768812 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Sep 9 23:59:44.768818 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Sep 9 23:59:44.768825 kernel: ACPI: Added _OSI(Module Device) Sep 9 23:59:44.768832 kernel: ACPI: Added _OSI(Processor Device) Sep 9 23:59:44.768839 kernel: ACPI: Added _OSI(Processor Aggregator Device) Sep 9 23:59:44.768846 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Sep 9 23:59:44.768854 kernel: ACPI: Interpreter enabled Sep 9 23:59:44.768861 kernel: ACPI: Using GIC for interrupt routing Sep 9 23:59:44.768868 kernel: ACPI: MCFG table detected, 1 entries Sep 9 23:59:44.768875 kernel: ACPI: CPU0 has been hot-added Sep 9 23:59:44.768882 kernel: ACPI: CPU1 has been hot-added Sep 9 23:59:44.768889 kernel: ACPI: CPU2 has been hot-added Sep 9 23:59:44.768896 kernel: ACPI: CPU3 has been hot-added Sep 9 23:59:44.768903 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Sep 9 23:59:44.768910 kernel: printk: legacy console [ttyAMA0] enabled Sep 9 23:59:44.768918 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Sep 9 23:59:44.769050 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Sep 9 23:59:44.769116 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Sep 9 23:59:44.769176 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Sep 9 23:59:44.769234 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Sep 9 23:59:44.769291 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Sep 9 23:59:44.769301 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Sep 9 23:59:44.769310 kernel: PCI host bridge to bus 0000:00 Sep 9 
23:59:44.769380 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Sep 9 23:59:44.769436 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Sep 9 23:59:44.769490 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Sep 9 23:59:44.769545 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Sep 9 23:59:44.769644 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Sep 9 23:59:44.769734 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Sep 9 23:59:44.769818 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Sep 9 23:59:44.769889 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Sep 9 23:59:44.769953 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Sep 9 23:59:44.770013 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Sep 9 23:59:44.770074 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Sep 9 23:59:44.770133 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Sep 9 23:59:44.770188 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Sep 9 23:59:44.770243 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Sep 9 23:59:44.770296 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Sep 9 23:59:44.770306 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Sep 9 23:59:44.770313 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Sep 9 23:59:44.770320 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Sep 9 23:59:44.770327 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Sep 9 23:59:44.770334 kernel: iommu: Default domain type: Translated Sep 9 23:59:44.770341 kernel: iommu: DMA domain TLB invalidation policy: strict mode Sep 9 23:59:44.770350 kernel: efivars: Registered efivars operations Sep 9 23:59:44.770357 kernel: vgaarb: loaded Sep 9 23:59:44.770365 kernel: clocksource: Switched to clocksource arch_sys_counter Sep 9 23:59:44.770372 kernel: VFS: Disk quotas dquot_6.6.0 Sep 9 23:59:44.770379 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Sep 9 23:59:44.770386 kernel: pnp: PnP ACPI init Sep 9 23:59:44.770452 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Sep 9 23:59:44.770463 kernel: pnp: PnP ACPI: found 1 devices Sep 9 23:59:44.770472 kernel: NET: Registered PF_INET protocol family Sep 9 23:59:44.770479 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Sep 9 23:59:44.770487 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Sep 9 23:59:44.770494 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Sep 9 23:59:44.770501 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Sep 9 23:59:44.770508 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Sep 9 23:59:44.770515 kernel: TCP: Hash tables configured (established 32768 bind 32768) Sep 9 23:59:44.770522 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 23:59:44.770529 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Sep 9 23:59:44.770538 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Sep 9 23:59:44.770544 kernel: PCI: CLS 0 bytes, default 64 Sep 9 23:59:44.770551 kernel: kvm [1]: HYP mode not available Sep 9 23:59:44.770558 kernel: Initialise system 
trusted keyrings Sep 9 23:59:44.770575 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Sep 9 23:59:44.770582 kernel: Key type asymmetric registered Sep 9 23:59:44.770589 kernel: Asymmetric key parser 'x509' registered Sep 9 23:59:44.770596 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Sep 9 23:59:44.770603 kernel: io scheduler mq-deadline registered Sep 9 23:59:44.770612 kernel: io scheduler kyber registered Sep 9 23:59:44.770619 kernel: io scheduler bfq registered Sep 9 23:59:44.770626 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Sep 9 23:59:44.770633 kernel: ACPI: button: Power Button [PWRB] Sep 9 23:59:44.770640 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Sep 9 23:59:44.770719 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Sep 9 23:59:44.770730 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Sep 9 23:59:44.770737 kernel: thunder_xcv, ver 1.0 Sep 9 23:59:44.770744 kernel: thunder_bgx, ver 1.0 Sep 9 23:59:44.770753 kernel: nicpf, ver 1.0 Sep 9 23:59:44.770760 kernel: nicvf, ver 1.0 Sep 9 23:59:44.770831 kernel: rtc-efi rtc-efi.0: registered as rtc0 Sep 9 23:59:44.770888 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-09-09T23:59:44 UTC (1757462384) Sep 9 23:59:44.770898 kernel: hid: raw HID events driver (C) Jiri Kosina Sep 9 23:59:44.770905 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Sep 9 23:59:44.770912 kernel: watchdog: NMI not fully supported Sep 9 23:59:44.770919 kernel: watchdog: Hard watchdog permanently disabled Sep 9 23:59:44.770928 kernel: NET: Registered PF_INET6 protocol family Sep 9 23:59:44.770935 kernel: Segment Routing with IPv6 Sep 9 23:59:44.770942 kernel: In-situ OAM (IOAM) with IPv6 Sep 9 23:59:44.770949 kernel: NET: Registered PF_PACKET protocol family Sep 9 23:59:44.770956 kernel: Key type dns_resolver registered Sep 9 23:59:44.770963 kernel: registered taskstats version 1 Sep 9 23:59:44.770970 kernel: Loading compiled-in X.509 certificates Sep 9 23:59:44.770977 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.45-flatcar: 61217a1897415238555e2058a4e44c51622b0f87' Sep 9 23:59:44.770984 kernel: Demotion targets for Node 0: null Sep 9 23:59:44.770992 kernel: Key type .fscrypt registered Sep 9 23:59:44.770999 kernel: Key type fscrypt-provisioning registered Sep 9 23:59:44.771006 kernel: ima: No TPM chip found, activating TPM-bypass! Sep 9 23:59:44.771012 kernel: ima: Allocated hash algorithm: sha1 Sep 9 23:59:44.771020 kernel: ima: No architecture policies found Sep 9 23:59:44.771026 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Sep 9 23:59:44.771033 kernel: clk: Disabling unused clocks Sep 9 23:59:44.771040 kernel: PM: genpd: Disabling unused power domains Sep 9 23:59:44.771047 kernel: Warning: unable to open an initial console. Sep 9 23:59:44.771056 kernel: Freeing unused kernel memory: 38912K Sep 9 23:59:44.771063 kernel: Run /init as init process Sep 9 23:59:44.771069 kernel: with arguments: Sep 9 23:59:44.771076 kernel: /init Sep 9 23:59:44.771083 kernel: with environment: Sep 9 23:59:44.771090 kernel: HOME=/ Sep 9 23:59:44.771096 kernel: TERM=linux Sep 9 23:59:44.771103 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Sep 9 23:59:44.771111 systemd[1]: Successfully made /usr/ read-only. 
Sep 9 23:59:44.771122 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:59:44.771130 systemd[1]: Detected virtualization kvm. Sep 9 23:59:44.771138 systemd[1]: Detected architecture arm64. Sep 9 23:59:44.771145 systemd[1]: Running in initrd. Sep 9 23:59:44.771152 systemd[1]: No hostname configured, using default hostname. Sep 9 23:59:44.771160 systemd[1]: Hostname set to . Sep 9 23:59:44.771167 systemd[1]: Initializing machine ID from VM UUID. Sep 9 23:59:44.771176 systemd[1]: Queued start job for default target initrd.target. Sep 9 23:59:44.771184 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:59:44.771191 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:59:44.771199 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Sep 9 23:59:44.771207 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:59:44.771215 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Sep 9 23:59:44.771223 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Sep 9 23:59:44.771238 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Sep 9 23:59:44.771249 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Sep 9 23:59:44.771260 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:59:44.771270 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:59:44.771280 systemd[1]: Reached target paths.target - Path Units. Sep 9 23:59:44.771288 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:59:44.771296 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:59:44.771303 systemd[1]: Reached target timers.target - Timer Units. Sep 9 23:59:44.771313 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 23:59:44.771321 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 23:59:44.771328 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Sep 9 23:59:44.771336 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Sep 9 23:59:44.771343 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:59:44.771351 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:59:44.771359 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:59:44.771366 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 23:59:44.771374 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Sep 9 23:59:44.771383 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:59:44.771391 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Sep 9 23:59:44.771399 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Sep 9 23:59:44.771406 systemd[1]: Starting systemd-fsck-usr.service... Sep 9 23:59:44.771414 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:59:44.771421 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:59:44.771430 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:59:44.771439 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Sep 9 23:59:44.771448 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:59:44.771456 systemd[1]: Finished systemd-fsck-usr.service. Sep 9 23:59:44.771464 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 23:59:44.771488 systemd-journald[245]: Collecting audit messages is disabled. Sep 9 23:59:44.771509 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:59:44.771517 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Sep 9 23:59:44.771525 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Sep 9 23:59:44.771532 kernel: Bridge firewalling registered Sep 9 23:59:44.771542 systemd-journald[245]: Journal started Sep 9 23:59:44.771561 systemd-journald[245]: Runtime Journal (/run/log/journal/dba6216a254a46b8b416db45c6d65f6c) is 6M, max 48.5M, 42.4M free. Sep 9 23:59:44.756919 systemd-modules-load[247]: Inserted module 'overlay' Sep 9 23:59:44.771829 systemd-modules-load[247]: Inserted module 'br_netfilter' Sep 9 23:59:44.775314 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:59:44.778689 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 23:59:44.779990 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:59:44.784400 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:59:44.786073 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 23:59:44.794945 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:59:44.796643 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:59:44.801735 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Sep 9 23:59:44.803891 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:59:44.805441 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:59:44.807537 systemd-tmpfiles[282]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Sep 9 23:59:44.810498 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:59:44.819976 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Sep 9 23:59:44.830030 dracut-cmdline[284]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=fc7b279c2d918629032c01551b74c66c198cf923a976f9b3bc0d959e7c2302db Sep 9 23:59:44.861157 systemd-resolved[290]: Positive Trust Anchors: Sep 9 23:59:44.861175 systemd-resolved[290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 23:59:44.861205 systemd-resolved[290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:59:44.866075 systemd-resolved[290]: Defaulting to hostname 'linux'. Sep 9 23:59:44.867269 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:59:44.872087 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:59:44.907623 kernel: SCSI subsystem initialized Sep 9 23:59:44.912608 kernel: Loading iSCSI transport class v2.0-870. Sep 9 23:59:44.919586 kernel: iscsi: registered transport (tcp) Sep 9 23:59:44.932594 kernel: iscsi: registered transport (qla4xxx) Sep 9 23:59:44.932612 kernel: QLogic iSCSI HBA Driver Sep 9 23:59:44.949328 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Sep 9 23:59:44.970111 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:59:44.972904 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:59:45.023668 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Sep 9 23:59:45.026272 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Sep 9 23:59:45.094626 kernel: raid6: neonx8 gen() 15758 MB/s Sep 9 23:59:45.111599 kernel: raid6: neonx4 gen() 15793 MB/s Sep 9 23:59:45.128599 kernel: raid6: neonx2 gen() 13223 MB/s Sep 9 23:59:45.145604 kernel: raid6: neonx1 gen() 10403 MB/s Sep 9 23:59:45.162817 kernel: raid6: int64x8 gen() 6889 MB/s Sep 9 23:59:45.179605 kernel: raid6: int64x4 gen() 7346 MB/s Sep 9 23:59:45.196601 kernel: raid6: int64x2 gen() 6092 MB/s Sep 9 23:59:45.213583 kernel: raid6: int64x1 gen() 5053 MB/s Sep 9 23:59:45.213599 kernel: raid6: using algorithm neonx4 gen() 15793 MB/s Sep 9 23:59:45.230595 kernel: raid6: .... xor() 12325 MB/s, rmw enabled Sep 9 23:59:45.230613 kernel: raid6: using neon recovery algorithm Sep 9 23:59:45.235913 kernel: xor: measuring software checksum speed Sep 9 23:59:45.235943 kernel: 8regs : 21607 MB/sec Sep 9 23:59:45.236971 kernel: 32regs : 21687 MB/sec Sep 9 23:59:45.236986 kernel: arm64_neon : 28109 MB/sec Sep 9 23:59:45.236997 kernel: xor: using function: arm64_neon (28109 MB/sec) Sep 9 23:59:45.291602 kernel: Btrfs loaded, zoned=no, fsverity=no Sep 9 23:59:45.297689 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Sep 9 23:59:45.300216 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:59:45.324731 systemd-udevd[500]: Using default interface naming scheme 'v255'. Sep 9 23:59:45.328762 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:59:45.331206 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Sep 9 23:59:45.357802 dracut-pre-trigger[509]: rd.md=0: removing MD RAID activation Sep 9 23:59:45.379921 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 23:59:45.382384 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:59:45.430727 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:59:45.434102 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Sep 9 23:59:45.479012 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Sep 9 23:59:45.479156 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Sep 9 23:59:45.489937 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Sep 9 23:59:45.489984 kernel: GPT:9289727 != 19775487 Sep 9 23:59:45.489995 kernel: GPT:Alternate GPT header not at the end of the disk. Sep 9 23:59:45.491796 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:59:45.497860 kernel: GPT:9289727 != 19775487 Sep 9 23:59:45.497881 kernel: GPT: Use GNU Parted to correct GPT errors. Sep 9 23:59:45.497891 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 23:59:45.491913 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:59:45.497368 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:59:45.499998 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:59:45.531892 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:59:45.538965 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Sep 9 23:59:45.547927 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Sep 9 23:59:45.556787 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 23:59:45.565358 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Sep 9 23:59:45.571601 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Sep 9 23:59:45.572808 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Sep 9 23:59:45.575895 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 23:59:45.578170 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:59:45.580386 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:59:45.583264 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Sep 9 23:59:45.585182 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Sep 9 23:59:45.598890 disk-uuid[592]: Primary Header is updated. Sep 9 23:59:45.598890 disk-uuid[592]: Secondary Entries is updated. Sep 9 23:59:45.598890 disk-uuid[592]: Secondary Header is updated. 
Sep 9 23:59:45.602600 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 23:59:45.602557 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Sep 9 23:59:46.612531 disk-uuid[597]: The operation has completed successfully. Sep 9 23:59:46.614067 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Sep 9 23:59:46.642740 systemd[1]: disk-uuid.service: Deactivated successfully. Sep 9 23:59:46.642865 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Sep 9 23:59:46.666270 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Sep 9 23:59:46.694599 sh[612]: Success Sep 9 23:59:46.707962 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Sep 9 23:59:46.708004 kernel: device-mapper: uevent: version 1.0.3 Sep 9 23:59:46.709337 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Sep 9 23:59:46.717586 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Sep 9 23:59:46.752184 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Sep 9 23:59:46.754329 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Sep 9 23:59:46.766077 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Sep 9 23:59:46.774597 kernel: BTRFS: device fsid 2bc16190-0dd5-44d6-b331-3d703f5a1d1f devid 1 transid 40 /dev/mapper/usr (253:0) scanned by mount (624) Sep 9 23:59:46.774987 kernel: BTRFS info (device dm-0): first mount of filesystem 2bc16190-0dd5-44d6-b331-3d703f5a1d1f Sep 9 23:59:46.776303 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Sep 9 23:59:46.780580 kernel: BTRFS info (device dm-0): disabling log replay at mount time Sep 9 23:59:46.780634 kernel: BTRFS info (device dm-0): enabling free space tree Sep 9 23:59:46.781303 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Sep 9 23:59:46.782724 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Sep 9 23:59:46.784314 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Sep 9 23:59:46.785127 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Sep 9 23:59:46.786886 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Sep 9 23:59:46.810973 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (655) Sep 9 23:59:46.811024 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:59:46.811035 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 23:59:46.814582 kernel: BTRFS info (device vda6): turning on async discard Sep 9 23:59:46.814635 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 23:59:46.818612 kernel: BTRFS info (device vda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:59:46.821615 systemd[1]: Finished ignition-setup.service - Ignition (setup). Sep 9 23:59:46.824087 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Sep 9 23:59:46.892918 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:59:46.896000 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Sep 9 23:59:46.936455 systemd-networkd[798]: lo: Link UP Sep 9 23:59:46.937374 systemd-networkd[798]: lo: Gained carrier Sep 9 23:59:46.938114 systemd-networkd[798]: Enumeration completed Sep 9 23:59:46.938195 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 23:59:46.941030 ignition[702]: Ignition 2.21.0 Sep 9 23:59:46.939393 systemd[1]: Reached target network.target - Network. Sep 9 23:59:46.941037 ignition[702]: Stage: fetch-offline Sep 9 23:59:46.940776 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:59:46.941065 ignition[702]: no configs at "/usr/lib/ignition/base.d" Sep 9 23:59:46.940779 systemd-networkd[798]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:59:46.941072 ignition[702]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 23:59:46.942303 systemd-networkd[798]: eth0: Link UP Sep 9 23:59:46.941303 ignition[702]: parsed url from cmdline: "" Sep 9 23:59:46.942447 systemd-networkd[798]: eth0: Gained carrier Sep 9 23:59:46.941307 ignition[702]: no config URL provided Sep 9 23:59:46.942456 systemd-networkd[798]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:59:46.941311 ignition[702]: reading system config file "/usr/lib/ignition/user.ign" Sep 9 23:59:46.941319 ignition[702]: no config at "/usr/lib/ignition/user.ign" Sep 9 23:59:46.941337 ignition[702]: op(1): [started] loading QEMU firmware config module Sep 9 23:59:46.941342 ignition[702]: op(1): executing: "modprobe" "qemu_fw_cfg" Sep 9 23:59:46.951043 ignition[702]: op(1): [finished] loading QEMU firmware config module Sep 9 23:59:46.974628 systemd-networkd[798]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 23:59:46.999775 ignition[702]: parsing config with SHA512: 5fc5ce2bb03b95d380050d7d3b5fa0254af7f28c47e54aa01cbc096399f83a977444964d8c9936115c681f6e9d70bbd04fabd6d11e8767145b3d621f4d45c035 Sep 9 23:59:47.005780 unknown[702]: fetched base config from "system" Sep 9 23:59:47.006177 ignition[702]: fetch-offline: fetch-offline passed Sep 9 23:59:47.005790 unknown[702]: fetched user config from "qemu" Sep 9 23:59:47.006235 ignition[702]: Ignition finished successfully Sep 9 23:59:47.009668 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 23:59:47.010973 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Sep 9 23:59:47.013683 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Sep 9 23:59:47.037125 ignition[812]: Ignition 2.21.0 Sep 9 23:59:47.037141 ignition[812]: Stage: kargs Sep 9 23:59:47.037282 ignition[812]: no configs at "/usr/lib/ignition/base.d" Sep 9 23:59:47.037291 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 23:59:47.039934 ignition[812]: kargs: kargs passed Sep 9 23:59:47.040257 ignition[812]: Ignition finished successfully Sep 9 23:59:47.043004 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Sep 9 23:59:47.044940 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Sep 9 23:59:47.081501 ignition[820]: Ignition 2.21.0 Sep 9 23:59:47.081517 ignition[820]: Stage: disks Sep 9 23:59:47.081720 ignition[820]: no configs at "/usr/lib/ignition/base.d" Sep 9 23:59:47.084899 systemd[1]: Finished ignition-disks.service - Ignition (disks). Sep 9 23:59:47.081729 ignition[820]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 23:59:47.086183 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Sep 9 23:59:47.082816 ignition[820]: disks: disks passed Sep 9 23:59:47.088033 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Sep 9 23:59:47.082867 ignition[820]: Ignition finished successfully Sep 9 23:59:47.090197 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:59:47.092247 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:59:47.093741 systemd[1]: Reached target basic.target - Basic System. Sep 9 23:59:47.096395 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Sep 9 23:59:47.119917 systemd-fsck[830]: ROOT: clean, 15/553520 files, 52789/553472 blocks Sep 9 23:59:47.124475 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Sep 9 23:59:47.126704 systemd[1]: Mounting sysroot.mount - /sysroot... Sep 9 23:59:47.187516 systemd[1]: Mounted sysroot.mount - /sysroot. Sep 9 23:59:47.189128 kernel: EXT4-fs (vda9): mounted filesystem 7cc0d7f3-e4a1-4dc4-8b58-ceece0d874c1 r/w with ordered data mode. Quota mode: none. Sep 9 23:59:47.188779 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Sep 9 23:59:47.191331 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 23:59:47.193046 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Sep 9 23:59:47.194026 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Sep 9 23:59:47.194063 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Sep 9 23:59:47.194099 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 23:59:47.205050 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Sep 9 23:59:47.207122 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Sep 9 23:59:47.213728 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (839) Sep 9 23:59:47.216243 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:59:47.216278 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 23:59:47.218691 kernel: BTRFS info (device vda6): turning on async discard Sep 9 23:59:47.218734 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 23:59:47.220459 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 23:59:47.242597 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Sep 9 23:59:47.245498 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Sep 9 23:59:47.249367 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Sep 9 23:59:47.253270 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Sep 9 23:59:47.319926 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Sep 9 23:59:47.321957 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Sep 9 23:59:47.323520 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Sep 9 23:59:47.339594 kernel: BTRFS info (device vda6): last unmount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:59:47.354632 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Sep 9 23:59:47.362525 ignition[954]: INFO : Ignition 2.21.0 Sep 9 23:59:47.362525 ignition[954]: INFO : Stage: mount Sep 9 23:59:47.364199 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 23:59:47.364199 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 23:59:47.364199 ignition[954]: INFO : mount: mount passed Sep 9 23:59:47.364199 ignition[954]: INFO : Ignition finished successfully Sep 9 23:59:47.365066 systemd[1]: Finished ignition-mount.service - Ignition (mount). Sep 9 23:59:47.369198 systemd[1]: Starting ignition-files.service - Ignition (files)... Sep 9 23:59:47.774354 systemd[1]: sysroot-oem.mount: Deactivated successfully. Sep 9 23:59:47.775878 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Sep 9 23:59:47.810224 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (967) Sep 9 23:59:47.810285 kernel: BTRFS info (device vda6): first mount of filesystem 3a7d3e29-58a5-4f0c-ac69-b528108338f5 Sep 9 23:59:47.810297 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Sep 9 23:59:47.815776 kernel: BTRFS info (device vda6): turning on async discard Sep 9 23:59:47.815845 kernel: BTRFS info (device vda6): enabling free space tree Sep 9 23:59:47.817504 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Sep 9 23:59:47.845521 ignition[984]: INFO : Ignition 2.21.0 Sep 9 23:59:47.845521 ignition[984]: INFO : Stage: files Sep 9 23:59:47.847704 ignition[984]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 23:59:47.847704 ignition[984]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 23:59:47.847704 ignition[984]: DEBUG : files: compiled without relabeling support, skipping Sep 9 23:59:47.851595 ignition[984]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Sep 9 23:59:47.851595 ignition[984]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Sep 9 23:59:47.855669 ignition[984]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Sep 9 23:59:47.856991 ignition[984]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Sep 9 23:59:47.856991 ignition[984]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Sep 9 23:59:47.856244 unknown[984]: wrote ssh authorized keys file for user: core Sep 9 23:59:47.860889 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 9 23:59:47.860889 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Sep 9 23:59:47.908535 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Sep 9 23:59:48.403935 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Sep 9 23:59:48.403935 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 
23:59:48.408286 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Sep 9 23:59:48.586507 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Sep 9 23:59:48.696850 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Sep 9 23:59:48.696850 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Sep 9 23:59:48.700386 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Sep 9 23:59:48.700386 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Sep 9 23:59:48.700386 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Sep 9 23:59:48.700386 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 23:59:48.700386 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Sep 9 23:59:48.700386 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 23:59:48.700386 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Sep 9 23:59:48.714193 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 23:59:48.714193 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Sep 9 23:59:48.714193 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 23:59:48.714193 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 23:59:48.714193 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 23:59:48.714193 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Sep 9 23:59:48.712047 systemd-networkd[798]: eth0: Gained IPv6LL Sep 9 23:59:49.017805 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Sep 9 23:59:49.489238 ignition[984]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Sep 9 23:59:49.489238 ignition[984]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Sep 9 23:59:49.493095 ignition[984]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 23:59:49.497454 ignition[984]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at 
"/sysroot/etc/systemd/system/prepare-helm.service" Sep 9 23:59:49.497454 ignition[984]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Sep 9 23:59:49.497454 ignition[984]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Sep 9 23:59:49.497454 ignition[984]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 23:59:49.504828 ignition[984]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Sep 9 23:59:49.504828 ignition[984]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Sep 9 23:59:49.504828 ignition[984]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Sep 9 23:59:49.514729 ignition[984]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 23:59:49.518366 ignition[984]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Sep 9 23:59:49.520642 ignition[984]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Sep 9 23:59:49.520642 ignition[984]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Sep 9 23:59:49.520642 ignition[984]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Sep 9 23:59:49.520642 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Sep 9 23:59:49.520642 ignition[984]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Sep 9 23:59:49.520642 ignition[984]: INFO : files: files passed Sep 9 23:59:49.520642 ignition[984]: INFO : Ignition finished successfully Sep 9 23:59:49.522655 systemd[1]: Finished ignition-files.service - Ignition (files). Sep 9 23:59:49.524918 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Sep 9 23:59:49.526908 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Sep 9 23:59:49.550989 systemd[1]: ignition-quench.service: Deactivated successfully. Sep 9 23:59:49.552642 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Sep 9 23:59:49.555234 initrd-setup-root-after-ignition[1013]: grep: /sysroot/oem/oem-release: No such file or directory Sep 9 23:59:49.558337 initrd-setup-root-after-ignition[1015]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 23:59:49.558337 initrd-setup-root-after-ignition[1015]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Sep 9 23:59:49.562068 initrd-setup-root-after-ignition[1019]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Sep 9 23:59:49.563641 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 23:59:49.565162 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Sep 9 23:59:49.569491 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Sep 9 23:59:49.623545 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Sep 9 23:59:49.623696 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. 
Sep 9 23:59:49.625971 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Sep 9 23:59:49.627910 systemd[1]: Reached target initrd.target - Initrd Default Target. Sep 9 23:59:49.629792 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Sep 9 23:59:49.630699 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Sep 9 23:59:49.660697 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 23:59:49.663901 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Sep 9 23:59:49.682924 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:59:49.684453 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:59:49.686918 systemd[1]: Stopped target timers.target - Timer Units. Sep 9 23:59:49.688881 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Sep 9 23:59:49.689023 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Sep 9 23:59:49.691817 systemd[1]: Stopped target initrd.target - Initrd Default Target. Sep 9 23:59:49.694051 systemd[1]: Stopped target basic.target - Basic System. Sep 9 23:59:49.695861 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Sep 9 23:59:49.697523 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Sep 9 23:59:49.699664 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Sep 9 23:59:49.701766 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Sep 9 23:59:49.703854 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Sep 9 23:59:49.706722 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Sep 9 23:59:49.708079 systemd[1]: Stopped target sysinit.target - System Initialization. Sep 9 23:59:49.711797 systemd[1]: Stopped target local-fs.target - Local File Systems. Sep 9 23:59:49.713589 systemd[1]: Stopped target swap.target - Swaps. Sep 9 23:59:49.715212 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Sep 9 23:59:49.715346 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Sep 9 23:59:49.718452 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:59:49.720139 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:59:49.722180 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Sep 9 23:59:49.725601 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:59:49.726898 systemd[1]: dracut-initqueue.service: Deactivated successfully. Sep 9 23:59:49.727017 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Sep 9 23:59:49.730067 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Sep 9 23:59:49.730188 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Sep 9 23:59:49.732424 systemd[1]: Stopped target paths.target - Path Units. Sep 9 23:59:49.734056 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Sep 9 23:59:49.736674 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:59:49.738187 systemd[1]: Stopped target slices.target - Slice Units. Sep 9 23:59:49.740607 systemd[1]: Stopped target sockets.target - Socket Units. 
Sep 9 23:59:49.742348 systemd[1]: iscsid.socket: Deactivated successfully. Sep 9 23:59:49.742472 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Sep 9 23:59:49.744063 systemd[1]: iscsiuio.socket: Deactivated successfully. Sep 9 23:59:49.744179 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Sep 9 23:59:49.745841 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Sep 9 23:59:49.746009 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Sep 9 23:59:49.747905 systemd[1]: ignition-files.service: Deactivated successfully. Sep 9 23:59:49.748053 systemd[1]: Stopped ignition-files.service - Ignition (files). Sep 9 23:59:49.750715 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Sep 9 23:59:49.757993 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Sep 9 23:59:49.758987 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Sep 9 23:59:49.759183 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:59:49.761092 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Sep 9 23:59:49.761238 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Sep 9 23:59:49.767905 systemd[1]: initrd-cleanup.service: Deactivated successfully. Sep 9 23:59:49.768016 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Sep 9 23:59:49.775786 systemd[1]: sysroot-boot.mount: Deactivated successfully. Sep 9 23:59:49.780621 ignition[1040]: INFO : Ignition 2.21.0 Sep 9 23:59:49.780621 ignition[1040]: INFO : Stage: umount Sep 9 23:59:49.782331 ignition[1040]: INFO : no configs at "/usr/lib/ignition/base.d" Sep 9 23:59:49.782331 ignition[1040]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Sep 9 23:59:49.789943 ignition[1040]: INFO : umount: umount passed Sep 9 23:59:49.790859 ignition[1040]: INFO : Ignition finished successfully Sep 9 23:59:49.792275 systemd[1]: ignition-mount.service: Deactivated successfully. Sep 9 23:59:49.793371 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Sep 9 23:59:49.794940 systemd[1]: Stopped target network.target - Network. Sep 9 23:59:49.795909 systemd[1]: ignition-disks.service: Deactivated successfully. Sep 9 23:59:49.795975 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Sep 9 23:59:49.799236 systemd[1]: ignition-kargs.service: Deactivated successfully. Sep 9 23:59:49.799285 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Sep 9 23:59:49.800869 systemd[1]: ignition-setup.service: Deactivated successfully. Sep 9 23:59:49.800927 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Sep 9 23:59:49.802452 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Sep 9 23:59:49.802496 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Sep 9 23:59:49.804449 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Sep 9 23:59:49.806269 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Sep 9 23:59:49.816112 systemd[1]: systemd-resolved.service: Deactivated successfully. Sep 9 23:59:49.816232 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Sep 9 23:59:49.820959 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Sep 9 23:59:49.821175 systemd[1]: systemd-networkd.service: Deactivated successfully. 
Sep 9 23:59:49.822607 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Sep 9 23:59:49.826296 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Sep 9 23:59:49.827595 systemd[1]: Stopped target network-pre.target - Preparation for Network. Sep 9 23:59:49.828885 systemd[1]: systemd-networkd.socket: Deactivated successfully. Sep 9 23:59:49.828932 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:59:49.832008 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Sep 9 23:59:49.833711 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Sep 9 23:59:49.833770 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Sep 9 23:59:49.835949 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 9 23:59:49.835993 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:59:49.839247 systemd[1]: systemd-modules-load.service: Deactivated successfully. Sep 9 23:59:49.839288 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Sep 9 23:59:49.841095 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Sep 9 23:59:49.841139 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:59:49.844587 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:59:49.849848 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 9 23:59:49.849905 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:59:49.855238 systemd[1]: systemd-udevd.service: Deactivated successfully. Sep 9 23:59:49.861788 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:59:49.863696 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Sep 9 23:59:49.863743 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Sep 9 23:59:49.869812 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Sep 9 23:59:49.869851 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:59:49.871686 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Sep 9 23:59:49.871737 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Sep 9 23:59:49.877552 systemd[1]: dracut-cmdline.service: Deactivated successfully. Sep 9 23:59:49.877622 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Sep 9 23:59:49.880503 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Sep 9 23:59:49.880550 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Sep 9 23:59:49.883378 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Sep 9 23:59:49.884622 systemd[1]: systemd-network-generator.service: Deactivated successfully. Sep 9 23:59:49.884687 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:59:49.887557 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Sep 9 23:59:49.887669 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:59:49.890944 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. 
Sep 9 23:59:49.890989 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:59:49.894689 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Sep 9 23:59:49.894736 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:59:49.897087 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Sep 9 23:59:49.897133 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:59:49.901457 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Sep 9 23:59:49.901503 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Sep 9 23:59:49.901532 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Sep 9 23:59:49.901594 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Sep 9 23:59:49.901888 systemd[1]: sysroot-boot.service: Deactivated successfully. Sep 9 23:59:49.902003 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Sep 9 23:59:49.903192 systemd[1]: network-cleanup.service: Deactivated successfully. Sep 9 23:59:49.903298 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Sep 9 23:59:49.905748 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Sep 9 23:59:49.905827 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Sep 9 23:59:49.909230 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Sep 9 23:59:49.911205 systemd[1]: initrd-setup-root.service: Deactivated successfully. Sep 9 23:59:49.911273 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Sep 9 23:59:49.914449 systemd[1]: Starting initrd-switch-root.service - Switch Root... Sep 9 23:59:49.945951 systemd[1]: Switching root. Sep 9 23:59:49.976868 systemd-journald[245]: Journal stopped Sep 9 23:59:50.776535 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Sep 9 23:59:50.776616 kernel: SELinux: policy capability network_peer_controls=1 Sep 9 23:59:50.776639 kernel: SELinux: policy capability open_perms=1 Sep 9 23:59:50.776650 kernel: SELinux: policy capability extended_socket_class=1 Sep 9 23:59:50.776659 kernel: SELinux: policy capability always_check_network=0 Sep 9 23:59:50.776676 kernel: SELinux: policy capability cgroup_seclabel=1 Sep 9 23:59:50.776690 kernel: SELinux: policy capability nnp_nosuid_transition=1 Sep 9 23:59:50.776700 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Sep 9 23:59:50.776709 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Sep 9 23:59:50.776718 kernel: SELinux: policy capability userspace_initial_context=0 Sep 9 23:59:50.776727 kernel: audit: type=1403 audit(1757462390.174:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Sep 9 23:59:50.776738 systemd[1]: Successfully loaded SELinux policy in 56.773ms. Sep 9 23:59:50.776756 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.636ms. Sep 9 23:59:50.776767 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Sep 9 23:59:50.776777 systemd[1]: Detected virtualization kvm. 
Sep 9 23:59:50.776792 systemd[1]: Detected architecture arm64. Sep 9 23:59:50.776802 systemd[1]: Detected first boot. Sep 9 23:59:50.776811 systemd[1]: Initializing machine ID from VM UUID. Sep 9 23:59:50.776927 zram_generator::config[1085]: No configuration found. Sep 9 23:59:50.776939 kernel: NET: Registered PF_VSOCK protocol family Sep 9 23:59:50.776948 systemd[1]: Populated /etc with preset unit settings. Sep 9 23:59:50.776959 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Sep 9 23:59:50.776969 systemd[1]: initrd-switch-root.service: Deactivated successfully. Sep 9 23:59:50.776978 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Sep 9 23:59:50.776991 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Sep 9 23:59:50.777001 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Sep 9 23:59:50.777015 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Sep 9 23:59:50.777024 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Sep 9 23:59:50.777034 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Sep 9 23:59:50.777044 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Sep 9 23:59:50.777054 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Sep 9 23:59:50.777064 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Sep 9 23:59:50.777075 systemd[1]: Created slice user.slice - User and Session Slice. Sep 9 23:59:50.777085 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Sep 9 23:59:50.777094 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Sep 9 23:59:50.777104 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Sep 9 23:59:50.777114 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Sep 9 23:59:50.777123 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Sep 9 23:59:50.777133 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Sep 9 23:59:50.777144 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Sep 9 23:59:50.777154 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Sep 9 23:59:50.777165 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Sep 9 23:59:50.777175 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Sep 9 23:59:50.777184 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Sep 9 23:59:50.777194 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Sep 9 23:59:50.777204 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Sep 9 23:59:50.777213 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Sep 9 23:59:50.777223 systemd[1]: Reached target remote-fs.target - Remote File Systems. Sep 9 23:59:50.777233 systemd[1]: Reached target slices.target - Slice Units. Sep 9 23:59:50.777244 systemd[1]: Reached target swap.target - Swaps. Sep 9 23:59:50.777253 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Sep 9 23:59:50.777263 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Sep 9 23:59:50.777273 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Sep 9 23:59:50.777283 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Sep 9 23:59:50.777293 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Sep 9 23:59:50.777302 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Sep 9 23:59:50.777312 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Sep 9 23:59:50.777321 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Sep 9 23:59:50.777332 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Sep 9 23:59:50.777342 systemd[1]: Mounting media.mount - External Media Directory... Sep 9 23:59:50.777352 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Sep 9 23:59:50.777361 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Sep 9 23:59:50.777371 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Sep 9 23:59:50.777382 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Sep 9 23:59:50.777392 systemd[1]: Reached target machines.target - Containers. Sep 9 23:59:50.777401 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Sep 9 23:59:50.777411 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:59:50.777423 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Sep 9 23:59:50.777433 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Sep 9 23:59:50.777443 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:59:50.777452 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Sep 9 23:59:50.777462 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:59:50.777472 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Sep 9 23:59:50.777482 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:59:50.777492 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Sep 9 23:59:50.777503 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Sep 9 23:59:50.777513 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Sep 9 23:59:50.777522 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Sep 9 23:59:50.777532 systemd[1]: Stopped systemd-fsck-usr.service. Sep 9 23:59:50.777542 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:59:50.777551 kernel: fuse: init (API version 7.41) Sep 9 23:59:50.777561 systemd[1]: Starting systemd-journald.service - Journal Service... Sep 9 23:59:50.777659 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Sep 9 23:59:50.777671 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Sep 9 23:59:50.777684 kernel: ACPI: bus type drm_connector registered Sep 9 23:59:50.777693 kernel: loop: module loaded Sep 9 23:59:50.777702 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Sep 9 23:59:50.777712 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Sep 9 23:59:50.777722 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Sep 9 23:59:50.777733 systemd[1]: verity-setup.service: Deactivated successfully. Sep 9 23:59:50.777743 systemd[1]: Stopped verity-setup.service. Sep 9 23:59:50.777753 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Sep 9 23:59:50.777763 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Sep 9 23:59:50.777773 systemd[1]: Mounted media.mount - External Media Directory. Sep 9 23:59:50.777809 systemd-journald[1153]: Collecting audit messages is disabled. Sep 9 23:59:50.777832 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Sep 9 23:59:50.777843 systemd-journald[1153]: Journal started Sep 9 23:59:50.777927 systemd-journald[1153]: Runtime Journal (/run/log/journal/dba6216a254a46b8b416db45c6d65f6c) is 6M, max 48.5M, 42.4M free. Sep 9 23:59:50.548271 systemd[1]: Queued start job for default target multi-user.target. Sep 9 23:59:50.567712 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Sep 9 23:59:50.568126 systemd[1]: systemd-journald.service: Deactivated successfully. Sep 9 23:59:50.780279 systemd[1]: Started systemd-journald.service - Journal Service. Sep 9 23:59:50.780994 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Sep 9 23:59:50.783060 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Sep 9 23:59:50.784505 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Sep 9 23:59:50.787990 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Sep 9 23:59:50.789778 systemd[1]: modprobe@configfs.service: Deactivated successfully. Sep 9 23:59:50.789945 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Sep 9 23:59:50.791378 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:59:50.791515 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:59:50.792986 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:59:50.793141 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:59:50.794592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:59:50.794774 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:59:50.796239 systemd[1]: modprobe@fuse.service: Deactivated successfully. Sep 9 23:59:50.796395 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Sep 9 23:59:50.797993 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:59:50.798158 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:59:50.799616 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Sep 9 23:59:50.801101 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Sep 9 23:59:50.802907 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Sep 9 23:59:50.804485 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Sep 9 23:59:50.817284 systemd[1]: Reached target network-pre.target - Preparation for Network. Sep 9 23:59:50.819735 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Sep 9 23:59:50.824697 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Sep 9 23:59:50.825822 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Sep 9 23:59:50.825860 systemd[1]: Reached target local-fs.target - Local File Systems. Sep 9 23:59:50.827808 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Sep 9 23:59:50.832655 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Sep 9 23:59:50.833733 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:59:50.835092 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Sep 9 23:59:50.837159 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Sep 9 23:59:50.838684 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:59:50.839981 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Sep 9 23:59:50.841232 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:59:50.842119 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 9 23:59:50.847745 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Sep 9 23:59:50.849684 systemd-journald[1153]: Time spent on flushing to /var/log/journal/dba6216a254a46b8b416db45c6d65f6c is 20.285ms for 891 entries. Sep 9 23:59:50.849684 systemd-journald[1153]: System Journal (/var/log/journal/dba6216a254a46b8b416db45c6d65f6c) is 8M, max 195.6M, 187.6M free. Sep 9 23:59:50.879087 systemd-journald[1153]: Received client request to flush runtime journal. Sep 9 23:59:50.879133 kernel: loop0: detected capacity change from 0 to 211168 Sep 9 23:59:50.859814 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Sep 9 23:59:50.864636 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Sep 9 23:59:50.867166 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Sep 9 23:59:50.868849 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Sep 9 23:59:50.882198 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Sep 9 23:59:50.884256 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Sep 9 23:59:50.886378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 9 23:59:50.888586 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Sep 9 23:59:50.889112 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Sep 9 23:59:50.891951 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Sep 9 23:59:50.908746 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. Sep 9 23:59:50.908763 systemd-tmpfiles[1202]: ACLs are not supported, ignoring. 
Sep 9 23:59:50.910627 kernel: loop1: detected capacity change from 0 to 119320 Sep 9 23:59:50.912707 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Sep 9 23:59:50.916884 systemd[1]: Starting systemd-sysusers.service - Create System Users... Sep 9 23:59:50.937924 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Sep 9 23:59:50.950116 kernel: loop2: detected capacity change from 0 to 100608 Sep 9 23:59:50.961922 systemd[1]: Finished systemd-sysusers.service - Create System Users. Sep 9 23:59:50.964702 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Sep 9 23:59:50.970970 kernel: loop3: detected capacity change from 0 to 211168 Sep 9 23:59:50.983598 kernel: loop4: detected capacity change from 0 to 119320 Sep 9 23:59:50.985038 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Sep 9 23:59:50.985057 systemd-tmpfiles[1223]: ACLs are not supported, ignoring. Sep 9 23:59:50.989340 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Sep 9 23:59:50.994588 kernel: loop5: detected capacity change from 0 to 100608 Sep 9 23:59:50.998845 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Sep 9 23:59:50.999243 (sd-merge)[1224]: Merged extensions into '/usr'. Sep 9 23:59:51.004622 systemd[1]: Reload requested from client PID 1201 ('systemd-sysext') (unit systemd-sysext.service)... Sep 9 23:59:51.004741 systemd[1]: Reloading... Sep 9 23:59:51.049593 zram_generator::config[1251]: No configuration found. Sep 9 23:59:51.203864 ldconfig[1196]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Sep 9 23:59:51.212469 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Sep 9 23:59:51.212739 systemd[1]: Reloading finished in 207 ms. Sep 9 23:59:51.232587 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Sep 9 23:59:51.235607 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Sep 9 23:59:51.248928 systemd[1]: Starting ensure-sysext.service... Sep 9 23:59:51.250877 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Sep 9 23:59:51.269096 systemd[1]: Reload requested from client PID 1286 ('systemctl') (unit ensure-sysext.service)... Sep 9 23:59:51.269114 systemd[1]: Reloading... Sep 9 23:59:51.273073 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Sep 9 23:59:51.273101 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Sep 9 23:59:51.273334 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Sep 9 23:59:51.273515 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Sep 9 23:59:51.274133 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Sep 9 23:59:51.274320 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Sep 9 23:59:51.274372 systemd-tmpfiles[1287]: ACLs are not supported, ignoring. Sep 9 23:59:51.277207 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. 
Sep 9 23:59:51.277219 systemd-tmpfiles[1287]: Skipping /boot Sep 9 23:59:51.283391 systemd-tmpfiles[1287]: Detected autofs mount point /boot during canonicalization of boot. Sep 9 23:59:51.283407 systemd-tmpfiles[1287]: Skipping /boot Sep 9 23:59:51.328604 zram_generator::config[1314]: No configuration found. Sep 9 23:59:51.458999 systemd[1]: Reloading finished in 189 ms. Sep 9 23:59:51.480694 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Sep 9 23:59:51.495054 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Sep 9 23:59:51.503657 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:59:51.506474 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Sep 9 23:59:51.514463 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Sep 9 23:59:51.519819 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Sep 9 23:59:51.523230 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Sep 9 23:59:51.530052 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Sep 9 23:59:51.533516 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:59:51.535533 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:59:51.545211 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:59:51.548101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:59:51.550053 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:59:51.550182 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:59:51.552876 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Sep 9 23:59:51.554808 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:59:51.555004 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:59:51.567375 systemd[1]: Starting systemd-update-done.service - Update is Completed... Sep 9 23:59:51.570335 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Sep 9 23:59:51.574744 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:59:51.574932 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:59:51.577126 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:59:51.577290 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:59:51.578034 systemd-udevd[1355]: Using default interface naming scheme 'v255'. Sep 9 23:59:51.581086 systemd[1]: Finished systemd-update-done.service - Update is Completed. Sep 9 23:59:51.589513 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Sep 9 23:59:51.592149 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Sep 9 23:59:51.596804 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Sep 9 23:59:51.600892 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Sep 9 23:59:51.605392 augenrules[1389]: No rules Sep 9 23:59:51.605802 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Sep 9 23:59:51.607318 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Sep 9 23:59:51.607446 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Sep 9 23:59:51.608463 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Sep 9 23:59:51.611155 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:59:51.611390 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:59:51.615670 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Sep 9 23:59:51.617548 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Sep 9 23:59:51.620289 systemd[1]: modprobe@drm.service: Deactivated successfully. Sep 9 23:59:51.621286 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Sep 9 23:59:51.622963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Sep 9 23:59:51.623643 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Sep 9 23:59:51.626225 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Sep 9 23:59:51.627616 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Sep 9 23:59:51.629179 systemd[1]: Started systemd-userdbd.service - User Database Manager. Sep 9 23:59:51.633227 systemd[1]: modprobe@loop.service: Deactivated successfully. Sep 9 23:59:51.634346 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Sep 9 23:59:51.645389 systemd[1]: Finished ensure-sysext.service. Sep 9 23:59:51.662077 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Sep 9 23:59:51.665334 systemd[1]: Starting systemd-networkd.service - Network Configuration... Sep 9 23:59:51.667670 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Sep 9 23:59:51.667737 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Sep 9 23:59:51.670418 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Sep 9 23:59:51.673776 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Sep 9 23:59:51.706127 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Sep 9 23:59:51.711097 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Sep 9 23:59:51.730633 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Sep 9 23:59:51.771304 systemd-networkd[1437]: lo: Link UP Sep 9 23:59:51.771312 systemd-networkd[1437]: lo: Gained carrier Sep 9 23:59:51.772218 systemd-networkd[1437]: Enumeration completed Sep 9 23:59:51.772427 systemd[1]: Started systemd-networkd.service - Network Configuration. Sep 9 23:59:51.772625 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:59:51.772641 systemd-networkd[1437]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Sep 9 23:59:51.773195 systemd-networkd[1437]: eth0: Link UP Sep 9 23:59:51.773292 systemd-networkd[1437]: eth0: Gained carrier Sep 9 23:59:51.773306 systemd-networkd[1437]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Sep 9 23:59:51.773891 systemd-resolved[1353]: Positive Trust Anchors: Sep 9 23:59:51.773900 systemd-resolved[1353]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Sep 9 23:59:51.773931 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Sep 9 23:59:51.775665 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Sep 9 23:59:51.778218 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Sep 9 23:59:51.782272 systemd-resolved[1353]: Defaulting to hostname 'linux'. Sep 9 23:59:51.783661 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Sep 9 23:59:51.785007 systemd[1]: Reached target network.target - Network. Sep 9 23:59:51.785936 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Sep 9 23:59:51.787365 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Sep 9 23:59:51.788655 systemd-networkd[1437]: eth0: DHCPv4 address 10.0.0.122/16, gateway 10.0.0.1 acquired from 10.0.0.1 Sep 9 23:59:51.788927 systemd[1]: Reached target sysinit.target - System Initialization. Sep 9 23:59:51.789233 systemd-timesyncd[1438]: Network configuration changed, trying to establish connection. Sep 9 23:59:51.790309 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Sep 9 23:59:51.790475 systemd-timesyncd[1438]: Contacted time server 10.0.0.1:123 (10.0.0.1). Sep 9 23:59:51.790607 systemd-timesyncd[1438]: Initial clock synchronization to Tue 2025-09-09 23:59:51.503079 UTC. Sep 9 23:59:51.791986 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Sep 9 23:59:51.793311 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Sep 9 23:59:51.794853 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Sep 9 23:59:51.794888 systemd[1]: Reached target paths.target - Path Units. 
Sep 9 23:59:51.795807 systemd[1]: Reached target time-set.target - System Time Set. Sep 9 23:59:51.796940 systemd[1]: Started logrotate.timer - Daily rotation of log files. Sep 9 23:59:51.798343 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Sep 9 23:59:51.799708 systemd[1]: Reached target timers.target - Timer Units. Sep 9 23:59:51.801723 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Sep 9 23:59:51.803968 systemd[1]: Starting docker.socket - Docker Socket for the API... Sep 9 23:59:51.807166 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Sep 9 23:59:51.808878 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Sep 9 23:59:51.810591 systemd[1]: Reached target ssh-access.target - SSH Access Available. Sep 9 23:59:51.814832 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Sep 9 23:59:51.817022 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Sep 9 23:59:51.819173 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Sep 9 23:59:51.820698 systemd[1]: Listening on docker.socket - Docker Socket for the API. Sep 9 23:59:51.823200 systemd[1]: Reached target sockets.target - Socket Units. Sep 9 23:59:51.824704 systemd[1]: Reached target basic.target - Basic System. Sep 9 23:59:51.825738 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:59:51.825769 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Sep 9 23:59:51.826700 systemd[1]: Starting containerd.service - containerd container runtime... Sep 9 23:59:51.829199 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Sep 9 23:59:51.831440 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Sep 9 23:59:51.843407 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Sep 9 23:59:51.846590 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Sep 9 23:59:51.848664 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Sep 9 23:59:51.849970 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Sep 9 23:59:51.853713 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Sep 9 23:59:51.858789 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Sep 9 23:59:51.862167 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Sep 9 23:59:51.862539 extend-filesystems[1472]: Found /dev/vda6 Sep 9 23:59:51.866902 systemd[1]: Starting systemd-logind.service - User Login Management... Sep 9 23:59:51.871228 jq[1471]: false Sep 9 23:59:51.868759 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Sep 9 23:59:51.869162 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Sep 9 23:59:51.871735 systemd[1]: Starting update-engine.service - Update Engine... Sep 9 23:59:51.873943 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Sep 9 23:59:51.875898 extend-filesystems[1472]: Found /dev/vda9 Sep 9 23:59:51.880236 extend-filesystems[1472]: Checking size of /dev/vda9 Sep 9 23:59:51.880045 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Sep 9 23:59:51.885465 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Sep 9 23:59:51.886484 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Sep 9 23:59:51.889218 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Sep 9 23:59:51.892457 jq[1486]: true Sep 9 23:59:51.890609 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Sep 9 23:59:51.893542 systemd[1]: motdgen.service: Deactivated successfully. Sep 9 23:59:51.893761 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Sep 9 23:59:51.905345 extend-filesystems[1472]: Resized partition /dev/vda9 Sep 9 23:59:51.906874 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Sep 9 23:59:51.912742 tar[1495]: linux-arm64/LICENSE Sep 9 23:59:51.912742 tar[1495]: linux-arm64/helm Sep 9 23:59:51.912869 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Sep 9 23:59:51.915313 extend-filesystems[1511]: resize2fs 1.47.2 (1-Jan-2025) Sep 9 23:59:51.916961 jq[1498]: true Sep 9 23:59:51.923545 update_engine[1484]: I20250909 23:59:51.922847 1484 main.cc:92] Flatcar Update Engine starting Sep 9 23:59:51.929342 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Sep 9 23:59:51.953222 dbus-daemon[1469]: [system] SELinux support is enabled Sep 9 23:59:51.954792 systemd[1]: Started dbus.service - D-Bus System Message Bus. Sep 9 23:59:51.958377 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Sep 9 23:59:51.958406 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Sep 9 23:59:51.960405 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Sep 9 23:59:51.960424 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Sep 9 23:59:51.965935 systemd[1]: Started update-engine.service - Update Engine. Sep 9 23:59:51.966079 update_engine[1484]: I20250909 23:59:51.966024 1484 update_check_scheduler.cc:74] Next update check in 6m35s Sep 9 23:59:51.971110 systemd[1]: Started locksmithd.service - Cluster reboot manager. Sep 9 23:59:51.989582 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Sep 9 23:59:51.991551 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Sep 9 23:59:52.004399 systemd-logind[1483]: Watching system buttons on /dev/input/event0 (Power Button) Sep 9 23:59:52.006787 systemd-logind[1483]: New seat seat0. Sep 9 23:59:52.008571 extend-filesystems[1511]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Sep 9 23:59:52.008571 extend-filesystems[1511]: old_desc_blocks = 1, new_desc_blocks = 1 Sep 9 23:59:52.008571 extend-filesystems[1511]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. 
Sep 9 23:59:52.017722 extend-filesystems[1472]: Resized filesystem in /dev/vda9 Sep 9 23:59:52.008936 systemd[1]: Started systemd-logind.service - User Login Management. Sep 9 23:59:52.022500 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Sep 9 23:59:52.021989 systemd[1]: extend-filesystems.service: Deactivated successfully. Sep 9 23:59:52.023609 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Sep 9 23:59:52.025142 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Sep 9 23:59:52.026257 locksmithd[1533]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Sep 9 23:59:52.028458 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Sep 9 23:59:52.100121 containerd[1497]: time="2025-09-09T23:59:52Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Sep 9 23:59:52.100789 containerd[1497]: time="2025-09-09T23:59:52.100755161Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Sep 9 23:59:52.112168 containerd[1497]: time="2025-09-09T23:59:52.112124569Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.065µs" Sep 9 23:59:52.112223 containerd[1497]: time="2025-09-09T23:59:52.112167259Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Sep 9 23:59:52.112223 containerd[1497]: time="2025-09-09T23:59:52.112186695Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Sep 9 23:59:52.112359 containerd[1497]: time="2025-09-09T23:59:52.112339254Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Sep 9 23:59:52.112386 containerd[1497]: time="2025-09-09T23:59:52.112359114Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Sep 9 23:59:52.112409 containerd[1497]: time="2025-09-09T23:59:52.112383602Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.112433156Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.112449816Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.112738080Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.112754354Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.112766039Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: 
time="2025-09-09T23:59:52.112774021Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.112849491Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.113029584Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.113067800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.113078212Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Sep 9 23:59:52.113585 containerd[1497]: time="2025-09-09T23:59:52.113117663Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Sep 9 23:59:52.113768 containerd[1497]: time="2025-09-09T23:59:52.113331538Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Sep 9 23:59:52.113768 containerd[1497]: time="2025-09-09T23:59:52.113393047Z" level=info msg="metadata content store policy set" policy=shared Sep 9 23:59:52.119494 containerd[1497]: time="2025-09-09T23:59:52.119456119Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Sep 9 23:59:52.119551 containerd[1497]: time="2025-09-09T23:59:52.119524416Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Sep 9 23:59:52.119551 containerd[1497]: time="2025-09-09T23:59:52.119544816Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Sep 9 23:59:52.119592 containerd[1497]: time="2025-09-09T23:59:52.119557195Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Sep 9 23:59:52.119592 containerd[1497]: time="2025-09-09T23:59:52.119581760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Sep 9 23:59:52.119641 containerd[1497]: time="2025-09-09T23:59:52.119595604Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Sep 9 23:59:52.119641 containerd[1497]: time="2025-09-09T23:59:52.119608331Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Sep 9 23:59:52.119641 containerd[1497]: time="2025-09-09T23:59:52.119619823Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Sep 9 23:59:52.119641 containerd[1497]: time="2025-09-09T23:59:52.119632201Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Sep 9 23:59:52.119698 containerd[1497]: time="2025-09-09T23:59:52.119642151Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Sep 9 23:59:52.119698 containerd[1497]: time="2025-09-09T23:59:52.119652216Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Sep 9 23:59:52.119698 containerd[1497]: time="2025-09-09T23:59:52.119664672Z" level=info 
msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Sep 9 23:59:52.119810 containerd[1497]: time="2025-09-09T23:59:52.119787498Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Sep 9 23:59:52.119833 containerd[1497]: time="2025-09-09T23:59:52.119822282Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Sep 9 23:59:52.119850 containerd[1497]: time="2025-09-09T23:59:52.119838633Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Sep 9 23:59:52.119867 containerd[1497]: time="2025-09-09T23:59:52.119849161Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Sep 9 23:59:52.119867 containerd[1497]: time="2025-09-09T23:59:52.119859882Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Sep 9 23:59:52.119902 containerd[1497]: time="2025-09-09T23:59:52.119869870Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Sep 9 23:59:52.119902 containerd[1497]: time="2025-09-09T23:59:52.119882789Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Sep 9 23:59:52.119902 containerd[1497]: time="2025-09-09T23:59:52.119892777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Sep 9 23:59:52.119951 containerd[1497]: time="2025-09-09T23:59:52.119903266Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Sep 9 23:59:52.119951 containerd[1497]: time="2025-09-09T23:59:52.119913794Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Sep 9 23:59:52.119951 containerd[1497]: time="2025-09-09T23:59:52.119923936Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Sep 9 23:59:52.120119 containerd[1497]: time="2025-09-09T23:59:52.120102757Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Sep 9 23:59:52.120141 containerd[1497]: time="2025-09-09T23:59:52.120123041Z" level=info msg="Start snapshots syncer" Sep 9 23:59:52.120159 containerd[1497]: time="2025-09-09T23:59:52.120150074Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Sep 9 23:59:52.120412 containerd[1497]: time="2025-09-09T23:59:52.120374053Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Sep 9 23:59:52.120503 containerd[1497]: time="2025-09-09T23:59:52.120428697Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Sep 9 23:59:52.120522 containerd[1497]: time="2025-09-09T23:59:52.120499269Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Sep 9 23:59:52.120651 containerd[1497]: time="2025-09-09T23:59:52.120632931Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Sep 9 23:59:52.120686 containerd[1497]: time="2025-09-09T23:59:52.120672073Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Sep 9 23:59:52.120709 containerd[1497]: time="2025-09-09T23:59:52.120686998Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Sep 9 23:59:52.120709 containerd[1497]: time="2025-09-09T23:59:52.120698760Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Sep 9 23:59:52.120752 containerd[1497]: time="2025-09-09T23:59:52.120710329Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Sep 9 23:59:52.120752 containerd[1497]: time="2025-09-09T23:59:52.120721744Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Sep 9 23:59:52.120752 containerd[1497]: time="2025-09-09T23:59:52.120733197Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Sep 9 23:59:52.120799 containerd[1497]: time="2025-09-09T23:59:52.120763084Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Sep 9 23:59:52.120799 containerd[1497]: 
time="2025-09-09T23:59:52.120774730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Sep 9 23:59:52.120799 containerd[1497]: time="2025-09-09T23:59:52.120784988Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Sep 9 23:59:52.120847 containerd[1497]: time="2025-09-09T23:59:52.120815531Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:59:52.120847 containerd[1497]: time="2025-09-09T23:59:52.120829491Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Sep 9 23:59:52.120847 containerd[1497]: time="2025-09-09T23:59:52.120837936Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:59:52.120893 containerd[1497]: time="2025-09-09T23:59:52.120847191Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Sep 9 23:59:52.120893 containerd[1497]: time="2025-09-09T23:59:52.120855136Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Sep 9 23:59:52.120893 containerd[1497]: time="2025-09-09T23:59:52.120866011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Sep 9 23:59:52.121068 containerd[1497]: time="2025-09-09T23:59:52.121046258Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Sep 9 23:59:52.121204 containerd[1497]: time="2025-09-09T23:59:52.121184200Z" level=info msg="runtime interface created" Sep 9 23:59:52.121204 containerd[1497]: time="2025-09-09T23:59:52.121200590Z" level=info msg="created NRI interface" Sep 9 23:59:52.121242 containerd[1497]: time="2025-09-09T23:59:52.121211233Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Sep 9 23:59:52.121260 containerd[1497]: time="2025-09-09T23:59:52.121250530Z" level=info msg="Connect containerd service" Sep 9 23:59:52.121308 containerd[1497]: time="2025-09-09T23:59:52.121289672Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Sep 9 23:59:52.122329 containerd[1497]: time="2025-09-09T23:59:52.122301508Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 9 23:59:52.200389 containerd[1497]: time="2025-09-09T23:59:52.200317355Z" level=info msg="Start subscribing containerd event" Sep 9 23:59:52.200495 containerd[1497]: time="2025-09-09T23:59:52.200406630Z" level=info msg="Start recovering state" Sep 9 23:59:52.200515 containerd[1497]: time="2025-09-09T23:59:52.200505238Z" level=info msg="Start event monitor" Sep 9 23:59:52.200537 containerd[1497]: time="2025-09-09T23:59:52.200518774Z" level=info msg="Start cni network conf syncer for default" Sep 9 23:59:52.200537 containerd[1497]: time="2025-09-09T23:59:52.200526448Z" level=info msg="Start streaming server" Sep 9 23:59:52.200603 containerd[1497]: time="2025-09-09T23:59:52.200536552Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Sep 9 23:59:52.200603 containerd[1497]: time="2025-09-09T23:59:52.200543917Z" level=info 
msg="runtime interface starting up..." Sep 9 23:59:52.200603 containerd[1497]: time="2025-09-09T23:59:52.200550087Z" level=info msg="starting plugins..." Sep 9 23:59:52.200661 containerd[1497]: time="2025-09-09T23:59:52.200607470Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Sep 9 23:59:52.200849 containerd[1497]: time="2025-09-09T23:59:52.200825703Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Sep 9 23:59:52.200892 containerd[1497]: time="2025-09-09T23:59:52.200878728Z" level=info msg=serving... address=/run/containerd/containerd.sock Sep 9 23:59:52.201403 systemd[1]: Started containerd.service - containerd container runtime. Sep 9 23:59:52.203292 containerd[1497]: time="2025-09-09T23:59:52.203268794Z" level=info msg="containerd successfully booted in 0.103494s" Sep 9 23:59:52.267758 tar[1495]: linux-arm64/README.md Sep 9 23:59:52.291018 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Sep 9 23:59:52.517412 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Sep 9 23:59:52.539657 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Sep 9 23:59:52.544873 systemd[1]: Starting issuegen.service - Generate /run/issue... Sep 9 23:59:52.560202 systemd[1]: issuegen.service: Deactivated successfully. Sep 9 23:59:52.560450 systemd[1]: Finished issuegen.service - Generate /run/issue. Sep 9 23:59:52.563345 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Sep 9 23:59:52.589465 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Sep 9 23:59:52.594142 systemd[1]: Started getty@tty1.service - Getty on tty1. Sep 9 23:59:52.596344 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Sep 9 23:59:52.599840 systemd[1]: Reached target getty.target - Login Prompts. Sep 9 23:59:52.996694 systemd-networkd[1437]: eth0: Gained IPv6LL Sep 9 23:59:52.998364 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Sep 9 23:59:53.000592 systemd[1]: Reached target network-online.target - Network is Online. Sep 9 23:59:53.003268 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Sep 9 23:59:53.005849 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 9 23:59:53.008012 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Sep 9 23:59:53.029518 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Sep 9 23:59:53.033731 systemd[1]: coreos-metadata.service: Deactivated successfully. Sep 9 23:59:53.033959 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Sep 9 23:59:53.035644 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Sep 9 23:59:53.612192 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 9 23:59:53.613924 systemd[1]: Reached target multi-user.target - Multi-User System. Sep 9 23:59:53.615717 systemd[1]: Startup finished in 2.015s (kernel) + 5.569s (initrd) + 3.498s (userspace) = 11.083s. 
Sep 9 23:59:53.617766 (kubelet)[1610]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 9 23:59:53.993702 kubelet[1610]: E0909 23:59:53.993611 1610 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 9 23:59:53.996750 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 9 23:59:53.996880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 9 23:59:53.998685 systemd[1]: kubelet.service: Consumed 751ms CPU time, 257.4M memory peak. Sep 9 23:59:57.441697 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Sep 9 23:59:57.443155 systemd[1]: Started sshd@0-10.0.0.122:22-10.0.0.1:54670.service - OpenSSH per-connection server daemon (10.0.0.1:54670). Sep 9 23:59:57.537764 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 54670 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 9 23:59:57.539922 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:57.546510 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Sep 9 23:59:57.547663 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Sep 9 23:59:57.557816 systemd-logind[1483]: New session 1 of user core. Sep 9 23:59:57.579507 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Sep 9 23:59:57.583384 systemd[1]: Starting user@500.service - User Manager for UID 500... Sep 9 23:59:57.607175 (systemd)[1628]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Sep 9 23:59:57.610355 systemd-logind[1483]: New session c1 of user core. Sep 9 23:59:57.728352 systemd[1628]: Queued start job for default target default.target. Sep 9 23:59:57.745845 systemd[1628]: Created slice app.slice - User Application Slice. Sep 9 23:59:57.746381 systemd[1628]: Reached target paths.target - Paths. Sep 9 23:59:57.746772 systemd[1628]: Reached target timers.target - Timers. Sep 9 23:59:57.748265 systemd[1628]: Starting dbus.socket - D-Bus User Message Bus Socket... Sep 9 23:59:57.763834 systemd[1628]: Listening on dbus.socket - D-Bus User Message Bus Socket. Sep 9 23:59:57.763967 systemd[1628]: Reached target sockets.target - Sockets. Sep 9 23:59:57.764009 systemd[1628]: Reached target basic.target - Basic System. Sep 9 23:59:57.764049 systemd[1628]: Reached target default.target - Main User Target. Sep 9 23:59:57.764076 systemd[1628]: Startup finished in 145ms. Sep 9 23:59:57.764167 systemd[1]: Started user@500.service - User Manager for UID 500. Sep 9 23:59:57.770672 systemd[1]: Started session-1.scope - Session 1 of User core. Sep 9 23:59:57.833415 systemd[1]: Started sshd@1-10.0.0.122:22-10.0.0.1:54684.service - OpenSSH per-connection server daemon (10.0.0.1:54684). Sep 9 23:59:57.911288 sshd[1639]: Accepted publickey for core from 10.0.0.1 port 54684 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 9 23:59:57.912737 sshd-session[1639]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:57.919824 systemd-logind[1483]: New session 2 of user core. 
Sep 9 23:59:57.929094 systemd[1]: Started session-2.scope - Session 2 of User core. Sep 9 23:59:57.980980 sshd[1642]: Connection closed by 10.0.0.1 port 54684 Sep 9 23:59:57.981435 sshd-session[1639]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:57.993750 systemd[1]: sshd@1-10.0.0.122:22-10.0.0.1:54684.service: Deactivated successfully. Sep 9 23:59:57.995234 systemd[1]: session-2.scope: Deactivated successfully. Sep 9 23:59:57.996932 systemd-logind[1483]: Session 2 logged out. Waiting for processes to exit. Sep 9 23:59:57.998264 systemd[1]: Started sshd@2-10.0.0.122:22-10.0.0.1:54698.service - OpenSSH per-connection server daemon (10.0.0.1:54698). Sep 9 23:59:57.999671 systemd-logind[1483]: Removed session 2. Sep 9 23:59:58.057261 sshd[1648]: Accepted publickey for core from 10.0.0.1 port 54698 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 9 23:59:58.058727 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:58.063600 systemd-logind[1483]: New session 3 of user core. Sep 9 23:59:58.081770 systemd[1]: Started session-3.scope - Session 3 of User core. Sep 9 23:59:58.130872 sshd[1651]: Connection closed by 10.0.0.1 port 54698 Sep 9 23:59:58.132902 sshd-session[1648]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:58.148906 systemd[1]: sshd@2-10.0.0.122:22-10.0.0.1:54698.service: Deactivated successfully. Sep 9 23:59:58.152356 systemd[1]: session-3.scope: Deactivated successfully. Sep 9 23:59:58.153726 systemd-logind[1483]: Session 3 logged out. Waiting for processes to exit. Sep 9 23:59:58.156087 systemd-logind[1483]: Removed session 3. Sep 9 23:59:58.162513 systemd[1]: Started sshd@3-10.0.0.122:22-10.0.0.1:54708.service - OpenSSH per-connection server daemon (10.0.0.1:54708). Sep 9 23:59:58.216859 sshd[1657]: Accepted publickey for core from 10.0.0.1 port 54708 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 9 23:59:58.219114 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:58.223704 systemd-logind[1483]: New session 4 of user core. Sep 9 23:59:58.235784 systemd[1]: Started session-4.scope - Session 4 of User core. Sep 9 23:59:58.288188 sshd[1660]: Connection closed by 10.0.0.1 port 54708 Sep 9 23:59:58.288838 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:58.298457 systemd[1]: sshd@3-10.0.0.122:22-10.0.0.1:54708.service: Deactivated successfully. Sep 9 23:59:58.301949 systemd[1]: session-4.scope: Deactivated successfully. Sep 9 23:59:58.302748 systemd-logind[1483]: Session 4 logged out. Waiting for processes to exit. Sep 9 23:59:58.304782 systemd[1]: Started sshd@4-10.0.0.122:22-10.0.0.1:54718.service - OpenSSH per-connection server daemon (10.0.0.1:54718). Sep 9 23:59:58.308593 systemd-logind[1483]: Removed session 4. Sep 9 23:59:58.375200 sshd[1666]: Accepted publickey for core from 10.0.0.1 port 54718 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 9 23:59:58.376640 sshd-session[1666]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:58.388407 systemd-logind[1483]: New session 5 of user core. Sep 9 23:59:58.403935 systemd[1]: Started session-5.scope - Session 5 of User core. 
Sep 9 23:59:58.471685 sudo[1670]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Sep 9 23:59:58.472003 sudo[1670]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:59:58.493467 sudo[1670]: pam_unix(sudo:session): session closed for user root Sep 9 23:59:58.495391 sshd[1669]: Connection closed by 10.0.0.1 port 54718 Sep 9 23:59:58.496025 sshd-session[1666]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:58.513805 systemd[1]: sshd@4-10.0.0.122:22-10.0.0.1:54718.service: Deactivated successfully. Sep 9 23:59:58.517094 systemd[1]: session-5.scope: Deactivated successfully. Sep 9 23:59:58.517926 systemd-logind[1483]: Session 5 logged out. Waiting for processes to exit. Sep 9 23:59:58.521130 systemd[1]: Started sshd@5-10.0.0.122:22-10.0.0.1:54730.service - OpenSSH per-connection server daemon (10.0.0.1:54730). Sep 9 23:59:58.522884 systemd-logind[1483]: Removed session 5. Sep 9 23:59:58.588289 sshd[1676]: Accepted publickey for core from 10.0.0.1 port 54730 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 9 23:59:58.589628 sshd-session[1676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:58.594233 systemd-logind[1483]: New session 6 of user core. Sep 9 23:59:58.604790 systemd[1]: Started session-6.scope - Session 6 of User core. Sep 9 23:59:58.658214 sudo[1681]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Sep 9 23:59:58.658473 sudo[1681]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:59:58.786787 sudo[1681]: pam_unix(sudo:session): session closed for user root Sep 9 23:59:58.792273 sudo[1680]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Sep 9 23:59:58.792515 sudo[1680]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:59:58.806050 systemd[1]: Starting audit-rules.service - Load Audit Rules... Sep 9 23:59:58.868531 augenrules[1703]: No rules Sep 9 23:59:58.869484 systemd[1]: audit-rules.service: Deactivated successfully. Sep 9 23:59:58.869708 systemd[1]: Finished audit-rules.service - Load Audit Rules. Sep 9 23:59:58.871272 sudo[1680]: pam_unix(sudo:session): session closed for user root Sep 9 23:59:58.874404 sshd[1679]: Connection closed by 10.0.0.1 port 54730 Sep 9 23:59:58.875604 sshd-session[1676]: pam_unix(sshd:session): session closed for user core Sep 9 23:59:58.887068 systemd[1]: sshd@5-10.0.0.122:22-10.0.0.1:54730.service: Deactivated successfully. Sep 9 23:59:58.889296 systemd[1]: session-6.scope: Deactivated successfully. Sep 9 23:59:58.891656 systemd-logind[1483]: Session 6 logged out. Waiting for processes to exit. Sep 9 23:59:58.894895 systemd[1]: Started sshd@6-10.0.0.122:22-10.0.0.1:54744.service - OpenSSH per-connection server daemon (10.0.0.1:54744). Sep 9 23:59:58.895371 systemd-logind[1483]: Removed session 6. Sep 9 23:59:58.974988 sshd[1712]: Accepted publickey for core from 10.0.0.1 port 54744 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 9 23:59:58.977265 sshd-session[1712]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 9 23:59:58.982101 systemd-logind[1483]: New session 7 of user core. Sep 9 23:59:58.995755 systemd[1]: Started session-7.scope - Session 7 of User core. 
Sep 9 23:59:59.047651 sudo[1716]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Sep 9 23:59:59.047971 sudo[1716]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Sep 9 23:59:59.364892 systemd[1]: Starting docker.service - Docker Application Container Engine... Sep 9 23:59:59.379018 (dockerd)[1736]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Sep 9 23:59:59.595581 dockerd[1736]: time="2025-09-09T23:59:59.592123510Z" level=info msg="Starting up" Sep 9 23:59:59.595581 dockerd[1736]: time="2025-09-09T23:59:59.593249880Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Sep 9 23:59:59.607706 dockerd[1736]: time="2025-09-09T23:59:59.607655692Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Sep 9 23:59:59.648102 dockerd[1736]: time="2025-09-09T23:59:59.647527062Z" level=info msg="Loading containers: start." Sep 9 23:59:59.657608 kernel: Initializing XFRM netlink socket Sep 9 23:59:59.929830 systemd-networkd[1437]: docker0: Link UP Sep 9 23:59:59.993579 dockerd[1736]: time="2025-09-09T23:59:59.993282911Z" level=info msg="Loading containers: done." Sep 10 00:00:00.016995 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2473603673-merged.mount: Deactivated successfully. Sep 10 00:00:00.020048 dockerd[1736]: time="2025-09-10T00:00:00.019686809Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Sep 10 00:00:00.020048 dockerd[1736]: time="2025-09-10T00:00:00.019786642Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Sep 10 00:00:00.020048 dockerd[1736]: time="2025-09-10T00:00:00.019875176Z" level=info msg="Initializing buildkit" Sep 10 00:00:00.048956 dockerd[1736]: time="2025-09-10T00:00:00.048908975Z" level=info msg="Completed buildkit initialization" Sep 10 00:00:00.057329 dockerd[1736]: time="2025-09-10T00:00:00.057261241Z" level=info msg="Daemon has completed initialization" Sep 10 00:00:00.057456 dockerd[1736]: time="2025-09-10T00:00:00.057369094Z" level=info msg="API listen on /run/docker.sock" Sep 10 00:00:00.057533 systemd[1]: Started docker.service - Docker Application Container Engine. Sep 10 00:00:00.687633 containerd[1497]: time="2025-09-10T00:00:00.686977696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\"" Sep 10 00:00:01.291941 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3241972524.mount: Deactivated successfully. 
Sep 10 00:00:02.235662 containerd[1497]: time="2025-09-10T00:00:02.235616248Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:02.237454 containerd[1497]: time="2025-09-10T00:00:02.237404912Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.4: active requests=0, bytes read=27352615" Sep 10 00:00:02.238611 containerd[1497]: time="2025-09-10T00:00:02.238586009Z" level=info msg="ImageCreate event name:\"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:02.241617 containerd[1497]: time="2025-09-10T00:00:02.241554461Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:02.242667 containerd[1497]: time="2025-09-10T00:00:02.242633690Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.4\" with image id \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.4\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0d441d0d347145b3f02f20cb313239cdae86067643d7f70803fab8bac2d28876\", size \"27349413\" in 1.555605281s" Sep 10 00:00:02.242729 containerd[1497]: time="2025-09-10T00:00:02.242677512Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.4\" returns image reference \"sha256:8dd08b7ae4433dd43482755f08ee0afd6de00c6ece25a8dc5814ebb4b7978e98\"" Sep 10 00:00:02.244079 containerd[1497]: time="2025-09-10T00:00:02.244030926Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\"" Sep 10 00:00:03.342913 containerd[1497]: time="2025-09-10T00:00:03.342858885Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:03.343727 containerd[1497]: time="2025-09-10T00:00:03.343698012Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.4: active requests=0, bytes read=23536979" Sep 10 00:00:03.344403 containerd[1497]: time="2025-09-10T00:00:03.344375883Z" level=info msg="ImageCreate event name:\"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:03.348246 containerd[1497]: time="2025-09-10T00:00:03.348197909Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:03.349059 containerd[1497]: time="2025-09-10T00:00:03.349026325Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.4\" with image id \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.4\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:bd22c2af2f30a8f818568b4d5fe131098fdd38267e9e07872cfc33e8f5876bc3\", size \"25093155\" in 1.104953856s" Sep 10 00:00:03.349104 containerd[1497]: time="2025-09-10T00:00:03.349065003Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.4\" returns image reference \"sha256:4e90c11ce4b770c38b26b3401b39c25e9871474a71ecb5eaea72082e21ba587d\"" Sep 10 00:00:03.349509 
containerd[1497]: time="2025-09-10T00:00:03.349487204Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\"" Sep 10 00:00:04.223948 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Sep 10 00:00:04.225698 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:00:04.369706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:00:04.373600 (kubelet)[2022]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Sep 10 00:00:04.531256 containerd[1497]: time="2025-09-10T00:00:04.531126306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:04.535120 containerd[1497]: time="2025-09-10T00:00:04.535060328Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.4: active requests=0, bytes read=18292016" Sep 10 00:00:04.535946 containerd[1497]: time="2025-09-10T00:00:04.535902592Z" level=info msg="ImageCreate event name:\"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:04.539499 containerd[1497]: time="2025-09-10T00:00:04.539448680Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:04.540315 containerd[1497]: time="2025-09-10T00:00:04.540273908Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.4\" with image id \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.4\", repo digest \"registry.k8s.io/kube-scheduler@sha256:71533e5a960e2955a54164905e92dac516ec874a23e0bf31304db82650101a4a\", size \"19848210\" in 1.190752337s" Sep 10 00:00:04.540315 containerd[1497]: time="2025-09-10T00:00:04.540313500Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.4\" returns image reference \"sha256:10c245abf58045f1a856bebca4ed8e0abfabe4c0256d5a3f0c475fed70c8ce59\"" Sep 10 00:00:04.541041 containerd[1497]: time="2025-09-10T00:00:04.541016697Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\"" Sep 10 00:00:04.554089 kubelet[2022]: E0910 00:00:04.554025 2022 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Sep 10 00:00:04.557611 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Sep 10 00:00:04.557738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Sep 10 00:00:04.558027 systemd[1]: kubelet.service: Consumed 285ms CPU time, 108M memory peak. Sep 10 00:00:05.544524 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3519454141.mount: Deactivated successfully. 
Sep 10 00:00:05.952466 containerd[1497]: time="2025-09-10T00:00:05.952318573Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:05.953136 containerd[1497]: time="2025-09-10T00:00:05.953100792Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.4: active requests=0, bytes read=28199961" Sep 10 00:00:05.954582 containerd[1497]: time="2025-09-10T00:00:05.954497100Z" level=info msg="ImageCreate event name:\"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:05.956171 containerd[1497]: time="2025-09-10T00:00:05.956132962Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:05.956798 containerd[1497]: time="2025-09-10T00:00:05.956766010Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.4\" with image id \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\", repo tag \"registry.k8s.io/kube-proxy:v1.33.4\", repo digest \"registry.k8s.io/kube-proxy@sha256:bb04e9247da3aaeb96406b4d530a79fc865695b6807353dd1a28871df0d7f837\", size \"28198978\" in 1.415719365s" Sep 10 00:00:05.956842 containerd[1497]: time="2025-09-10T00:00:05.956804207Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.4\" returns image reference \"sha256:e19c0cda155dad39120317830ddb8b2bc22070f2c6a97973e96fb09ef504ee64\"" Sep 10 00:00:05.957285 containerd[1497]: time="2025-09-10T00:00:05.957265311Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Sep 10 00:00:06.516447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494423218.mount: Deactivated successfully. 
Sep 10 00:00:07.262369 containerd[1497]: time="2025-09-10T00:00:07.262311304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:07.263379 containerd[1497]: time="2025-09-10T00:00:07.263343519Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Sep 10 00:00:07.264600 containerd[1497]: time="2025-09-10T00:00:07.264406225Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:07.268144 containerd[1497]: time="2025-09-10T00:00:07.268070983Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:07.269368 containerd[1497]: time="2025-09-10T00:00:07.269325913Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.312026485s" Sep 10 00:00:07.269368 containerd[1497]: time="2025-09-10T00:00:07.269366038Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Sep 10 00:00:07.269923 containerd[1497]: time="2025-09-10T00:00:07.269888096Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Sep 10 00:00:07.694809 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount206183500.mount: Deactivated successfully. 
Sep 10 00:00:07.704215 containerd[1497]: time="2025-09-10T00:00:07.704143005Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:00:07.705781 containerd[1497]: time="2025-09-10T00:00:07.705723669Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Sep 10 00:00:07.707194 containerd[1497]: time="2025-09-10T00:00:07.707101402Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:00:07.712502 containerd[1497]: time="2025-09-10T00:00:07.712462541Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Sep 10 00:00:07.713524 containerd[1497]: time="2025-09-10T00:00:07.713162494Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 443.231128ms" Sep 10 00:00:07.713524 containerd[1497]: time="2025-09-10T00:00:07.713199712Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Sep 10 00:00:07.713793 containerd[1497]: time="2025-09-10T00:00:07.713662380Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Sep 10 00:00:08.159971 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744822288.mount: Deactivated successfully. 
Sep 10 00:00:09.705592 containerd[1497]: time="2025-09-10T00:00:09.705517360Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:09.707275 containerd[1497]: time="2025-09-10T00:00:09.707237994Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69465297" Sep 10 00:00:09.708372 containerd[1497]: time="2025-09-10T00:00:09.708342641Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:09.712476 containerd[1497]: time="2025-09-10T00:00:09.712441022Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:09.713675 containerd[1497]: time="2025-09-10T00:00:09.713644581Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 1.999952855s" Sep 10 00:00:09.713740 containerd[1497]: time="2025-09-10T00:00:09.713677140Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Sep 10 00:00:14.724153 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Sep 10 00:00:14.726113 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:00:14.819072 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 00:00:14.819148 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 00:00:14.819369 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:00:14.824221 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:00:14.846987 systemd[1]: Reload requested from client PID 2185 ('systemctl') (unit session-7.scope)... Sep 10 00:00:14.847001 systemd[1]: Reloading... Sep 10 00:00:14.929036 zram_generator::config[2230]: No configuration found. Sep 10 00:00:15.193611 systemd[1]: Reloading finished in 346 ms. Sep 10 00:00:15.253021 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Sep 10 00:00:15.253098 systemd[1]: kubelet.service: Failed with result 'signal'. Sep 10 00:00:15.253381 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:00:15.253429 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95M memory peak. Sep 10 00:00:15.254923 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:00:15.383987 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:00:15.388996 (kubelet)[2272]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:00:15.422185 kubelet[2272]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 00:00:15.422185 kubelet[2272]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 00:00:15.422185 kubelet[2272]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:00:15.422530 kubelet[2272]: I0910 00:00:15.422235 2272 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:00:16.676292 kubelet[2272]: I0910 00:00:16.676230 2272 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 00:00:16.676292 kubelet[2272]: I0910 00:00:16.676264 2272 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:00:16.676659 kubelet[2272]: I0910 00:00:16.676490 2272 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 00:00:16.700308 kubelet[2272]: E0910 00:00:16.700261 2272 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.122:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Sep 10 00:00:16.702449 kubelet[2272]: I0910 00:00:16.702417 2272 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:00:16.710494 kubelet[2272]: I0910 00:00:16.710437 2272 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 00:00:16.714587 kubelet[2272]: I0910 00:00:16.714244 2272 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Sep 10 00:00:16.714587 kubelet[2272]: I0910 00:00:16.714549 2272 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:00:16.714742 kubelet[2272]: I0910 00:00:16.714595 2272 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:00:16.714824 kubelet[2272]: I0910 00:00:16.714806 2272 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:00:16.714824 kubelet[2272]: I0910 00:00:16.714814 2272 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 00:00:16.715020 kubelet[2272]: I0910 00:00:16.715003 2272 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:00:16.717636 kubelet[2272]: I0910 00:00:16.717551 2272 kubelet.go:480] "Attempting to sync node with API server" Sep 10 00:00:16.717636 kubelet[2272]: I0910 00:00:16.717590 2272 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:00:16.717636 kubelet[2272]: I0910 00:00:16.717615 2272 kubelet.go:386] "Adding apiserver pod source" Sep 10 00:00:16.718758 kubelet[2272]: I0910 00:00:16.718662 2272 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:00:16.719976 kubelet[2272]: I0910 00:00:16.719794 2272 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 00:00:16.720352 kubelet[2272]: E0910 00:00:16.720308 2272 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.122:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Sep 10 00:00:16.720352 kubelet[2272]: E0910 00:00:16.720309 2272 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://10.0.0.122:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Sep 10 00:00:16.720517 kubelet[2272]: I0910 00:00:16.720497 2272 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 00:00:16.720967 kubelet[2272]: W0910 00:00:16.720769 2272 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Sep 10 00:00:16.723530 kubelet[2272]: I0910 00:00:16.723511 2272 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 00:00:16.723609 kubelet[2272]: I0910 00:00:16.723558 2272 server.go:1289] "Started kubelet" Sep 10 00:00:16.726243 kubelet[2272]: I0910 00:00:16.725982 2272 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:00:16.727114 kubelet[2272]: I0910 00:00:16.727059 2272 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:00:16.731050 kubelet[2272]: I0910 00:00:16.731007 2272 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:00:16.731154 kubelet[2272]: I0910 00:00:16.731121 2272 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:00:16.732166 kubelet[2272]: I0910 00:00:16.732126 2272 server.go:317] "Adding debug handlers to kubelet server" Sep 10 00:00:16.733264 kubelet[2272]: I0910 00:00:16.733181 2272 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:00:16.734111 kubelet[2272]: I0910 00:00:16.734088 2272 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 00:00:16.734276 kubelet[2272]: I0910 00:00:16.734204 2272 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 00:00:16.734276 kubelet[2272]: E0910 00:00:16.734082 2272 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:00:16.734276 kubelet[2272]: I0910 00:00:16.734268 2272 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:00:16.734578 kubelet[2272]: E0910 00:00:16.734518 2272 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:00:16.734619 kubelet[2272]: E0910 00:00:16.732467 2272 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.122:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.122:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1863c2c1aee7c62b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-09-10 00:00:16.723527211 +0000 UTC m=+1.331156091,LastTimestamp:2025-09-10 00:00:16.723527211 +0000 UTC m=+1.331156091,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Sep 10 00:00:16.734763 kubelet[2272]: E0910 00:00:16.734632 2272 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.122:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Sep 10 00:00:16.734939 kubelet[2272]: I0910 00:00:16.734914 2272 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:00:16.735455 kubelet[2272]: E0910 00:00:16.735419 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="200ms" Sep 10 00:00:16.736811 kubelet[2272]: I0910 00:00:16.736776 2272 factory.go:223] Registration of the containerd container factory successfully Sep 10 00:00:16.737144 kubelet[2272]: I0910 00:00:16.737066 2272 factory.go:223] Registration of the systemd container factory successfully Sep 10 00:00:16.747549 kubelet[2272]: I0910 00:00:16.747528 2272 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 00:00:16.747915 kubelet[2272]: I0910 00:00:16.747690 2272 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 00:00:16.747915 kubelet[2272]: I0910 00:00:16.747711 2272 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:00:16.819872 kubelet[2272]: I0910 00:00:16.819843 2272 policy_none.go:49] "None policy: Start" Sep 10 00:00:16.820079 kubelet[2272]: I0910 00:00:16.820064 2272 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 00:00:16.820165 kubelet[2272]: I0910 00:00:16.820152 2272 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:00:16.824505 kubelet[2272]: I0910 00:00:16.824466 2272 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 00:00:16.825734 kubelet[2272]: I0910 00:00:16.825604 2272 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Sep 10 00:00:16.825734 kubelet[2272]: I0910 00:00:16.825630 2272 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 00:00:16.825734 kubelet[2272]: I0910 00:00:16.825651 2272 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Sep 10 00:00:16.825734 kubelet[2272]: I0910 00:00:16.825659 2272 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 00:00:16.825734 kubelet[2272]: E0910 00:00:16.825704 2272 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:00:16.827992 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Sep 10 00:00:16.829630 kubelet[2272]: E0910 00:00:16.829197 2272 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.122:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.122:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Sep 10 00:00:16.835164 kubelet[2272]: E0910 00:00:16.835122 2272 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:00:16.840903 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Sep 10 00:00:16.844381 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Sep 10 00:00:16.859545 kubelet[2272]: E0910 00:00:16.859506 2272 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 00:00:16.859767 kubelet[2272]: I0910 00:00:16.859740 2272 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:00:16.859830 kubelet[2272]: I0910 00:00:16.859762 2272 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:00:16.860519 kubelet[2272]: I0910 00:00:16.860489 2272 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:00:16.861195 kubelet[2272]: E0910 00:00:16.861160 2272 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 00:00:16.861270 kubelet[2272]: E0910 00:00:16.861209 2272 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Sep 10 00:00:16.935469 kubelet[2272]: I0910 00:00:16.935204 2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7bc36333dc35fe688f672fcc644b68b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7bc36333dc35fe688f672fcc644b68b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:16.935469 kubelet[2272]: I0910 00:00:16.935255 2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7bc36333dc35fe688f672fcc644b68b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7bc36333dc35fe688f672fcc644b68b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:16.935469 kubelet[2272]: I0910 00:00:16.935325 2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a7bc36333dc35fe688f672fcc644b68b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a7bc36333dc35fe688f672fcc644b68b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:16.935469 kubelet[2272]: I0910 00:00:16.935346 2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:16.935469 kubelet[2272]: I0910 00:00:16.935370 2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:16.935806 kubelet[2272]: I0910 00:00:16.935386 2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:16.935806 kubelet[2272]: I0910 00:00:16.935409 2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:00:16.935806 kubelet[2272]: I0910 00:00:16.935433 2272 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:16.935806 kubelet[2272]: I0910 00:00:16.935449 2272 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:16.936411 kubelet[2272]: E0910 00:00:16.936354 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="400ms" Sep 10 00:00:16.937188 systemd[1]: Created slice kubepods-burstable-poda7bc36333dc35fe688f672fcc644b68b.slice - libcontainer container kubepods-burstable-poda7bc36333dc35fe688f672fcc644b68b.slice. Sep 10 00:00:16.961787 kubelet[2272]: I0910 00:00:16.961759 2272 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:00:16.962233 kubelet[2272]: E0910 00:00:16.962189 2272 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Sep 10 00:00:16.964000 kubelet[2272]: E0910 00:00:16.963829 2272 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:00:16.966617 systemd[1]: Created slice kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice - libcontainer container kubepods-burstable-pod8de7187202bee21b84740a213836f615.slice. Sep 10 00:00:16.968543 kubelet[2272]: E0910 00:00:16.968482 2272 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:00:16.986241 systemd[1]: Created slice kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice - libcontainer container kubepods-burstable-podd75e6f6978d9f275ea19380916c9cccd.slice. 
Sep 10 00:00:16.988153 kubelet[2272]: E0910 00:00:16.988106 2272 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:00:17.163883 kubelet[2272]: I0910 00:00:17.163836 2272 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:00:17.164302 kubelet[2272]: E0910 00:00:17.164256 2272 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Sep 10 00:00:17.265764 containerd[1497]: time="2025-09-10T00:00:17.265715205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a7bc36333dc35fe688f672fcc644b68b,Namespace:kube-system,Attempt:0,}" Sep 10 00:00:17.269445 containerd[1497]: time="2025-09-10T00:00:17.269318127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,}" Sep 10 00:00:17.290978 containerd[1497]: time="2025-09-10T00:00:17.290864070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,}" Sep 10 00:00:17.296295 containerd[1497]: time="2025-09-10T00:00:17.296147008Z" level=info msg="connecting to shim d9c7db793ee57f92eef1a5a146744b83711fe825ad233e7236b80517baabd918" address="unix:///run/containerd/s/f6bd4eb5032e733c05fe27d1e0f85436d5622aaa59c2ce3f8a7ae48dd430d151" namespace=k8s.io protocol=ttrpc version=3 Sep 10 00:00:17.299053 containerd[1497]: time="2025-09-10T00:00:17.299009834Z" level=info msg="connecting to shim ba52d7b3c5ba93546944b70902e9c5ff3c2e57bd78d47acf5c8ffabe18468081" address="unix:///run/containerd/s/52904b7b5a3353431d1f875f6a54c596097fcb5221710764989c5cf9dedb105f" namespace=k8s.io protocol=ttrpc version=3 Sep 10 00:00:17.328112 systemd[1]: Started cri-containerd-d9c7db793ee57f92eef1a5a146744b83711fe825ad233e7236b80517baabd918.scope - libcontainer container d9c7db793ee57f92eef1a5a146744b83711fe825ad233e7236b80517baabd918. Sep 10 00:00:17.335158 systemd[1]: Started cri-containerd-ba52d7b3c5ba93546944b70902e9c5ff3c2e57bd78d47acf5c8ffabe18468081.scope - libcontainer container ba52d7b3c5ba93546944b70902e9c5ff3c2e57bd78d47acf5c8ffabe18468081. 
Sep 10 00:00:17.337353 kubelet[2272]: E0910 00:00:17.337310 2272 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.122:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.122:6443: connect: connection refused" interval="800ms" Sep 10 00:00:17.337881 containerd[1497]: time="2025-09-10T00:00:17.337829372Z" level=info msg="connecting to shim dace05996be02c37c2726d67f6af744ab6b23c7f75e16da6e2de877c92f78284" address="unix:///run/containerd/s/42430acd2f87784b32a9e31af8a32a3982904c1d998c371b17019c53c573e388" namespace=k8s.io protocol=ttrpc version=3 Sep 10 00:00:17.373058 containerd[1497]: time="2025-09-10T00:00:17.372992897Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a7bc36333dc35fe688f672fcc644b68b,Namespace:kube-system,Attempt:0,} returns sandbox id \"d9c7db793ee57f92eef1a5a146744b83711fe825ad233e7236b80517baabd918\"" Sep 10 00:00:17.374770 systemd[1]: Started cri-containerd-dace05996be02c37c2726d67f6af744ab6b23c7f75e16da6e2de877c92f78284.scope - libcontainer container dace05996be02c37c2726d67f6af744ab6b23c7f75e16da6e2de877c92f78284. Sep 10 00:00:17.380268 containerd[1497]: time="2025-09-10T00:00:17.380226185Z" level=info msg="CreateContainer within sandbox \"d9c7db793ee57f92eef1a5a146744b83711fe825ad233e7236b80517baabd918\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Sep 10 00:00:17.381093 containerd[1497]: time="2025-09-10T00:00:17.381039667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:8de7187202bee21b84740a213836f615,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba52d7b3c5ba93546944b70902e9c5ff3c2e57bd78d47acf5c8ffabe18468081\"" Sep 10 00:00:17.385718 containerd[1497]: time="2025-09-10T00:00:17.385683101Z" level=info msg="CreateContainer within sandbox \"ba52d7b3c5ba93546944b70902e9c5ff3c2e57bd78d47acf5c8ffabe18468081\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Sep 10 00:00:17.390901 containerd[1497]: time="2025-09-10T00:00:17.390601185Z" level=info msg="Container d1d5b668e6634280a6b307a648dfcf9a95a77dfc0bf900d539314e1bc2d15f4e: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:17.409154 containerd[1497]: time="2025-09-10T00:00:17.408997387Z" level=info msg="CreateContainer within sandbox \"d9c7db793ee57f92eef1a5a146744b83711fe825ad233e7236b80517baabd918\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d1d5b668e6634280a6b307a648dfcf9a95a77dfc0bf900d539314e1bc2d15f4e\"" Sep 10 00:00:17.410210 containerd[1497]: time="2025-09-10T00:00:17.410183713Z" level=info msg="Container f5645116185303f36bf6de25d13292a9c1495a0152e80b53815a4cbcaa73cae8: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:17.410717 containerd[1497]: time="2025-09-10T00:00:17.410679760Z" level=info msg="StartContainer for \"d1d5b668e6634280a6b307a648dfcf9a95a77dfc0bf900d539314e1bc2d15f4e\"" Sep 10 00:00:17.411832 containerd[1497]: time="2025-09-10T00:00:17.411801649Z" level=info msg="connecting to shim d1d5b668e6634280a6b307a648dfcf9a95a77dfc0bf900d539314e1bc2d15f4e" address="unix:///run/containerd/s/f6bd4eb5032e733c05fe27d1e0f85436d5622aaa59c2ce3f8a7ae48dd430d151" protocol=ttrpc version=3 Sep 10 00:00:17.420434 containerd[1497]: time="2025-09-10T00:00:17.420386732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:d75e6f6978d9f275ea19380916c9cccd,Namespace:kube-system,Attempt:0,} returns 
sandbox id \"dace05996be02c37c2726d67f6af744ab6b23c7f75e16da6e2de877c92f78284\"" Sep 10 00:00:17.420968 containerd[1497]: time="2025-09-10T00:00:17.420942703Z" level=info msg="CreateContainer within sandbox \"ba52d7b3c5ba93546944b70902e9c5ff3c2e57bd78d47acf5c8ffabe18468081\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f5645116185303f36bf6de25d13292a9c1495a0152e80b53815a4cbcaa73cae8\"" Sep 10 00:00:17.421723 containerd[1497]: time="2025-09-10T00:00:17.421699896Z" level=info msg="StartContainer for \"f5645116185303f36bf6de25d13292a9c1495a0152e80b53815a4cbcaa73cae8\"" Sep 10 00:00:17.422865 containerd[1497]: time="2025-09-10T00:00:17.422837565Z" level=info msg="connecting to shim f5645116185303f36bf6de25d13292a9c1495a0152e80b53815a4cbcaa73cae8" address="unix:///run/containerd/s/52904b7b5a3353431d1f875f6a54c596097fcb5221710764989c5cf9dedb105f" protocol=ttrpc version=3 Sep 10 00:00:17.427551 containerd[1497]: time="2025-09-10T00:00:17.427503170Z" level=info msg="CreateContainer within sandbox \"dace05996be02c37c2726d67f6af744ab6b23c7f75e16da6e2de877c92f78284\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Sep 10 00:00:17.432821 systemd[1]: Started cri-containerd-d1d5b668e6634280a6b307a648dfcf9a95a77dfc0bf900d539314e1bc2d15f4e.scope - libcontainer container d1d5b668e6634280a6b307a648dfcf9a95a77dfc0bf900d539314e1bc2d15f4e. Sep 10 00:00:17.436971 containerd[1497]: time="2025-09-10T00:00:17.436925106Z" level=info msg="Container 9730625fe154e37cb8e43e7363e41da6af6907b6ec0f1788f22e014c6642f612: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:17.445575 containerd[1497]: time="2025-09-10T00:00:17.445524811Z" level=info msg="CreateContainer within sandbox \"dace05996be02c37c2726d67f6af744ab6b23c7f75e16da6e2de877c92f78284\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"9730625fe154e37cb8e43e7363e41da6af6907b6ec0f1788f22e014c6642f612\"" Sep 10 00:00:17.446857 containerd[1497]: time="2025-09-10T00:00:17.446831543Z" level=info msg="StartContainer for \"9730625fe154e37cb8e43e7363e41da6af6907b6ec0f1788f22e014c6642f612\"" Sep 10 00:00:17.448145 containerd[1497]: time="2025-09-10T00:00:17.448102801Z" level=info msg="connecting to shim 9730625fe154e37cb8e43e7363e41da6af6907b6ec0f1788f22e014c6642f612" address="unix:///run/containerd/s/42430acd2f87784b32a9e31af8a32a3982904c1d998c371b17019c53c573e388" protocol=ttrpc version=3 Sep 10 00:00:17.449733 systemd[1]: Started cri-containerd-f5645116185303f36bf6de25d13292a9c1495a0152e80b53815a4cbcaa73cae8.scope - libcontainer container f5645116185303f36bf6de25d13292a9c1495a0152e80b53815a4cbcaa73cae8. Sep 10 00:00:17.470064 systemd[1]: Started cri-containerd-9730625fe154e37cb8e43e7363e41da6af6907b6ec0f1788f22e014c6642f612.scope - libcontainer container 9730625fe154e37cb8e43e7363e41da6af6907b6ec0f1788f22e014c6642f612. 
Sep 10 00:00:17.485791 containerd[1497]: time="2025-09-10T00:00:17.485749955Z" level=info msg="StartContainer for \"d1d5b668e6634280a6b307a648dfcf9a95a77dfc0bf900d539314e1bc2d15f4e\" returns successfully" Sep 10 00:00:17.505223 containerd[1497]: time="2025-09-10T00:00:17.505177841Z" level=info msg="StartContainer for \"f5645116185303f36bf6de25d13292a9c1495a0152e80b53815a4cbcaa73cae8\" returns successfully" Sep 10 00:00:17.528722 containerd[1497]: time="2025-09-10T00:00:17.527737451Z" level=info msg="StartContainer for \"9730625fe154e37cb8e43e7363e41da6af6907b6ec0f1788f22e014c6642f612\" returns successfully" Sep 10 00:00:17.566025 kubelet[2272]: I0910 00:00:17.565939 2272 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:00:17.567027 kubelet[2272]: E0910 00:00:17.566986 2272 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.122:6443/api/v1/nodes\": dial tcp 10.0.0.122:6443: connect: connection refused" node="localhost" Sep 10 00:00:17.834486 kubelet[2272]: E0910 00:00:17.834389 2272 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:00:17.838433 kubelet[2272]: E0910 00:00:17.838402 2272 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:00:17.840017 kubelet[2272]: E0910 00:00:17.839993 2272 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:00:18.369118 kubelet[2272]: I0910 00:00:18.369084 2272 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:00:18.842873 kubelet[2272]: E0910 00:00:18.842843 2272 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:00:18.843654 kubelet[2272]: E0910 00:00:18.843629 2272 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:00:18.917836 kubelet[2272]: E0910 00:00:18.917805 2272 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Sep 10 00:00:18.960253 kubelet[2272]: E0910 00:00:18.960219 2272 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Sep 10 00:00:19.014909 kubelet[2272]: I0910 00:00:19.014871 2272 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 00:00:19.015013 kubelet[2272]: E0910 00:00:19.014947 2272 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Sep 10 00:00:19.035647 kubelet[2272]: I0910 00:00:19.035619 2272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:19.092603 kubelet[2272]: E0910 00:00:19.092558 2272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:19.092603 kubelet[2272]: I0910 00:00:19.092604 2272 kubelet.go:3309] "Creating a mirror pod for static pod" 
pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:19.094648 kubelet[2272]: E0910 00:00:19.094573 2272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:19.094648 kubelet[2272]: I0910 00:00:19.094598 2272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:00:19.098237 kubelet[2272]: E0910 00:00:19.098208 2272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 10 00:00:19.720094 kubelet[2272]: I0910 00:00:19.720042 2272 apiserver.go:52] "Watching apiserver" Sep 10 00:00:19.735294 kubelet[2272]: I0910 00:00:19.735249 2272 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 00:00:19.842724 kubelet[2272]: I0910 00:00:19.842674 2272 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:00:19.844867 kubelet[2272]: E0910 00:00:19.844832 2272 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Sep 10 00:00:21.351802 systemd[1]: Reload requested from client PID 2557 ('systemctl') (unit session-7.scope)... Sep 10 00:00:21.351816 systemd[1]: Reloading... Sep 10 00:00:21.444609 zram_generator::config[2600]: No configuration found. Sep 10 00:00:21.616737 systemd[1]: Reloading finished in 264 ms. Sep 10 00:00:21.638079 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:00:21.655666 systemd[1]: kubelet.service: Deactivated successfully. Sep 10 00:00:21.656039 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:00:21.656125 systemd[1]: kubelet.service: Consumed 1.719s CPU time, 129M memory peak. Sep 10 00:00:21.658823 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Sep 10 00:00:21.794907 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Sep 10 00:00:21.810348 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Sep 10 00:00:21.858468 kubelet[2642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Sep 10 00:00:21.858468 kubelet[2642]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Sep 10 00:00:21.858468 kubelet[2642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Sep 10 00:00:21.858822 kubelet[2642]: I0910 00:00:21.858506 2642 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Sep 10 00:00:21.865122 kubelet[2642]: I0910 00:00:21.865081 2642 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Sep 10 00:00:21.865122 kubelet[2642]: I0910 00:00:21.865109 2642 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Sep 10 00:00:21.865315 kubelet[2642]: I0910 00:00:21.865297 2642 server.go:956] "Client rotation is on, will bootstrap in background" Sep 10 00:00:21.866550 kubelet[2642]: I0910 00:00:21.866532 2642 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Sep 10 00:00:21.869232 kubelet[2642]: I0910 00:00:21.868713 2642 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Sep 10 00:00:21.875269 kubelet[2642]: I0910 00:00:21.875247 2642 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Sep 10 00:00:21.877781 kubelet[2642]: I0910 00:00:21.877761 2642 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Sep 10 00:00:21.877982 kubelet[2642]: I0910 00:00:21.877957 2642 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Sep 10 00:00:21.878119 kubelet[2642]: I0910 00:00:21.877981 2642 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Sep 10 00:00:21.878192 kubelet[2642]: I0910 00:00:21.878125 2642 topology_manager.go:138] "Creating topology manager with none policy" Sep 10 00:00:21.878192 kubelet[2642]: I0910 00:00:21.878134 2642 container_manager_linux.go:303] "Creating device plugin manager" Sep 10 00:00:21.878192 kubelet[2642]: I0910 00:00:21.878176 2642 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:00:21.878318 kubelet[2642]: I0910 
00:00:21.878308 2642 kubelet.go:480] "Attempting to sync node with API server" Sep 10 00:00:21.878345 kubelet[2642]: I0910 00:00:21.878321 2642 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Sep 10 00:00:21.878376 kubelet[2642]: I0910 00:00:21.878347 2642 kubelet.go:386] "Adding apiserver pod source" Sep 10 00:00:21.878376 kubelet[2642]: I0910 00:00:21.878360 2642 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Sep 10 00:00:21.879149 kubelet[2642]: I0910 00:00:21.879115 2642 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Sep 10 00:00:21.881724 kubelet[2642]: I0910 00:00:21.881641 2642 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Sep 10 00:00:21.884915 kubelet[2642]: I0910 00:00:21.884894 2642 watchdog_linux.go:99] "Systemd watchdog is not enabled" Sep 10 00:00:21.885033 kubelet[2642]: I0910 00:00:21.884944 2642 server.go:1289] "Started kubelet" Sep 10 00:00:21.888198 kubelet[2642]: I0910 00:00:21.888169 2642 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Sep 10 00:00:21.888373 kubelet[2642]: I0910 00:00:21.888340 2642 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Sep 10 00:00:21.889189 kubelet[2642]: I0910 00:00:21.889160 2642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Sep 10 00:00:21.889189 kubelet[2642]: I0910 00:00:21.889146 2642 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Sep 10 00:00:21.889441 kubelet[2642]: I0910 00:00:21.889416 2642 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Sep 10 00:00:21.889942 kubelet[2642]: I0910 00:00:21.889911 2642 server.go:317] "Adding debug handlers to kubelet server" Sep 10 00:00:21.898820 kubelet[2642]: E0910 00:00:21.898789 2642 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Sep 10 00:00:21.899750 kubelet[2642]: I0910 00:00:21.899235 2642 volume_manager.go:297] "Starting Kubelet Volume Manager" Sep 10 00:00:21.899750 kubelet[2642]: I0910 00:00:21.899349 2642 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Sep 10 00:00:21.899750 kubelet[2642]: I0910 00:00:21.899412 2642 factory.go:223] Registration of the systemd container factory successfully Sep 10 00:00:21.899750 kubelet[2642]: I0910 00:00:21.899444 2642 reconciler.go:26] "Reconciler: start to sync state" Sep 10 00:00:21.899750 kubelet[2642]: I0910 00:00:21.899499 2642 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Sep 10 00:00:21.899750 kubelet[2642]: E0910 00:00:21.899706 2642 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Sep 10 00:00:21.901663 kubelet[2642]: I0910 00:00:21.901622 2642 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Sep 10 00:00:21.902438 kubelet[2642]: I0910 00:00:21.902410 2642 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Sep 10 00:00:21.902438 kubelet[2642]: I0910 00:00:21.902431 2642 status_manager.go:230] "Starting to sync pod status with apiserver" Sep 10 00:00:21.902503 kubelet[2642]: I0910 00:00:21.902448 2642 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Sep 10 00:00:21.902503 kubelet[2642]: I0910 00:00:21.902455 2642 kubelet.go:2436] "Starting kubelet main sync loop" Sep 10 00:00:21.902503 kubelet[2642]: E0910 00:00:21.902487 2642 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Sep 10 00:00:21.905393 kubelet[2642]: I0910 00:00:21.905025 2642 factory.go:223] Registration of the containerd container factory successfully Sep 10 00:00:21.943437 kubelet[2642]: I0910 00:00:21.943262 2642 cpu_manager.go:221] "Starting CPU manager" policy="none" Sep 10 00:00:21.943437 kubelet[2642]: I0910 00:00:21.943277 2642 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Sep 10 00:00:21.943437 kubelet[2642]: I0910 00:00:21.943303 2642 state_mem.go:36] "Initialized new in-memory state store" Sep 10 00:00:21.943604 kubelet[2642]: I0910 00:00:21.943517 2642 state_mem.go:88] "Updated default CPUSet" cpuSet="" Sep 10 00:00:21.943604 kubelet[2642]: I0910 00:00:21.943532 2642 state_mem.go:96] "Updated CPUSet assignments" assignments={} Sep 10 00:00:21.943604 kubelet[2642]: I0910 00:00:21.943562 2642 policy_none.go:49] "None policy: Start" Sep 10 00:00:21.943604 kubelet[2642]: I0910 00:00:21.943584 2642 memory_manager.go:186] "Starting memorymanager" policy="None" Sep 10 00:00:21.943604 kubelet[2642]: I0910 00:00:21.943594 2642 state_mem.go:35] "Initializing new in-memory state store" Sep 10 00:00:21.943742 kubelet[2642]: I0910 00:00:21.943723 2642 state_mem.go:75] "Updated machine memory state" Sep 10 00:00:21.948534 kubelet[2642]: E0910 00:00:21.948422 2642 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Sep 10 00:00:21.948762 kubelet[2642]: I0910 00:00:21.948747 2642 eviction_manager.go:189] "Eviction manager: starting control loop" Sep 10 00:00:21.949076 kubelet[2642]: I0910 00:00:21.949043 2642 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Sep 10 00:00:21.949390 kubelet[2642]: I0910 00:00:21.949373 2642 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Sep 10 00:00:21.951858 kubelet[2642]: E0910 00:00:21.951826 2642 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Sep 10 00:00:22.003427 kubelet[2642]: I0910 00:00:22.003382 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:22.003554 kubelet[2642]: I0910 00:00:22.003395 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:22.003716 kubelet[2642]: I0910 00:00:22.003703 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:00:22.050828 kubelet[2642]: I0910 00:00:22.050805 2642 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Sep 10 00:00:22.060641 kubelet[2642]: I0910 00:00:22.060529 2642 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Sep 10 00:00:22.060753 kubelet[2642]: I0910 00:00:22.060673 2642 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Sep 10 00:00:22.200078 kubelet[2642]: I0910 00:00:22.199798 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:22.200078 kubelet[2642]: I0910 00:00:22.199843 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:22.200078 kubelet[2642]: I0910 00:00:22.199860 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:22.200078 kubelet[2642]: I0910 00:00:22.199880 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d75e6f6978d9f275ea19380916c9cccd-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"d75e6f6978d9f275ea19380916c9cccd\") " pod="kube-system/kube-scheduler-localhost" Sep 10 00:00:22.200078 kubelet[2642]: I0910 00:00:22.199901 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a7bc36333dc35fe688f672fcc644b68b-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7bc36333dc35fe688f672fcc644b68b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:22.200306 kubelet[2642]: I0910 00:00:22.199917 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a7bc36333dc35fe688f672fcc644b68b-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a7bc36333dc35fe688f672fcc644b68b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:22.200306 kubelet[2642]: I0910 00:00:22.199931 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a7bc36333dc35fe688f672fcc644b68b-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a7bc36333dc35fe688f672fcc644b68b\") " pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:22.200306 kubelet[2642]: I0910 00:00:22.199946 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:22.200306 kubelet[2642]: I0910 00:00:22.199967 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8de7187202bee21b84740a213836f615-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"8de7187202bee21b84740a213836f615\") " pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:22.338846 sudo[2678]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Sep 10 00:00:22.339102 sudo[2678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Sep 10 00:00:22.662102 sudo[2678]: pam_unix(sudo:session): session closed for user root Sep 10 00:00:22.879136 kubelet[2642]: I0910 00:00:22.879076 2642 apiserver.go:52] "Watching apiserver" Sep 10 00:00:22.900306 kubelet[2642]: I0910 00:00:22.900248 2642 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Sep 10 00:00:22.927261 kubelet[2642]: I0910 00:00:22.926957 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Sep 10 00:00:22.927520 kubelet[2642]: I0910 00:00:22.927364 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:22.928517 kubelet[2642]: I0910 00:00:22.928411 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:22.978267 kubelet[2642]: E0910 00:00:22.978052 2642 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Sep 10 00:00:22.979836 kubelet[2642]: E0910 00:00:22.979763 2642 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Sep 10 00:00:22.981009 kubelet[2642]: E0910 00:00:22.980940 2642 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Sep 10 00:00:23.014778 kubelet[2642]: I0910 00:00:23.014632 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.014615297 podStartE2EDuration="1.014615297s" podCreationTimestamp="2025-09-10 00:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:00:22.979242647 +0000 UTC m=+1.162584124" watchObservedRunningTime="2025-09-10 00:00:23.014615297 +0000 UTC m=+1.197956774" Sep 10 00:00:23.093399 kubelet[2642]: I0910 00:00:23.092160 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.0921440040000001 
podStartE2EDuration="1.092144004s" podCreationTimestamp="2025-09-10 00:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:00:23.015773594 +0000 UTC m=+1.199115111" watchObservedRunningTime="2025-09-10 00:00:23.092144004 +0000 UTC m=+1.275485481" Sep 10 00:00:23.122588 kubelet[2642]: I0910 00:00:23.122471 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.122454335 podStartE2EDuration="1.122454335s" podCreationTimestamp="2025-09-10 00:00:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:00:23.092443072 +0000 UTC m=+1.275784549" watchObservedRunningTime="2025-09-10 00:00:23.122454335 +0000 UTC m=+1.305795812" Sep 10 00:00:25.212377 sudo[1716]: pam_unix(sudo:session): session closed for user root Sep 10 00:00:25.213661 sshd[1715]: Connection closed by 10.0.0.1 port 54744 Sep 10 00:00:25.214136 sshd-session[1712]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:25.217041 systemd[1]: sshd@6-10.0.0.122:22-10.0.0.1:54744.service: Deactivated successfully. Sep 10 00:00:25.219793 systemd[1]: session-7.scope: Deactivated successfully. Sep 10 00:00:25.219975 systemd[1]: session-7.scope: Consumed 7.862s CPU time, 260.9M memory peak. Sep 10 00:00:25.222336 systemd-logind[1483]: Session 7 logged out. Waiting for processes to exit. Sep 10 00:00:25.223359 systemd-logind[1483]: Removed session 7. Sep 10 00:00:28.038401 kubelet[2642]: I0910 00:00:28.038369 2642 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Sep 10 00:00:28.039215 containerd[1497]: time="2025-09-10T00:00:28.039183553Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Sep 10 00:00:28.039442 kubelet[2642]: I0910 00:00:28.039339 2642 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Sep 10 00:00:29.157708 systemd[1]: Created slice kubepods-besteffort-pod2d807972_a928_41d5_acaa_94c81f11b7fb.slice - libcontainer container kubepods-besteffort-pod2d807972_a928_41d5_acaa_94c81f11b7fb.slice. Sep 10 00:00:29.172000 systemd[1]: Created slice kubepods-burstable-poda03cb76b_be61_4004_96df_9b45274da63d.slice - libcontainer container kubepods-burstable-poda03cb76b_be61_4004_96df_9b45274da63d.slice. 
Sep 10 00:00:29.249668 kubelet[2642]: I0910 00:00:29.249615 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-hostproc\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.249973 kubelet[2642]: I0910 00:00:29.249681 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8vs6\" (UniqueName: \"kubernetes.io/projected/66851fa4-7356-4f6a-b72d-b7034fa912d9-kube-api-access-x8vs6\") pod \"cilium-operator-6c4d7847fc-4tvb7\" (UID: \"66851fa4-7356-4f6a-b72d-b7034fa912d9\") " pod="kube-system/cilium-operator-6c4d7847fc-4tvb7" Sep 10 00:00:29.249973 kubelet[2642]: I0910 00:00:29.249703 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a03cb76b-be61-4004-96df-9b45274da63d-clustermesh-secrets\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.249973 kubelet[2642]: I0910 00:00:29.249718 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-host-proc-sys-net\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.249973 kubelet[2642]: I0910 00:00:29.249735 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a03cb76b-be61-4004-96df-9b45274da63d-hubble-tls\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.249973 kubelet[2642]: I0910 00:00:29.249749 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2d807972-a928-41d5-acaa-94c81f11b7fb-xtables-lock\") pod \"kube-proxy-2zjkp\" (UID: \"2d807972-a928-41d5-acaa-94c81f11b7fb\") " pod="kube-system/kube-proxy-2zjkp" Sep 10 00:00:29.250101 kubelet[2642]: I0910 00:00:29.249765 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-bpf-maps\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.250007 systemd[1]: Created slice kubepods-besteffort-pod66851fa4_7356_4f6a_b72d_b7034fa912d9.slice - libcontainer container kubepods-besteffort-pod66851fa4_7356_4f6a_b72d_b7034fa912d9.slice. 
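[editor's note] The slice names systemd reports here are derived from the pod's QoS class and UID: dashes in the UID become underscores and the result is wrapped as kubepods-<qos>-pod<uid>.slice, so UID 2d807972-a928-41d5-acaa-94c81f11b7fb yields kubepods-besteffort-pod2d807972_a928_41d5_acaa_94c81f11b7fb.slice. A sketch of that visible mapping, not the kubelet's actual cgroup-naming code:

def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Build the systemd slice name seen in the log from a pod's QoS class and UID."""
    escaped_uid = pod_uid.replace("-", "_")
    return f"kubepods-{qos_class}-pod{escaped_uid}.slice"

# Matches the besteffort slice created for kube-proxy-2zjkp's UID above.
print(pod_slice_name("besteffort", "2d807972-a928-41d5-acaa-94c81f11b7fb"))
# Matches the burstable slice created for the cilium-9zv6r pod UID.
print(pod_slice_name("burstable", "a03cb76b-be61-4004-96df-9b45274da63d"))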
Sep 10 00:00:29.251850 kubelet[2642]: I0910 00:00:29.251803 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a03cb76b-be61-4004-96df-9b45274da63d-cilium-config-path\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.251952 kubelet[2642]: I0910 00:00:29.251863 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66851fa4-7356-4f6a-b72d-b7034fa912d9-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4tvb7\" (UID: \"66851fa4-7356-4f6a-b72d-b7034fa912d9\") " pod="kube-system/cilium-operator-6c4d7847fc-4tvb7" Sep 10 00:00:29.251952 kubelet[2642]: I0910 00:00:29.251925 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2d807972-a928-41d5-acaa-94c81f11b7fb-lib-modules\") pod \"kube-proxy-2zjkp\" (UID: \"2d807972-a928-41d5-acaa-94c81f11b7fb\") " pod="kube-system/kube-proxy-2zjkp" Sep 10 00:00:29.252001 kubelet[2642]: I0910 00:00:29.251955 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-etc-cni-netd\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.256288 kubelet[2642]: I0910 00:00:29.252082 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-xtables-lock\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.256288 kubelet[2642]: I0910 00:00:29.252114 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-host-proc-sys-kernel\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.256288 kubelet[2642]: I0910 00:00:29.252143 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-lib-modules\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.256288 kubelet[2642]: I0910 00:00:29.252220 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbch7\" (UniqueName: \"kubernetes.io/projected/a03cb76b-be61-4004-96df-9b45274da63d-kube-api-access-vbch7\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.256288 kubelet[2642]: I0910 00:00:29.252237 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cilium-run\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.256288 kubelet[2642]: I0910 00:00:29.252271 2642 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2d807972-a928-41d5-acaa-94c81f11b7fb-kube-proxy\") pod \"kube-proxy-2zjkp\" (UID: \"2d807972-a928-41d5-acaa-94c81f11b7fb\") " pod="kube-system/kube-proxy-2zjkp" Sep 10 00:00:29.256482 kubelet[2642]: I0910 00:00:29.252307 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jmdp5\" (UniqueName: \"kubernetes.io/projected/2d807972-a928-41d5-acaa-94c81f11b7fb-kube-api-access-jmdp5\") pod \"kube-proxy-2zjkp\" (UID: \"2d807972-a928-41d5-acaa-94c81f11b7fb\") " pod="kube-system/kube-proxy-2zjkp" Sep 10 00:00:29.256482 kubelet[2642]: I0910 00:00:29.252328 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cilium-cgroup\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.256482 kubelet[2642]: I0910 00:00:29.252349 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cni-path\") pod \"cilium-9zv6r\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " pod="kube-system/cilium-9zv6r" Sep 10 00:00:29.470196 containerd[1497]: time="2025-09-10T00:00:29.469944299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2zjkp,Uid:2d807972-a928-41d5-acaa-94c81f11b7fb,Namespace:kube-system,Attempt:0,}" Sep 10 00:00:29.475070 containerd[1497]: time="2025-09-10T00:00:29.475024976Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zv6r,Uid:a03cb76b-be61-4004-96df-9b45274da63d,Namespace:kube-system,Attempt:0,}" Sep 10 00:00:29.494409 containerd[1497]: time="2025-09-10T00:00:29.494368984Z" level=info msg="connecting to shim 25dc0ea9c2c77977366728a18d782f253ebf95006b36de831db40d5e64d4c40e" address="unix:///run/containerd/s/6232e55e8e9d5872f485c5714d08fc7318013817bcf80185d9052f49e7249534" namespace=k8s.io protocol=ttrpc version=3 Sep 10 00:00:29.504769 containerd[1497]: time="2025-09-10T00:00:29.504725358Z" level=info msg="connecting to shim 079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1" address="unix:///run/containerd/s/6e915e0d2d7b0602b44b318c69e9599decdede44a7dce2ed2bb5de1f0616fa39" namespace=k8s.io protocol=ttrpc version=3 Sep 10 00:00:29.520750 systemd[1]: Started cri-containerd-25dc0ea9c2c77977366728a18d782f253ebf95006b36de831db40d5e64d4c40e.scope - libcontainer container 25dc0ea9c2c77977366728a18d782f253ebf95006b36de831db40d5e64d4c40e. Sep 10 00:00:29.545751 systemd[1]: Started cri-containerd-079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1.scope - libcontainer container 079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1. 
Sep 10 00:00:29.559100 containerd[1497]: time="2025-09-10T00:00:29.559050166Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4tvb7,Uid:66851fa4-7356-4f6a-b72d-b7034fa912d9,Namespace:kube-system,Attempt:0,}" Sep 10 00:00:29.578127 containerd[1497]: time="2025-09-10T00:00:29.578011655Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2zjkp,Uid:2d807972-a928-41d5-acaa-94c81f11b7fb,Namespace:kube-system,Attempt:0,} returns sandbox id \"25dc0ea9c2c77977366728a18d782f253ebf95006b36de831db40d5e64d4c40e\"" Sep 10 00:00:29.585115 containerd[1497]: time="2025-09-10T00:00:29.585072454Z" level=info msg="CreateContainer within sandbox \"25dc0ea9c2c77977366728a18d782f253ebf95006b36de831db40d5e64d4c40e\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Sep 10 00:00:29.590610 containerd[1497]: time="2025-09-10T00:00:29.590507807Z" level=info msg="connecting to shim 8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209" address="unix:///run/containerd/s/4b572282566f6d6d466ed9c517f3af4b844ca988796d2601f89b4cf7af9774f3" namespace=k8s.io protocol=ttrpc version=3 Sep 10 00:00:29.601189 containerd[1497]: time="2025-09-10T00:00:29.601039438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-9zv6r,Uid:a03cb76b-be61-4004-96df-9b45274da63d,Namespace:kube-system,Attempt:0,} returns sandbox id \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\"" Sep 10 00:00:29.602091 containerd[1497]: time="2025-09-10T00:00:29.602053262Z" level=info msg="Container 9ce5ae38cad8278d7b99f05f250d1aa5ffec69148da353ceb593e4a04e94e378: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:29.603529 containerd[1497]: time="2025-09-10T00:00:29.603496889Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Sep 10 00:00:29.611140 containerd[1497]: time="2025-09-10T00:00:29.611098822Z" level=info msg="CreateContainer within sandbox \"25dc0ea9c2c77977366728a18d782f253ebf95006b36de831db40d5e64d4c40e\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9ce5ae38cad8278d7b99f05f250d1aa5ffec69148da353ceb593e4a04e94e378\"" Sep 10 00:00:29.612632 containerd[1497]: time="2025-09-10T00:00:29.612606215Z" level=info msg="StartContainer for \"9ce5ae38cad8278d7b99f05f250d1aa5ffec69148da353ceb593e4a04e94e378\"" Sep 10 00:00:29.614054 containerd[1497]: time="2025-09-10T00:00:29.614012358Z" level=info msg="connecting to shim 9ce5ae38cad8278d7b99f05f250d1aa5ffec69148da353ceb593e4a04e94e378" address="unix:///run/containerd/s/6232e55e8e9d5872f485c5714d08fc7318013817bcf80185d9052f49e7249534" protocol=ttrpc version=3 Sep 10 00:00:29.619768 systemd[1]: Started cri-containerd-8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209.scope - libcontainer container 8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209. Sep 10 00:00:29.641799 systemd[1]: Started cri-containerd-9ce5ae38cad8278d7b99f05f250d1aa5ffec69148da353ceb593e4a04e94e378.scope - libcontainer container 9ce5ae38cad8278d7b99f05f250d1aa5ffec69148da353ceb593e4a04e94e378. 
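[editor's note] The PullImage reference above carries a repository, a tag, and a pinned digest in one string: quay.io/cilium/cilium:v1.12.5@sha256:06ce…. A hedged helper that splits such a reference with plain string handling (the containerd resolver does more validation than this sketch):

def split_image_ref(ref: str):
    """Split 'repo:tag@sha256:digest' into its parts; tag and digest may be absent."""
    digest = None
    if "@" in ref:
        ref, digest = ref.split("@", 1)
    repo, tag = ref, None
    # A ':' after the last '/' separates the tag from the repository path.
    last_colon = ref.rfind(":")
    if last_colon > ref.rfind("/"):
        repo, tag = ref[:last_colon], ref[last_colon + 1:]
    return repo, tag, digest

repo, tag, digest = split_image_ref(
    "quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
)
print(repo)    # quay.io/cilium/cilium
print(tag)     # v1.12.5
print(digest)  # sha256:06ce2b0a...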
Sep 10 00:00:29.656915 containerd[1497]: time="2025-09-10T00:00:29.656870719Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4tvb7,Uid:66851fa4-7356-4f6a-b72d-b7034fa912d9,Namespace:kube-system,Attempt:0,} returns sandbox id \"8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209\"" Sep 10 00:00:29.686864 containerd[1497]: time="2025-09-10T00:00:29.686818527Z" level=info msg="StartContainer for \"9ce5ae38cad8278d7b99f05f250d1aa5ffec69148da353ceb593e4a04e94e378\" returns successfully" Sep 10 00:00:31.750264 kubelet[2642]: I0910 00:00:31.749769 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2zjkp" podStartSLOduration=2.749751283 podStartE2EDuration="2.749751283s" podCreationTimestamp="2025-09-10 00:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:00:29.952414952 +0000 UTC m=+8.135756429" watchObservedRunningTime="2025-09-10 00:00:31.749751283 +0000 UTC m=+9.933092760" Sep 10 00:00:37.539687 update_engine[1484]: I20250910 00:00:37.539604 1484 update_attempter.cc:509] Updating boot flags... Sep 10 00:00:42.587484 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount926286855.mount: Deactivated successfully. Sep 10 00:00:43.933863 containerd[1497]: time="2025-09-10T00:00:43.933806979Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:43.934660 containerd[1497]: time="2025-09-10T00:00:43.934474292Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Sep 10 00:00:43.935785 containerd[1497]: time="2025-09-10T00:00:43.935749636Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:43.938596 containerd[1497]: time="2025-09-10T00:00:43.938547696Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 14.334981322s" Sep 10 00:00:43.938596 containerd[1497]: time="2025-09-10T00:00:43.938595579Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Sep 10 00:00:43.939687 containerd[1497]: time="2025-09-10T00:00:43.939656032Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Sep 10 00:00:43.946442 containerd[1497]: time="2025-09-10T00:00:43.945991430Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:00:43.954892 containerd[1497]: time="2025-09-10T00:00:43.954854794Z" level=info msg="Container 52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e: 
CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:43.960670 containerd[1497]: time="2025-09-10T00:00:43.960630724Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\"" Sep 10 00:00:43.961160 containerd[1497]: time="2025-09-10T00:00:43.961133909Z" level=info msg="StartContainer for \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\"" Sep 10 00:00:43.962221 containerd[1497]: time="2025-09-10T00:00:43.962193402Z" level=info msg="connecting to shim 52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e" address="unix:///run/containerd/s/6e915e0d2d7b0602b44b318c69e9599decdede44a7dce2ed2bb5de1f0616fa39" protocol=ttrpc version=3 Sep 10 00:00:44.014762 systemd[1]: Started cri-containerd-52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e.scope - libcontainer container 52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e. Sep 10 00:00:44.040702 containerd[1497]: time="2025-09-10T00:00:44.040659170Z" level=info msg="StartContainer for \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\" returns successfully" Sep 10 00:00:44.054602 systemd[1]: cri-containerd-52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e.scope: Deactivated successfully. Sep 10 00:00:44.091060 containerd[1497]: time="2025-09-10T00:00:44.090970782Z" level=info msg="received exit event container_id:\"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\" id:\"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\" pid:3082 exited_at:{seconds:1757462444 nanos:84786965}" Sep 10 00:00:44.091198 containerd[1497]: time="2025-09-10T00:00:44.091077267Z" level=info msg="TaskExit event in podsandbox handler container_id:\"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\" id:\"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\" pid:3082 exited_at:{seconds:1757462444 nanos:84786965}" Sep 10 00:00:44.952820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e-rootfs.mount: Deactivated successfully. 
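[editor's note] The exit events report the container's end time as a protobuf-style {seconds, nanos} pair; 1757462444 seconds plus 84786965 nanoseconds is 2025-09-10 00:00:44.084 UTC, consistent with the surrounding timestamps. A one-function stdlib conversion:

from datetime import datetime, timezone

def exited_at_to_utc(seconds: int, nanos: int) -> datetime:
    """Convert the {seconds, nanos} pair from a TaskExit/exit event into a UTC datetime."""
    return datetime.fromtimestamp(seconds + nanos / 1e9, tz=timezone.utc)

# Values from the exit event for container 52bb49f0... above.
print(exited_at_to_utc(1757462444, 84786965))  # 2025-09-10 00:00:44.084787+00:00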
Sep 10 00:00:45.006673 containerd[1497]: time="2025-09-10T00:00:45.006618747Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:00:45.017949 containerd[1497]: time="2025-09-10T00:00:45.017899104Z" level=info msg="Container d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:45.033975 containerd[1497]: time="2025-09-10T00:00:45.033915759Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\"" Sep 10 00:00:45.034427 containerd[1497]: time="2025-09-10T00:00:45.034403301Z" level=info msg="StartContainer for \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\"" Sep 10 00:00:45.035605 containerd[1497]: time="2025-09-10T00:00:45.035433669Z" level=info msg="connecting to shim d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc" address="unix:///run/containerd/s/6e915e0d2d7b0602b44b318c69e9599decdede44a7dce2ed2bb5de1f0616fa39" protocol=ttrpc version=3 Sep 10 00:00:45.053759 systemd[1]: Started cri-containerd-d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc.scope - libcontainer container d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc. Sep 10 00:00:45.087833 containerd[1497]: time="2025-09-10T00:00:45.087788631Z" level=info msg="StartContainer for \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\" returns successfully" Sep 10 00:00:45.100114 systemd[1]: systemd-sysctl.service: Deactivated successfully. Sep 10 00:00:45.100381 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:00:45.102168 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:00:45.104629 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Sep 10 00:00:45.106313 containerd[1497]: time="2025-09-10T00:00:45.106165154Z" level=info msg="received exit event container_id:\"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\" id:\"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\" pid:3130 exited_at:{seconds:1757462445 nanos:105855219}" Sep 10 00:00:45.106224 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Sep 10 00:00:45.106692 systemd[1]: cri-containerd-d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc.scope: Deactivated successfully. Sep 10 00:00:45.106874 containerd[1497]: time="2025-09-10T00:00:45.106752221Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\" id:\"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\" pid:3130 exited_at:{seconds:1757462445 nanos:105855219}" Sep 10 00:00:45.145705 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Sep 10 00:00:45.953225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc-rootfs.mount: Deactivated successfully. 
Sep 10 00:00:46.009828 containerd[1497]: time="2025-09-10T00:00:46.008876515Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:00:46.033983 containerd[1497]: time="2025-09-10T00:00:46.030360139Z" level=info msg="Container 6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:46.045616 containerd[1497]: time="2025-09-10T00:00:46.045046945Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\"" Sep 10 00:00:46.046845 containerd[1497]: time="2025-09-10T00:00:46.046800302Z" level=info msg="StartContainer for \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\"" Sep 10 00:00:46.050960 containerd[1497]: time="2025-09-10T00:00:46.050905162Z" level=info msg="connecting to shim 6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87" address="unix:///run/containerd/s/6e915e0d2d7b0602b44b318c69e9599decdede44a7dce2ed2bb5de1f0616fa39" protocol=ttrpc version=3 Sep 10 00:00:46.072808 systemd[1]: Started cri-containerd-6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87.scope - libcontainer container 6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87. Sep 10 00:00:46.116053 containerd[1497]: time="2025-09-10T00:00:46.115969741Z" level=info msg="StartContainer for \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\" returns successfully" Sep 10 00:00:46.118152 systemd[1]: cri-containerd-6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87.scope: Deactivated successfully. Sep 10 00:00:46.127649 containerd[1497]: time="2025-09-10T00:00:46.127587212Z" level=info msg="received exit event container_id:\"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\" id:\"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\" pid:3181 exited_at:{seconds:1757462446 nanos:127330961}" Sep 10 00:00:46.127784 containerd[1497]: time="2025-09-10T00:00:46.127714018Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\" id:\"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\" pid:3181 exited_at:{seconds:1757462446 nanos:127330961}" Sep 10 00:00:46.147580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87-rootfs.mount: Deactivated successfully. 
Sep 10 00:00:47.012265 containerd[1497]: time="2025-09-10T00:00:47.012202270Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:00:47.032558 containerd[1497]: time="2025-09-10T00:00:47.031502883Z" level=info msg="Container 792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:47.040773 containerd[1497]: time="2025-09-10T00:00:47.040724431Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\"" Sep 10 00:00:47.042032 containerd[1497]: time="2025-09-10T00:00:47.041668831Z" level=info msg="StartContainer for \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\"" Sep 10 00:00:47.044894 containerd[1497]: time="2025-09-10T00:00:47.043582872Z" level=info msg="connecting to shim 792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab" address="unix:///run/containerd/s/6e915e0d2d7b0602b44b318c69e9599decdede44a7dce2ed2bb5de1f0616fa39" protocol=ttrpc version=3 Sep 10 00:00:47.082782 systemd[1]: Started cri-containerd-792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab.scope - libcontainer container 792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab. Sep 10 00:00:47.120906 systemd[1]: cri-containerd-792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab.scope: Deactivated successfully. Sep 10 00:00:47.122405 containerd[1497]: time="2025-09-10T00:00:47.122355071Z" level=info msg="TaskExit event in podsandbox handler container_id:\"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\" id:\"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\" pid:3225 exited_at:{seconds:1757462447 nanos:121414391}" Sep 10 00:00:47.131046 containerd[1497]: time="2025-09-10T00:00:47.130979955Z" level=info msg="received exit event container_id:\"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\" id:\"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\" pid:3225 exited_at:{seconds:1757462447 nanos:121414391}" Sep 10 00:00:47.138598 containerd[1497]: time="2025-09-10T00:00:47.138442069Z" level=info msg="StartContainer for \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\" returns successfully" Sep 10 00:00:47.804418 containerd[1497]: time="2025-09-10T00:00:47.804365729Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:47.805155 containerd[1497]: time="2025-09-10T00:00:47.805122241Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Sep 10 00:00:47.806225 containerd[1497]: time="2025-09-10T00:00:47.805917995Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Sep 10 00:00:47.807361 containerd[1497]: time="2025-09-10T00:00:47.807323254Z" level=info msg="Pulled image 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 3.867628379s" Sep 10 00:00:47.807475 containerd[1497]: time="2025-09-10T00:00:47.807444099Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Sep 10 00:00:47.813426 containerd[1497]: time="2025-09-10T00:00:47.813374869Z" level=info msg="CreateContainer within sandbox \"8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Sep 10 00:00:47.819153 containerd[1497]: time="2025-09-10T00:00:47.819105630Z" level=info msg="Container 9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:47.824379 containerd[1497]: time="2025-09-10T00:00:47.824326250Z" level=info msg="CreateContainer within sandbox \"8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\"" Sep 10 00:00:47.824866 containerd[1497]: time="2025-09-10T00:00:47.824809871Z" level=info msg="StartContainer for \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\"" Sep 10 00:00:47.826080 containerd[1497]: time="2025-09-10T00:00:47.826029842Z" level=info msg="connecting to shim 9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1" address="unix:///run/containerd/s/4b572282566f6d6d466ed9c517f3af4b844ca988796d2601f89b4cf7af9774f3" protocol=ttrpc version=3 Sep 10 00:00:47.846771 systemd[1]: Started cri-containerd-9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1.scope - libcontainer container 9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1. Sep 10 00:00:47.931463 containerd[1497]: time="2025-09-10T00:00:47.931424483Z" level=info msg="StartContainer for \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" returns successfully" Sep 10 00:00:48.023996 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab-rootfs.mount: Deactivated successfully. Sep 10 00:00:48.027488 containerd[1497]: time="2025-09-10T00:00:48.027314920Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:00:48.049558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3934441684.mount: Deactivated successfully. 
Sep 10 00:00:48.054236 containerd[1497]: time="2025-09-10T00:00:48.047852910Z" level=info msg="Container ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:48.067682 containerd[1497]: time="2025-09-10T00:00:48.066672311Z" level=info msg="CreateContainer within sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\"" Sep 10 00:00:48.068664 containerd[1497]: time="2025-09-10T00:00:48.067937043Z" level=info msg="StartContainer for \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\"" Sep 10 00:00:48.068751 kubelet[2642]: I0910 00:00:48.068007 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4tvb7" podStartSLOduration=0.917989407 podStartE2EDuration="19.067987405s" podCreationTimestamp="2025-09-10 00:00:29 +0000 UTC" firstStartedPulling="2025-09-10 00:00:29.658167171 +0000 UTC m=+7.841508648" lastFinishedPulling="2025-09-10 00:00:47.808165169 +0000 UTC m=+25.991506646" observedRunningTime="2025-09-10 00:00:48.066121969 +0000 UTC m=+26.249463446" watchObservedRunningTime="2025-09-10 00:00:48.067987405 +0000 UTC m=+26.251328882" Sep 10 00:00:48.070528 containerd[1497]: time="2025-09-10T00:00:48.070472545Z" level=info msg="connecting to shim ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb" address="unix:///run/containerd/s/6e915e0d2d7b0602b44b318c69e9599decdede44a7dce2ed2bb5de1f0616fa39" protocol=ttrpc version=3 Sep 10 00:00:48.115840 systemd[1]: Started cri-containerd-ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb.scope - libcontainer container ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb. Sep 10 00:00:48.155698 containerd[1497]: time="2025-09-10T00:00:48.155656910Z" level=info msg="StartContainer for \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" returns successfully" Sep 10 00:00:48.290236 containerd[1497]: time="2025-09-10T00:00:48.290178910Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" id:\"85adfec46b9b38255f36f168179a26aa65e355f4cd97c0f77ebf3648ddbcc2f0\" pid:3337 exited_at:{seconds:1757462448 nanos:289754613}" Sep 10 00:00:48.341925 kubelet[2642]: I0910 00:00:48.341816 2642 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Sep 10 00:00:48.391595 systemd[1]: Created slice kubepods-burstable-pod0543ea50_1525_4279_acd5_60f83e5f0c54.slice - libcontainer container kubepods-burstable-pod0543ea50_1525_4279_acd5_60f83e5f0c54.slice. Sep 10 00:00:48.398684 systemd[1]: Created slice kubepods-burstable-podd01a6dc8_9639_4ac2_9925_8ddac74d9950.slice - libcontainer container kubepods-burstable-podd01a6dc8_9639_4ac2_9925_8ddac74d9950.slice. 
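(Editor's note, not part of the log.) The pod_startup_latency_tracker entry above for cilium-operator-6c4d7847fc-4tvb7 is internally consistent: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration appears to be that figure minus the image-pull window (lastFinishedPulling - firstStartedPulling). A quick check with the logged values:

```python
# Editor-added check of the kubelet startup-latency figures logged above;
# all timestamps fall in hour 00:00 UTC on 2025-09-10 and are copied from
# the log. The SLO duration appears to exclude the image-pull window.
def secs(hms: str) -> float:
    h, m, s = hms.split(":")
    return int(h) * 3600 + int(m) * 60 + float(s)

created            = secs("00:00:29")            # podCreationTimestamp
first_pull_started = secs("00:00:29.658167171")  # firstStartedPulling
last_pull_finished = secs("00:00:47.808165169")  # lastFinishedPulling
observed_running   = secs("00:00:48.067987405")  # watchObservedRunningTime

e2e = observed_running - created                       # 19.067987405 s
slo = e2e - (last_pull_finished - first_pull_started)  # 0.917989407 s
print(f"E2E {e2e:.9f}s, SLO {slo:.9f}s")  # matches the logged durations
```

For the coredns pods reported later, firstStartedPulling/lastFinishedPulling are the zero time (no pull was needed), so their SLO and E2E durations coincide.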
Sep 10 00:00:48.498364 kubelet[2642]: I0910 00:00:48.498315 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8wz98\" (UniqueName: \"kubernetes.io/projected/0543ea50-1525-4279-acd5-60f83e5f0c54-kube-api-access-8wz98\") pod \"coredns-674b8bbfcf-txjwb\" (UID: \"0543ea50-1525-4279-acd5-60f83e5f0c54\") " pod="kube-system/coredns-674b8bbfcf-txjwb" Sep 10 00:00:48.498364 kubelet[2642]: I0910 00:00:48.498365 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqznb\" (UniqueName: \"kubernetes.io/projected/d01a6dc8-9639-4ac2-9925-8ddac74d9950-kube-api-access-fqznb\") pod \"coredns-674b8bbfcf-vc6h2\" (UID: \"d01a6dc8-9639-4ac2-9925-8ddac74d9950\") " pod="kube-system/coredns-674b8bbfcf-vc6h2" Sep 10 00:00:48.498555 kubelet[2642]: I0910 00:00:48.498470 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d01a6dc8-9639-4ac2-9925-8ddac74d9950-config-volume\") pod \"coredns-674b8bbfcf-vc6h2\" (UID: \"d01a6dc8-9639-4ac2-9925-8ddac74d9950\") " pod="kube-system/coredns-674b8bbfcf-vc6h2" Sep 10 00:00:48.498555 kubelet[2642]: I0910 00:00:48.498518 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0543ea50-1525-4279-acd5-60f83e5f0c54-config-volume\") pod \"coredns-674b8bbfcf-txjwb\" (UID: \"0543ea50-1525-4279-acd5-60f83e5f0c54\") " pod="kube-system/coredns-674b8bbfcf-txjwb" Sep 10 00:00:48.697471 containerd[1497]: time="2025-09-10T00:00:48.697347616Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txjwb,Uid:0543ea50-1525-4279-acd5-60f83e5f0c54,Namespace:kube-system,Attempt:0,}" Sep 10 00:00:48.703698 containerd[1497]: time="2025-09-10T00:00:48.703394340Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vc6h2,Uid:d01a6dc8-9639-4ac2-9925-8ddac74d9950,Namespace:kube-system,Attempt:0,}" Sep 10 00:00:49.065848 kubelet[2642]: I0910 00:00:49.065754 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-9zv6r" podStartSLOduration=5.729509805 podStartE2EDuration="20.065734292s" podCreationTimestamp="2025-09-10 00:00:29 +0000 UTC" firstStartedPulling="2025-09-10 00:00:29.603104689 +0000 UTC m=+7.786446126" lastFinishedPulling="2025-09-10 00:00:43.939329136 +0000 UTC m=+22.122670613" observedRunningTime="2025-09-10 00:00:49.065082426 +0000 UTC m=+27.248423903" watchObservedRunningTime="2025-09-10 00:00:49.065734292 +0000 UTC m=+27.249075769" Sep 10 00:00:51.955297 systemd-networkd[1437]: cilium_host: Link UP Sep 10 00:00:51.955419 systemd-networkd[1437]: cilium_net: Link UP Sep 10 00:00:51.955534 systemd-networkd[1437]: cilium_net: Gained carrier Sep 10 00:00:51.955661 systemd-networkd[1437]: cilium_host: Gained carrier Sep 10 00:00:51.960833 systemd-networkd[1437]: cilium_host: Gained IPv6LL Sep 10 00:00:52.049080 systemd-networkd[1437]: cilium_vxlan: Link UP Sep 10 00:00:52.049087 systemd-networkd[1437]: cilium_vxlan: Gained carrier Sep 10 00:00:52.340631 kernel: NET: Registered PF_ALG protocol family Sep 10 00:00:52.388957 systemd-networkd[1437]: cilium_net: Gained IPv6LL Sep 10 00:00:52.787846 systemd[1]: Started sshd@7-10.0.0.122:22-10.0.0.1:32776.service - OpenSSH per-connection server daemon (10.0.0.1:32776). 
Sep 10 00:00:52.855075 sshd[3655]: Accepted publickey for core from 10.0.0.1 port 32776 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:00:52.856843 sshd-session[3655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:52.862376 systemd-logind[1483]: New session 8 of user core. Sep 10 00:00:52.869645 systemd[1]: Started session-8.scope - Session 8 of User core. Sep 10 00:00:53.020343 sshd[3693]: Connection closed by 10.0.0.1 port 32776 Sep 10 00:00:53.020897 sshd-session[3655]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:53.024747 systemd[1]: sshd@7-10.0.0.122:22-10.0.0.1:32776.service: Deactivated successfully. Sep 10 00:00:53.027436 systemd[1]: session-8.scope: Deactivated successfully. Sep 10 00:00:53.029982 systemd-logind[1483]: Session 8 logged out. Waiting for processes to exit. Sep 10 00:00:53.031326 systemd-logind[1483]: Removed session 8. Sep 10 00:00:53.079659 systemd-networkd[1437]: lxc_health: Link UP Sep 10 00:00:53.082704 systemd-networkd[1437]: lxc_health: Gained carrier Sep 10 00:00:53.220739 systemd-networkd[1437]: cilium_vxlan: Gained IPv6LL Sep 10 00:00:53.264317 systemd-networkd[1437]: lxcd6ae5d7e4d40: Link UP Sep 10 00:00:53.265603 kernel: eth0: renamed from tmp29a46 Sep 10 00:00:53.269330 systemd-networkd[1437]: lxc3079e64e06ca: Link UP Sep 10 00:00:53.274598 kernel: eth0: renamed from tmp5d599 Sep 10 00:00:53.275274 systemd-networkd[1437]: lxcd6ae5d7e4d40: Gained carrier Sep 10 00:00:53.276674 systemd-networkd[1437]: lxc3079e64e06ca: Gained carrier Sep 10 00:00:54.309916 systemd-networkd[1437]: lxc3079e64e06ca: Gained IPv6LL Sep 10 00:00:54.564742 systemd-networkd[1437]: lxcd6ae5d7e4d40: Gained IPv6LL Sep 10 00:00:54.884867 systemd-networkd[1437]: lxc_health: Gained IPv6LL Sep 10 00:00:57.159652 containerd[1497]: time="2025-09-10T00:00:57.159601996Z" level=info msg="connecting to shim 29a46599ed155b5040e4c3d6e0db9b61865649c123aea81a004623b92ecf52b5" address="unix:///run/containerd/s/98b1ed45801ce1c791ae3beb70c4896fd5c9a8815fa64cdc490b1d24243e6144" namespace=k8s.io protocol=ttrpc version=3 Sep 10 00:00:57.160595 containerd[1497]: time="2025-09-10T00:00:57.160514183Z" level=info msg="connecting to shim 5d5999803e8a0c4f682e91e2a72074e7cbffec40dbc63900ce44542618e9a7f4" address="unix:///run/containerd/s/c0cf8188a2f1130f08c4e2eb559e5e20ced943c3c3f544e9bf6d951096aa25c2" namespace=k8s.io protocol=ttrpc version=3 Sep 10 00:00:57.185801 systemd[1]: Started cri-containerd-5d5999803e8a0c4f682e91e2a72074e7cbffec40dbc63900ce44542618e9a7f4.scope - libcontainer container 5d5999803e8a0c4f682e91e2a72074e7cbffec40dbc63900ce44542618e9a7f4. Sep 10 00:00:57.188830 systemd[1]: Started cri-containerd-29a46599ed155b5040e4c3d6e0db9b61865649c123aea81a004623b92ecf52b5.scope - libcontainer container 29a46599ed155b5040e4c3d6e0db9b61865649c123aea81a004623b92ecf52b5. 
Sep 10 00:00:57.201620 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:00:57.212813 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Sep 10 00:00:57.235244 containerd[1497]: time="2025-09-10T00:00:57.235097082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vc6h2,Uid:d01a6dc8-9639-4ac2-9925-8ddac74d9950,Namespace:kube-system,Attempt:0,} returns sandbox id \"5d5999803e8a0c4f682e91e2a72074e7cbffec40dbc63900ce44542618e9a7f4\"" Sep 10 00:00:57.242658 containerd[1497]: time="2025-09-10T00:00:57.242554300Z" level=info msg="CreateContainer within sandbox \"5d5999803e8a0c4f682e91e2a72074e7cbffec40dbc63900ce44542618e9a7f4\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:00:57.244851 containerd[1497]: time="2025-09-10T00:00:57.244792205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-txjwb,Uid:0543ea50-1525-4279-acd5-60f83e5f0c54,Namespace:kube-system,Attempt:0,} returns sandbox id \"29a46599ed155b5040e4c3d6e0db9b61865649c123aea81a004623b92ecf52b5\"" Sep 10 00:00:57.251070 containerd[1497]: time="2025-09-10T00:00:57.251030068Z" level=info msg="CreateContainer within sandbox \"29a46599ed155b5040e4c3d6e0db9b61865649c123aea81a004623b92ecf52b5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Sep 10 00:00:57.261761 containerd[1497]: time="2025-09-10T00:00:57.261679859Z" level=info msg="Container 31a695bb86b447b6ed6aec35b3b59f23db15a6202108f0f128cf5db005e4af02: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:57.270592 containerd[1497]: time="2025-09-10T00:00:57.270470916Z" level=info msg="Container 51d5c94c92bdf533d2b720a55538806b73c1b3f96f82dd9001ad1ab70d2dad0c: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:00:57.274091 containerd[1497]: time="2025-09-10T00:00:57.274048220Z" level=info msg="CreateContainer within sandbox \"5d5999803e8a0c4f682e91e2a72074e7cbffec40dbc63900ce44542618e9a7f4\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"31a695bb86b447b6ed6aec35b3b59f23db15a6202108f0f128cf5db005e4af02\"" Sep 10 00:00:57.275496 containerd[1497]: time="2025-09-10T00:00:57.275111051Z" level=info msg="StartContainer for \"31a695bb86b447b6ed6aec35b3b59f23db15a6202108f0f128cf5db005e4af02\"" Sep 10 00:00:57.276541 containerd[1497]: time="2025-09-10T00:00:57.276506252Z" level=info msg="connecting to shim 31a695bb86b447b6ed6aec35b3b59f23db15a6202108f0f128cf5db005e4af02" address="unix:///run/containerd/s/c0cf8188a2f1130f08c4e2eb559e5e20ced943c3c3f544e9bf6d951096aa25c2" protocol=ttrpc version=3 Sep 10 00:00:57.279415 containerd[1497]: time="2025-09-10T00:00:57.279375936Z" level=info msg="CreateContainer within sandbox \"29a46599ed155b5040e4c3d6e0db9b61865649c123aea81a004623b92ecf52b5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"51d5c94c92bdf533d2b720a55538806b73c1b3f96f82dd9001ad1ab70d2dad0c\"" Sep 10 00:00:57.280126 containerd[1497]: time="2025-09-10T00:00:57.280103157Z" level=info msg="StartContainer for \"51d5c94c92bdf533d2b720a55538806b73c1b3f96f82dd9001ad1ab70d2dad0c\"" Sep 10 00:00:57.281533 containerd[1497]: time="2025-09-10T00:00:57.281505718Z" level=info msg="connecting to shim 51d5c94c92bdf533d2b720a55538806b73c1b3f96f82dd9001ad1ab70d2dad0c" address="unix:///run/containerd/s/98b1ed45801ce1c791ae3beb70c4896fd5c9a8815fa64cdc490b1d24243e6144" protocol=ttrpc version=3 Sep 10 00:00:57.298825 
systemd[1]: Started cri-containerd-31a695bb86b447b6ed6aec35b3b59f23db15a6202108f0f128cf5db005e4af02.scope - libcontainer container 31a695bb86b447b6ed6aec35b3b59f23db15a6202108f0f128cf5db005e4af02. Sep 10 00:00:57.311836 systemd[1]: Started cri-containerd-51d5c94c92bdf533d2b720a55538806b73c1b3f96f82dd9001ad1ab70d2dad0c.scope - libcontainer container 51d5c94c92bdf533d2b720a55538806b73c1b3f96f82dd9001ad1ab70d2dad0c. Sep 10 00:00:57.342518 containerd[1497]: time="2025-09-10T00:00:57.342467300Z" level=info msg="StartContainer for \"31a695bb86b447b6ed6aec35b3b59f23db15a6202108f0f128cf5db005e4af02\" returns successfully" Sep 10 00:00:57.353781 containerd[1497]: time="2025-09-10T00:00:57.353744909Z" level=info msg="StartContainer for \"51d5c94c92bdf533d2b720a55538806b73c1b3f96f82dd9001ad1ab70d2dad0c\" returns successfully" Sep 10 00:00:58.037394 systemd[1]: Started sshd@8-10.0.0.122:22-10.0.0.1:32782.service - OpenSSH per-connection server daemon (10.0.0.1:32782). Sep 10 00:00:58.109701 kubelet[2642]: I0910 00:00:58.108547 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-txjwb" podStartSLOduration=29.10853199 podStartE2EDuration="29.10853199s" podCreationTimestamp="2025-09-10 00:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:00:58.105182295 +0000 UTC m=+36.288523772" watchObservedRunningTime="2025-09-10 00:00:58.10853199 +0000 UTC m=+36.291873467" Sep 10 00:00:58.137628 sshd[4004]: Accepted publickey for core from 10.0.0.1 port 32782 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:00:58.137975 sshd-session[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:00:58.143498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2006780680.mount: Deactivated successfully. Sep 10 00:00:58.151476 systemd-logind[1483]: New session 9 of user core. Sep 10 00:00:58.158082 systemd[1]: Started session-9.scope - Session 9 of User core. Sep 10 00:00:58.287841 sshd[4009]: Connection closed by 10.0.0.1 port 32782 Sep 10 00:00:58.288288 sshd-session[4004]: pam_unix(sshd:session): session closed for user core Sep 10 00:00:58.292089 systemd[1]: sshd@8-10.0.0.122:22-10.0.0.1:32782.service: Deactivated successfully. Sep 10 00:00:58.294207 systemd[1]: session-9.scope: Deactivated successfully. Sep 10 00:00:58.296211 systemd-logind[1483]: Session 9 logged out. Waiting for processes to exit. Sep 10 00:00:58.297463 systemd-logind[1483]: Removed session 9. Sep 10 00:01:03.301438 systemd[1]: Started sshd@9-10.0.0.122:22-10.0.0.1:37378.service - OpenSSH per-connection server daemon (10.0.0.1:37378). Sep 10 00:01:03.364319 sshd[4032]: Accepted publickey for core from 10.0.0.1 port 37378 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:03.365415 sshd-session[4032]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:03.369997 systemd-logind[1483]: New session 10 of user core. Sep 10 00:01:03.377772 systemd[1]: Started session-10.scope - Session 10 of User core. Sep 10 00:01:03.496620 sshd[4035]: Connection closed by 10.0.0.1 port 37378 Sep 10 00:01:03.496366 sshd-session[4032]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:03.499904 systemd[1]: sshd@9-10.0.0.122:22-10.0.0.1:37378.service: Deactivated successfully. Sep 10 00:01:03.501774 systemd[1]: session-10.scope: Deactivated successfully. 
Sep 10 00:01:03.502615 systemd-logind[1483]: Session 10 logged out. Waiting for processes to exit. Sep 10 00:01:03.503653 systemd-logind[1483]: Removed session 10. Sep 10 00:01:08.519890 systemd[1]: Started sshd@10-10.0.0.122:22-10.0.0.1:37392.service - OpenSSH per-connection server daemon (10.0.0.1:37392). Sep 10 00:01:08.584804 sshd[4050]: Accepted publickey for core from 10.0.0.1 port 37392 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:08.588372 sshd-session[4050]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:08.602902 systemd-logind[1483]: New session 11 of user core. Sep 10 00:01:08.615787 systemd[1]: Started session-11.scope - Session 11 of User core. Sep 10 00:01:08.770977 sshd[4053]: Connection closed by 10.0.0.1 port 37392 Sep 10 00:01:08.771392 sshd-session[4050]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:08.783177 systemd[1]: sshd@10-10.0.0.122:22-10.0.0.1:37392.service: Deactivated successfully. Sep 10 00:01:08.785719 systemd[1]: session-11.scope: Deactivated successfully. Sep 10 00:01:08.786809 systemd-logind[1483]: Session 11 logged out. Waiting for processes to exit. Sep 10 00:01:08.789425 systemd-logind[1483]: Removed session 11. Sep 10 00:01:08.792181 systemd[1]: Started sshd@11-10.0.0.122:22-10.0.0.1:37398.service - OpenSSH per-connection server daemon (10.0.0.1:37398). Sep 10 00:01:08.854396 sshd[4068]: Accepted publickey for core from 10.0.0.1 port 37398 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:08.855837 sshd-session[4068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:08.859879 systemd-logind[1483]: New session 12 of user core. Sep 10 00:01:08.869783 systemd[1]: Started session-12.scope - Session 12 of User core. Sep 10 00:01:09.023748 sshd[4071]: Connection closed by 10.0.0.1 port 37398 Sep 10 00:01:09.024205 sshd-session[4068]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:09.037303 systemd[1]: sshd@11-10.0.0.122:22-10.0.0.1:37398.service: Deactivated successfully. Sep 10 00:01:09.044024 systemd[1]: session-12.scope: Deactivated successfully. Sep 10 00:01:09.047316 systemd-logind[1483]: Session 12 logged out. Waiting for processes to exit. Sep 10 00:01:09.051925 systemd[1]: Started sshd@12-10.0.0.122:22-10.0.0.1:37408.service - OpenSSH per-connection server daemon (10.0.0.1:37408). Sep 10 00:01:09.054220 systemd-logind[1483]: Removed session 12. Sep 10 00:01:09.112133 sshd[4083]: Accepted publickey for core from 10.0.0.1 port 37408 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:09.113631 sshd-session[4083]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:09.118480 systemd-logind[1483]: New session 13 of user core. Sep 10 00:01:09.128831 systemd[1]: Started session-13.scope - Session 13 of User core. Sep 10 00:01:09.246923 sshd[4086]: Connection closed by 10.0.0.1 port 37408 Sep 10 00:01:09.247281 sshd-session[4083]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:09.250795 systemd[1]: sshd@12-10.0.0.122:22-10.0.0.1:37408.service: Deactivated successfully. Sep 10 00:01:09.252485 systemd[1]: session-13.scope: Deactivated successfully. Sep 10 00:01:09.253404 systemd-logind[1483]: Session 13 logged out. Waiting for processes to exit. Sep 10 00:01:09.254689 systemd-logind[1483]: Removed session 13. 
Sep 10 00:01:14.265466 systemd[1]: Started sshd@13-10.0.0.122:22-10.0.0.1:42454.service - OpenSSH per-connection server daemon (10.0.0.1:42454). Sep 10 00:01:14.312896 sshd[4101]: Accepted publickey for core from 10.0.0.1 port 42454 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:14.314905 sshd-session[4101]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:14.321656 systemd-logind[1483]: New session 14 of user core. Sep 10 00:01:14.326796 systemd[1]: Started session-14.scope - Session 14 of User core. Sep 10 00:01:14.452493 sshd[4104]: Connection closed by 10.0.0.1 port 42454 Sep 10 00:01:14.453226 sshd-session[4101]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:14.457409 systemd[1]: sshd@13-10.0.0.122:22-10.0.0.1:42454.service: Deactivated successfully. Sep 10 00:01:14.459171 systemd[1]: session-14.scope: Deactivated successfully. Sep 10 00:01:14.463472 systemd-logind[1483]: Session 14 logged out. Waiting for processes to exit. Sep 10 00:01:14.465090 systemd-logind[1483]: Removed session 14. Sep 10 00:01:19.468289 systemd[1]: Started sshd@14-10.0.0.122:22-10.0.0.1:42456.service - OpenSSH per-connection server daemon (10.0.0.1:42456). Sep 10 00:01:19.539893 sshd[4118]: Accepted publickey for core from 10.0.0.1 port 42456 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:19.542332 sshd-session[4118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:19.548498 systemd-logind[1483]: New session 15 of user core. Sep 10 00:01:19.557850 systemd[1]: Started session-15.scope - Session 15 of User core. Sep 10 00:01:19.683653 sshd[4121]: Connection closed by 10.0.0.1 port 42456 Sep 10 00:01:19.684319 sshd-session[4118]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:19.694059 systemd[1]: sshd@14-10.0.0.122:22-10.0.0.1:42456.service: Deactivated successfully. Sep 10 00:01:19.696459 systemd[1]: session-15.scope: Deactivated successfully. Sep 10 00:01:19.697651 systemd-logind[1483]: Session 15 logged out. Waiting for processes to exit. Sep 10 00:01:19.701224 systemd[1]: Started sshd@15-10.0.0.122:22-10.0.0.1:42460.service - OpenSSH per-connection server daemon (10.0.0.1:42460). Sep 10 00:01:19.701865 systemd-logind[1483]: Removed session 15. Sep 10 00:01:19.769611 sshd[4135]: Accepted publickey for core from 10.0.0.1 port 42460 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:19.771184 sshd-session[4135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:19.777611 systemd-logind[1483]: New session 16 of user core. Sep 10 00:01:19.789949 systemd[1]: Started session-16.scope - Session 16 of User core. Sep 10 00:01:19.994647 sshd[4138]: Connection closed by 10.0.0.1 port 42460 Sep 10 00:01:19.995818 sshd-session[4135]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:20.004428 systemd[1]: sshd@15-10.0.0.122:22-10.0.0.1:42460.service: Deactivated successfully. Sep 10 00:01:20.006145 systemd[1]: session-16.scope: Deactivated successfully. Sep 10 00:01:20.006874 systemd-logind[1483]: Session 16 logged out. Waiting for processes to exit. Sep 10 00:01:20.010036 systemd[1]: Started sshd@16-10.0.0.122:22-10.0.0.1:52256.service - OpenSSH per-connection server daemon (10.0.0.1:52256). Sep 10 00:01:20.010903 systemd-logind[1483]: Removed session 16. 
Sep 10 00:01:20.072929 sshd[4150]: Accepted publickey for core from 10.0.0.1 port 52256 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:20.074463 sshd-session[4150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:20.078772 systemd-logind[1483]: New session 17 of user core. Sep 10 00:01:20.088797 systemd[1]: Started session-17.scope - Session 17 of User core. Sep 10 00:01:20.732551 sshd[4153]: Connection closed by 10.0.0.1 port 52256 Sep 10 00:01:20.733380 sshd-session[4150]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:20.747648 systemd[1]: sshd@16-10.0.0.122:22-10.0.0.1:52256.service: Deactivated successfully. Sep 10 00:01:20.750505 systemd[1]: session-17.scope: Deactivated successfully. Sep 10 00:01:20.752537 systemd-logind[1483]: Session 17 logged out. Waiting for processes to exit. Sep 10 00:01:20.758861 systemd[1]: Started sshd@17-10.0.0.122:22-10.0.0.1:52268.service - OpenSSH per-connection server daemon (10.0.0.1:52268). Sep 10 00:01:20.760422 systemd-logind[1483]: Removed session 17. Sep 10 00:01:20.822073 sshd[4175]: Accepted publickey for core from 10.0.0.1 port 52268 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:20.824071 sshd-session[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:20.830160 systemd-logind[1483]: New session 18 of user core. Sep 10 00:01:20.839808 systemd[1]: Started session-18.scope - Session 18 of User core. Sep 10 00:01:21.101960 sshd[4178]: Connection closed by 10.0.0.1 port 52268 Sep 10 00:01:21.102420 sshd-session[4175]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:21.115751 systemd[1]: sshd@17-10.0.0.122:22-10.0.0.1:52268.service: Deactivated successfully. Sep 10 00:01:21.118330 systemd[1]: session-18.scope: Deactivated successfully. Sep 10 00:01:21.120923 systemd-logind[1483]: Session 18 logged out. Waiting for processes to exit. Sep 10 00:01:21.123969 systemd[1]: Started sshd@18-10.0.0.122:22-10.0.0.1:52284.service - OpenSSH per-connection server daemon (10.0.0.1:52284). Sep 10 00:01:21.126889 systemd-logind[1483]: Removed session 18. Sep 10 00:01:21.192551 sshd[4189]: Accepted publickey for core from 10.0.0.1 port 52284 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:21.194377 sshd-session[4189]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:21.199496 systemd-logind[1483]: New session 19 of user core. Sep 10 00:01:21.205787 systemd[1]: Started session-19.scope - Session 19 of User core. Sep 10 00:01:21.324386 sshd[4192]: Connection closed by 10.0.0.1 port 52284 Sep 10 00:01:21.324779 sshd-session[4189]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:21.328349 systemd[1]: sshd@18-10.0.0.122:22-10.0.0.1:52284.service: Deactivated successfully. Sep 10 00:01:21.332250 systemd[1]: session-19.scope: Deactivated successfully. Sep 10 00:01:21.333657 systemd-logind[1483]: Session 19 logged out. Waiting for processes to exit. Sep 10 00:01:21.335296 systemd-logind[1483]: Removed session 19. Sep 10 00:01:26.336637 systemd[1]: Started sshd@19-10.0.0.122:22-10.0.0.1:52290.service - OpenSSH per-connection server daemon (10.0.0.1:52290). 
Sep 10 00:01:26.415853 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 52290 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:26.418784 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:26.425363 systemd-logind[1483]: New session 20 of user core. Sep 10 00:01:26.435790 systemd[1]: Started session-20.scope - Session 20 of User core. Sep 10 00:01:26.575654 sshd[4213]: Connection closed by 10.0.0.1 port 52290 Sep 10 00:01:26.576181 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:26.579911 systemd[1]: sshd@19-10.0.0.122:22-10.0.0.1:52290.service: Deactivated successfully. Sep 10 00:01:26.581811 systemd[1]: session-20.scope: Deactivated successfully. Sep 10 00:01:26.582671 systemd-logind[1483]: Session 20 logged out. Waiting for processes to exit. Sep 10 00:01:26.583863 systemd-logind[1483]: Removed session 20. Sep 10 00:01:31.588746 systemd[1]: Started sshd@20-10.0.0.122:22-10.0.0.1:53120.service - OpenSSH per-connection server daemon (10.0.0.1:53120). Sep 10 00:01:31.665091 sshd[4231]: Accepted publickey for core from 10.0.0.1 port 53120 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:31.666634 sshd-session[4231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:31.670990 systemd-logind[1483]: New session 21 of user core. Sep 10 00:01:31.678141 systemd[1]: Started session-21.scope - Session 21 of User core. Sep 10 00:01:31.816969 sshd[4234]: Connection closed by 10.0.0.1 port 53120 Sep 10 00:01:31.818483 sshd-session[4231]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:31.829374 systemd[1]: sshd@20-10.0.0.122:22-10.0.0.1:53120.service: Deactivated successfully. Sep 10 00:01:31.831483 systemd[1]: session-21.scope: Deactivated successfully. Sep 10 00:01:31.833762 systemd-logind[1483]: Session 21 logged out. Waiting for processes to exit. Sep 10 00:01:31.835509 systemd-logind[1483]: Removed session 21. Sep 10 00:01:31.839882 systemd[1]: Started sshd@21-10.0.0.122:22-10.0.0.1:53136.service - OpenSSH per-connection server daemon (10.0.0.1:53136). Sep 10 00:01:31.905120 sshd[4247]: Accepted publickey for core from 10.0.0.1 port 53136 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:31.906036 sshd-session[4247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:31.912323 systemd-logind[1483]: New session 22 of user core. Sep 10 00:01:31.923797 systemd[1]: Started session-22.scope - Session 22 of User core. 
Sep 10 00:01:33.866653 kubelet[2642]: I0910 00:01:33.865972 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vc6h2" podStartSLOduration=64.865955952 podStartE2EDuration="1m4.865955952s" podCreationTimestamp="2025-09-10 00:00:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:00:58.143500701 +0000 UTC m=+36.326842178" watchObservedRunningTime="2025-09-10 00:01:33.865955952 +0000 UTC m=+72.049297429" Sep 10 00:01:33.882913 containerd[1497]: time="2025-09-10T00:01:33.882856159Z" level=info msg="StopContainer for \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" with timeout 30 (s)" Sep 10 00:01:33.883902 containerd[1497]: time="2025-09-10T00:01:33.883364716Z" level=info msg="Stop container \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" with signal terminated" Sep 10 00:01:33.893220 systemd[1]: cri-containerd-9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1.scope: Deactivated successfully. Sep 10 00:01:33.894829 containerd[1497]: time="2025-09-10T00:01:33.894790199Z" level=info msg="received exit event container_id:\"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" id:\"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" pid:3271 exited_at:{seconds:1757462493 nanos:894374922}" Sep 10 00:01:33.894999 containerd[1497]: time="2025-09-10T00:01:33.894947038Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" id:\"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" pid:3271 exited_at:{seconds:1757462493 nanos:894374922}" Sep 10 00:01:33.908904 containerd[1497]: time="2025-09-10T00:01:33.908864905Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" id:\"b192a6def0d848a5ac2d8a155cfa73caa512143693606d0c8c470b72c334023e\" pid:4277 exited_at:{seconds:1757462493 nanos:908620346}" Sep 10 00:01:33.910961 containerd[1497]: time="2025-09-10T00:01:33.910936171Z" level=info msg="StopContainer for \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" with timeout 2 (s)" Sep 10 00:01:33.912172 containerd[1497]: time="2025-09-10T00:01:33.911622286Z" level=info msg="Stop container \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" with signal terminated" Sep 10 00:01:33.912939 containerd[1497]: time="2025-09-10T00:01:33.912896518Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Sep 10 00:01:33.921550 systemd-networkd[1437]: lxc_health: Link DOWN Sep 10 00:01:33.921612 systemd-networkd[1437]: lxc_health: Lost carrier Sep 10 00:01:33.922127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1-rootfs.mount: Deactivated successfully. 
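(Editor's note, not part of the log.) The StopContainer calls above carry a grace period (30 s for the operator, 2 s for the agent) and a stop signal ("signal terminated"). As a generic illustration of that contract only, and not containerd's actual implementation, the usual pattern is: deliver SIGTERM, wait up to the timeout for the process to exit, then escalate to SIGKILL.

```python
# Generic graceful-stop sketch (illustrative; containerd/runc do this via
# the shim, not like this): send the stop signal, poll until the deadline,
# then force-kill. A real supervisor would also reap the child with wait().
import os, signal, time

def stop_gracefully(pid: int, timeout_s: float) -> None:
    os.kill(pid, signal.SIGTERM)            # "Stop container ... with signal terminated"
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        try:
            os.kill(pid, 0)                 # probe: raises if the process is gone
        except ProcessLookupError:
            return                          # exited within the grace period
        time.sleep(0.1)
    os.kill(pid, signal.SIGKILL)            # grace period expired; force kill
```

In the entries that follow, both containers exit well within their grace periods, so the force-kill path is never exercised.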
Sep 10 00:01:33.935587 containerd[1497]: time="2025-09-10T00:01:33.933960257Z" level=info msg="StopContainer for \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" returns successfully" Sep 10 00:01:33.938111 containerd[1497]: time="2025-09-10T00:01:33.938074629Z" level=info msg="StopPodSandbox for \"8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209\"" Sep 10 00:01:33.941410 containerd[1497]: time="2025-09-10T00:01:33.941321527Z" level=info msg="Container to stop \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:01:33.944634 systemd[1]: cri-containerd-ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb.scope: Deactivated successfully. Sep 10 00:01:33.945097 systemd[1]: cri-containerd-ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb.scope: Consumed 6.673s CPU time, 124.7M memory peak, 144K read from disk, 12.9M written to disk. Sep 10 00:01:33.946266 containerd[1497]: time="2025-09-10T00:01:33.946233534Z" level=info msg="received exit event container_id:\"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" id:\"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" pid:3306 exited_at:{seconds:1757462493 nanos:945907137}" Sep 10 00:01:33.946343 containerd[1497]: time="2025-09-10T00:01:33.946276214Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" id:\"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" pid:3306 exited_at:{seconds:1757462493 nanos:945907137}" Sep 10 00:01:33.950858 systemd[1]: cri-containerd-8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209.scope: Deactivated successfully. Sep 10 00:01:33.952374 containerd[1497]: time="2025-09-10T00:01:33.952343894Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209\" id:\"8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209\" pid:2852 exit_status:137 exited_at:{seconds:1757462493 nanos:952052736}" Sep 10 00:01:33.965403 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb-rootfs.mount: Deactivated successfully. Sep 10 00:01:33.977184 containerd[1497]: time="2025-09-10T00:01:33.977127088Z" level=info msg="StopContainer for \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" returns successfully" Sep 10 00:01:33.977644 containerd[1497]: time="2025-09-10T00:01:33.977619124Z" level=info msg="StopPodSandbox for \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\"" Sep 10 00:01:33.981847 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209-rootfs.mount: Deactivated successfully. 
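(Editor's note, not part of the log.) The per-scope accounting above ("Consumed 6.673s CPU time, 124.7M memory peak, 144K read from disk, 12.9M written to disk") is what systemd reports from the unit's cgroup v2 controllers when the scope stops. A rough sketch of reading the same counters directly is below; the path is hypothetical (the real scope path depends on the slice layout), and memory.peak requires a reasonably recent kernel, which the 6.12 kernel booted here satisfies.

```python
# Editor-added sketch: read the cgroup v2 counters that back systemd's
# "Consumed ... CPU time, ... memory peak, ... read/written" summary.
from pathlib import Path

def scope_usage(cgroup: Path) -> dict:
    # cpu.stat lines look like "usage_usec 6673000"
    cpu = dict(line.split() for line in (cgroup / "cpu.stat").read_text().splitlines())
    usage = {
        "cpu_seconds": int(cpu["usage_usec"]) / 1e6,
        "memory_peak_bytes": int((cgroup / "memory.peak").read_text()),
    }
    # io.stat lines look like "8:0 rbytes=147456 wbytes=13529088 ..."
    io = {}
    for line in (cgroup / "io.stat").read_text().splitlines():
        dev, *kv = line.split()
        io[dev] = dict(pair.split("=") for pair in kv)
    usage["io"] = io
    return usage

# e.g. scope_usage(Path("/sys/fs/cgroup/kubepods.slice/.../cri-containerd-<id>.scope"))
# (path shown for illustration only)
```

On recent systemd versions the same figures are exposed while the unit is running via `systemctl show <unit> -p CPUUsageNSec -p MemoryPeak -p IOReadBytes -p IOWriteBytes`.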
Sep 10 00:01:33.986492 containerd[1497]: time="2025-09-10T00:01:33.986426665Z" level=info msg="Container to stop \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:01:33.986492 containerd[1497]: time="2025-09-10T00:01:33.986482025Z" level=info msg="Container to stop \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:01:33.986492 containerd[1497]: time="2025-09-10T00:01:33.986492385Z" level=info msg="Container to stop \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:01:33.986701 containerd[1497]: time="2025-09-10T00:01:33.986502305Z" level=info msg="Container to stop \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:01:33.986701 containerd[1497]: time="2025-09-10T00:01:33.986512945Z" level=info msg="Container to stop \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Sep 10 00:01:33.988294 containerd[1497]: time="2025-09-10T00:01:33.988268693Z" level=info msg="shim disconnected" id=8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209 namespace=k8s.io Sep 10 00:01:33.992393 systemd[1]: cri-containerd-079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1.scope: Deactivated successfully. Sep 10 00:01:33.994719 containerd[1497]: time="2025-09-10T00:01:33.988469252Z" level=warning msg="cleaning up after shim disconnected" id=8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209 namespace=k8s.io Sep 10 00:01:33.994719 containerd[1497]: time="2025-09-10T00:01:33.994692450Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:01:34.018064 containerd[1497]: time="2025-09-10T00:01:34.017920784Z" level=info msg="TaskExit event in podsandbox handler container_id:\"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" id:\"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" pid:2795 exit_status:137 exited_at:{seconds:1757462493 nanos:998866982}" Sep 10 00:01:34.020708 containerd[1497]: time="2025-09-10T00:01:34.020484848Z" level=info msg="TearDown network for sandbox \"8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209\" successfully" Sep 10 00:01:34.020708 containerd[1497]: time="2025-09-10T00:01:34.020519648Z" level=info msg="StopPodSandbox for \"8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209\" returns successfully" Sep 10 00:01:34.019039 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1-rootfs.mount: Deactivated successfully. Sep 10 00:01:34.023257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209-shm.mount: Deactivated successfully. 
Sep 10 00:01:34.025342 containerd[1497]: time="2025-09-10T00:01:34.025293219Z" level=info msg="received exit event sandbox_id:\"8f95a26641b839c55865b69dda6b8cecb125e4416e6004079f4acef609b97209\" exit_status:137 exited_at:{seconds:1757462493 nanos:952052736}" Sep 10 00:01:34.029496 containerd[1497]: time="2025-09-10T00:01:34.029464793Z" level=info msg="shim disconnected" id=079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1 namespace=k8s.io Sep 10 00:01:34.029596 containerd[1497]: time="2025-09-10T00:01:34.029492593Z" level=warning msg="cleaning up after shim disconnected" id=079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1 namespace=k8s.io Sep 10 00:01:34.029596 containerd[1497]: time="2025-09-10T00:01:34.029520833Z" level=info msg="cleaning up dead shim" namespace=k8s.io Sep 10 00:01:34.029810 containerd[1497]: time="2025-09-10T00:01:34.029779792Z" level=info msg="received exit event sandbox_id:\"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" exit_status:137 exited_at:{seconds:1757462493 nanos:998866982}" Sep 10 00:01:34.053037 containerd[1497]: time="2025-09-10T00:01:34.052987970Z" level=info msg="TearDown network for sandbox \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" successfully" Sep 10 00:01:34.053037 containerd[1497]: time="2025-09-10T00:01:34.053028490Z" level=info msg="StopPodSandbox for \"079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1\" returns successfully" Sep 10 00:01:34.116121 kubelet[2642]: I0910 00:01:34.116084 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-host-proc-sys-kernel\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116121 kubelet[2642]: I0910 00:01:34.116124 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cilium-cgroup\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116121 kubelet[2642]: I0910 00:01:34.116149 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a03cb76b-be61-4004-96df-9b45274da63d-clustermesh-secrets\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116121 kubelet[2642]: I0910 00:01:34.116167 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66851fa4-7356-4f6a-b72d-b7034fa912d9-cilium-config-path\") pod \"66851fa4-7356-4f6a-b72d-b7034fa912d9\" (UID: \"66851fa4-7356-4f6a-b72d-b7034fa912d9\") " Sep 10 00:01:34.116121 kubelet[2642]: I0910 00:01:34.116188 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-xtables-lock\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116121 kubelet[2642]: I0910 00:01:34.116203 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-lib-modules\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" 
(UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116487 kubelet[2642]: I0910 00:01:34.116218 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cilium-run\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116487 kubelet[2642]: I0910 00:01:34.116235 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a03cb76b-be61-4004-96df-9b45274da63d-cilium-config-path\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116487 kubelet[2642]: I0910 00:01:34.116250 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cni-path\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116487 kubelet[2642]: I0910 00:01:34.116266 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vbch7\" (UniqueName: \"kubernetes.io/projected/a03cb76b-be61-4004-96df-9b45274da63d-kube-api-access-vbch7\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116487 kubelet[2642]: I0910 00:01:34.116280 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-hostproc\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116487 kubelet[2642]: I0910 00:01:34.116297 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x8vs6\" (UniqueName: \"kubernetes.io/projected/66851fa4-7356-4f6a-b72d-b7034fa912d9-kube-api-access-x8vs6\") pod \"66851fa4-7356-4f6a-b72d-b7034fa912d9\" (UID: \"66851fa4-7356-4f6a-b72d-b7034fa912d9\") " Sep 10 00:01:34.116719 kubelet[2642]: I0910 00:01:34.116311 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-host-proc-sys-net\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116719 kubelet[2642]: I0910 00:01:34.116326 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a03cb76b-be61-4004-96df-9b45274da63d-hubble-tls\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116719 kubelet[2642]: I0910 00:01:34.116340 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-bpf-maps\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 00:01:34.116719 kubelet[2642]: I0910 00:01:34.116354 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-etc-cni-netd\") pod \"a03cb76b-be61-4004-96df-9b45274da63d\" (UID: \"a03cb76b-be61-4004-96df-9b45274da63d\") " Sep 10 
00:01:34.118767 kubelet[2642]: I0910 00:01:34.118597 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.118767 kubelet[2642]: I0910 00:01:34.118596 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.118767 kubelet[2642]: I0910 00:01:34.118679 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.118767 kubelet[2642]: I0910 00:01:34.118697 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.118767 kubelet[2642]: I0910 00:01:34.118714 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.119045 kubelet[2642]: I0910 00:01:34.118998 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.120965 kubelet[2642]: I0910 00:01:34.120702 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-hostproc" (OuterVolumeSpecName: "hostproc") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.121092 kubelet[2642]: I0910 00:01:34.121056 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cni-path" (OuterVolumeSpecName: "cni-path") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.121126 kubelet[2642]: I0910 00:01:34.121116 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.121166 kubelet[2642]: I0910 00:01:34.121141 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Sep 10 00:01:34.122009 kubelet[2642]: I0910 00:01:34.121947 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a03cb76b-be61-4004-96df-9b45274da63d-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Sep 10 00:01:34.124121 kubelet[2642]: I0910 00:01:34.124080 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66851fa4-7356-4f6a-b72d-b7034fa912d9-kube-api-access-x8vs6" (OuterVolumeSpecName: "kube-api-access-x8vs6") pod "66851fa4-7356-4f6a-b72d-b7034fa912d9" (UID: "66851fa4-7356-4f6a-b72d-b7034fa912d9"). InnerVolumeSpecName "kube-api-access-x8vs6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:01:34.124287 kubelet[2642]: I0910 00:01:34.124256 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a03cb76b-be61-4004-96df-9b45274da63d-kube-api-access-vbch7" (OuterVolumeSpecName: "kube-api-access-vbch7") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "kube-api-access-vbch7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:01:34.125349 kubelet[2642]: I0910 00:01:34.125321 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a03cb76b-be61-4004-96df-9b45274da63d-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Sep 10 00:01:34.128761 kubelet[2642]: I0910 00:01:34.128688 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66851fa4-7356-4f6a-b72d-b7034fa912d9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "66851fa4-7356-4f6a-b72d-b7034fa912d9" (UID: "66851fa4-7356-4f6a-b72d-b7034fa912d9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:01:34.129045 kubelet[2642]: I0910 00:01:34.129008 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a03cb76b-be61-4004-96df-9b45274da63d-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a03cb76b-be61-4004-96df-9b45274da63d" (UID: "a03cb76b-be61-4004-96df-9b45274da63d"). 
InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Sep 10 00:01:34.175679 systemd[1]: Removed slice kubepods-besteffort-pod66851fa4_7356_4f6a_b72d_b7034fa912d9.slice - libcontainer container kubepods-besteffort-pod66851fa4_7356_4f6a_b72d_b7034fa912d9.slice. Sep 10 00:01:34.181932 kubelet[2642]: I0910 00:01:34.181611 2642 scope.go:117] "RemoveContainer" containerID="9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1" Sep 10 00:01:34.183279 containerd[1497]: time="2025-09-10T00:01:34.183245695Z" level=info msg="RemoveContainer for \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\"" Sep 10 00:01:34.183881 systemd[1]: Removed slice kubepods-burstable-poda03cb76b_be61_4004_96df_9b45274da63d.slice - libcontainer container kubepods-burstable-poda03cb76b_be61_4004_96df_9b45274da63d.slice. Sep 10 00:01:34.184006 systemd[1]: kubepods-burstable-poda03cb76b_be61_4004_96df_9b45274da63d.slice: Consumed 6.766s CPU time, 125M memory peak, 152K read from disk, 12.9M written to disk. Sep 10 00:01:34.187781 containerd[1497]: time="2025-09-10T00:01:34.187753867Z" level=info msg="RemoveContainer for \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" returns successfully" Sep 10 00:01:34.188016 kubelet[2642]: I0910 00:01:34.187985 2642 scope.go:117] "RemoveContainer" containerID="9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1" Sep 10 00:01:34.188259 containerd[1497]: time="2025-09-10T00:01:34.188184704Z" level=error msg="ContainerStatus for \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\": not found" Sep 10 00:01:34.191007 kubelet[2642]: E0910 00:01:34.190950 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\": not found" containerID="9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1" Sep 10 00:01:34.191069 kubelet[2642]: I0910 00:01:34.191011 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1"} err="failed to get container status \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\": rpc error: code = NotFound desc = an error occurred when try to find container \"9c2a44313e2cd08070b143656763807ee763962b8979895195d74794584170d1\": not found" Sep 10 00:01:34.191069 kubelet[2642]: I0910 00:01:34.191051 2642 scope.go:117] "RemoveContainer" containerID="ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb" Sep 10 00:01:34.193427 containerd[1497]: time="2025-09-10T00:01:34.193371993Z" level=info msg="RemoveContainer for \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\"" Sep 10 00:01:34.198861 containerd[1497]: time="2025-09-10T00:01:34.198826639Z" level=info msg="RemoveContainer for \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" returns successfully" Sep 10 00:01:34.199065 kubelet[2642]: I0910 00:01:34.198999 2642 scope.go:117] "RemoveContainer" containerID="792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab" Sep 10 00:01:34.200521 containerd[1497]: time="2025-09-10T00:01:34.200495309Z" level=info msg="RemoveContainer for 
\"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\"" Sep 10 00:01:34.214482 containerd[1497]: time="2025-09-10T00:01:34.214432744Z" level=info msg="RemoveContainer for \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\" returns successfully" Sep 10 00:01:34.216812 kubelet[2642]: I0910 00:01:34.216781 2642 scope.go:117] "RemoveContainer" containerID="6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87" Sep 10 00:01:34.217138 kubelet[2642]: I0910 00:01:34.217120 2642 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-xtables-lock\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217615 kubelet[2642]: I0910 00:01:34.217269 2642 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-lib-modules\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217660 kubelet[2642]: I0910 00:01:34.217626 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cilium-run\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217660 kubelet[2642]: I0910 00:01:34.217638 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a03cb76b-be61-4004-96df-9b45274da63d-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217660 kubelet[2642]: I0910 00:01:34.217647 2642 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cni-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217660 kubelet[2642]: I0910 00:01:34.217656 2642 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vbch7\" (UniqueName: \"kubernetes.io/projected/a03cb76b-be61-4004-96df-9b45274da63d-kube-api-access-vbch7\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217764 kubelet[2642]: I0910 00:01:34.217664 2642 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-hostproc\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217764 kubelet[2642]: I0910 00:01:34.217676 2642 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-x8vs6\" (UniqueName: \"kubernetes.io/projected/66851fa4-7356-4f6a-b72d-b7034fa912d9-kube-api-access-x8vs6\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217764 kubelet[2642]: I0910 00:01:34.217684 2642 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217764 kubelet[2642]: I0910 00:01:34.217694 2642 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a03cb76b-be61-4004-96df-9b45274da63d-hubble-tls\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217764 kubelet[2642]: I0910 00:01:34.217702 2642 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-bpf-maps\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217764 kubelet[2642]: I0910 00:01:34.217710 2642 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217764 kubelet[2642]: I0910 00:01:34.217718 2642 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217764 kubelet[2642]: I0910 00:01:34.217726 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a03cb76b-be61-4004-96df-9b45274da63d-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217922 kubelet[2642]: I0910 00:01:34.217735 2642 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a03cb76b-be61-4004-96df-9b45274da63d-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.217922 kubelet[2642]: I0910 00:01:34.217746 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/66851fa4-7356-4f6a-b72d-b7034fa912d9-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Sep 10 00:01:34.219624 containerd[1497]: time="2025-09-10T00:01:34.219597873Z" level=info msg="RemoveContainer for \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\"" Sep 10 00:01:34.224348 containerd[1497]: time="2025-09-10T00:01:34.224301164Z" level=info msg="RemoveContainer for \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\" returns successfully" Sep 10 00:01:34.224668 kubelet[2642]: I0910 00:01:34.224621 2642 scope.go:117] "RemoveContainer" containerID="d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc" Sep 10 00:01:34.226316 containerd[1497]: time="2025-09-10T00:01:34.226286352Z" level=info msg="RemoveContainer for \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\"" Sep 10 00:01:34.229464 containerd[1497]: time="2025-09-10T00:01:34.229436693Z" level=info msg="RemoveContainer for \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\" returns successfully" Sep 10 00:01:34.229698 kubelet[2642]: I0910 00:01:34.229672 2642 scope.go:117] "RemoveContainer" containerID="52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e" Sep 10 00:01:34.231465 containerd[1497]: time="2025-09-10T00:01:34.231407880Z" level=info msg="RemoveContainer for \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\"" Sep 10 00:01:34.234824 containerd[1497]: time="2025-09-10T00:01:34.234793780Z" level=info msg="RemoveContainer for \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\" returns successfully" Sep 10 00:01:34.235067 kubelet[2642]: I0910 00:01:34.235037 2642 scope.go:117] "RemoveContainer" containerID="ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb" Sep 10 00:01:34.235319 containerd[1497]: time="2025-09-10T00:01:34.235284417Z" level=error msg="ContainerStatus for \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\": not found" Sep 10 00:01:34.235423 kubelet[2642]: E0910 00:01:34.235402 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container 
\"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\": not found" containerID="ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb" Sep 10 00:01:34.235468 kubelet[2642]: I0910 00:01:34.235433 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb"} err="failed to get container status \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\": rpc error: code = NotFound desc = an error occurred when try to find container \"ba3cb9a7cc98cda4e6f8b225175557b7c2f5c85701fb882dbb434aec7d1832bb\": not found" Sep 10 00:01:34.235468 kubelet[2642]: I0910 00:01:34.235456 2642 scope.go:117] "RemoveContainer" containerID="792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab" Sep 10 00:01:34.235650 containerd[1497]: time="2025-09-10T00:01:34.235623375Z" level=error msg="ContainerStatus for \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\": not found" Sep 10 00:01:34.235760 kubelet[2642]: E0910 00:01:34.235737 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\": not found" containerID="792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab" Sep 10 00:01:34.235760 kubelet[2642]: I0910 00:01:34.235763 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab"} err="failed to get container status \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\": rpc error: code = NotFound desc = an error occurred when try to find container \"792a2715f4c70d569d0641fe8d46f03db1a714176cb6d563f8963029b4a702ab\": not found" Sep 10 00:01:34.235876 kubelet[2642]: I0910 00:01:34.235775 2642 scope.go:117] "RemoveContainer" containerID="6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87" Sep 10 00:01:34.235944 containerd[1497]: time="2025-09-10T00:01:34.235912973Z" level=error msg="ContainerStatus for \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\": not found" Sep 10 00:01:34.236038 kubelet[2642]: E0910 00:01:34.236018 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\": not found" containerID="6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87" Sep 10 00:01:34.236098 kubelet[2642]: I0910 00:01:34.236041 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87"} err="failed to get container status \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\": rpc error: code = NotFound desc = an error occurred when try to find container \"6eae703885109c65f01a6e2040654b4ef049a281b07cd0eb213130d2508ffd87\": not found" Sep 10 00:01:34.236098 kubelet[2642]: I0910 00:01:34.236056 2642 scope.go:117] 
"RemoveContainer" containerID="d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc" Sep 10 00:01:34.236262 containerd[1497]: time="2025-09-10T00:01:34.236230131Z" level=error msg="ContainerStatus for \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\": not found" Sep 10 00:01:34.236402 kubelet[2642]: E0910 00:01:34.236379 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\": not found" containerID="d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc" Sep 10 00:01:34.236438 kubelet[2642]: I0910 00:01:34.236411 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc"} err="failed to get container status \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\": rpc error: code = NotFound desc = an error occurred when try to find container \"d1d2914611ff8bc94917c8567fd74398f6e3d6814810123a5f0db9a9e7e98edc\": not found" Sep 10 00:01:34.236438 kubelet[2642]: I0910 00:01:34.236430 2642 scope.go:117] "RemoveContainer" containerID="52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e" Sep 10 00:01:34.236602 containerd[1497]: time="2025-09-10T00:01:34.236561529Z" level=error msg="ContainerStatus for \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\": not found" Sep 10 00:01:34.236698 kubelet[2642]: E0910 00:01:34.236666 2642 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\": not found" containerID="52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e" Sep 10 00:01:34.236698 kubelet[2642]: I0910 00:01:34.236690 2642 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e"} err="failed to get container status \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\": rpc error: code = NotFound desc = an error occurred when try to find container \"52bb49f01be7f1c96154659584c7cfcddef94434389663ff71eb1beb7ba1746e\": not found" Sep 10 00:01:34.920961 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-079c1afe3398969211cf41d1b29f0a50a6d9623336a48ead37820f21b3e1eed1-shm.mount: Deactivated successfully. Sep 10 00:01:34.921075 systemd[1]: var-lib-kubelet-pods-66851fa4\x2d7356\x2d4f6a\x2db72d\x2db7034fa912d9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dx8vs6.mount: Deactivated successfully. Sep 10 00:01:34.921129 systemd[1]: var-lib-kubelet-pods-a03cb76b\x2dbe61\x2d4004\x2d96df\x2d9b45274da63d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvbch7.mount: Deactivated successfully. Sep 10 00:01:34.921178 systemd[1]: var-lib-kubelet-pods-a03cb76b\x2dbe61\x2d4004\x2d96df\x2d9b45274da63d-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Sep 10 00:01:34.921228 systemd[1]: var-lib-kubelet-pods-a03cb76b\x2dbe61\x2d4004\x2d96df\x2d9b45274da63d-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Sep 10 00:01:35.830659 sshd[4250]: Connection closed by 10.0.0.1 port 53136 Sep 10 00:01:35.832173 sshd-session[4247]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:35.853806 systemd[1]: sshd@21-10.0.0.122:22-10.0.0.1:53136.service: Deactivated successfully. Sep 10 00:01:35.858797 systemd[1]: session-22.scope: Deactivated successfully. Sep 10 00:01:35.859002 systemd[1]: session-22.scope: Consumed 1.276s CPU time, 23.6M memory peak. Sep 10 00:01:35.862257 systemd-logind[1483]: Session 22 logged out. Waiting for processes to exit. Sep 10 00:01:35.863120 systemd[1]: Started sshd@22-10.0.0.122:22-10.0.0.1:53146.service - OpenSSH per-connection server daemon (10.0.0.1:53146). Sep 10 00:01:35.868014 systemd-logind[1483]: Removed session 22. Sep 10 00:01:35.907772 kubelet[2642]: I0910 00:01:35.906938 2642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66851fa4-7356-4f6a-b72d-b7034fa912d9" path="/var/lib/kubelet/pods/66851fa4-7356-4f6a-b72d-b7034fa912d9/volumes" Sep 10 00:01:35.907772 kubelet[2642]: I0910 00:01:35.907314 2642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a03cb76b-be61-4004-96df-9b45274da63d" path="/var/lib/kubelet/pods/a03cb76b-be61-4004-96df-9b45274da63d/volumes" Sep 10 00:01:35.930121 sshd[4401]: Accepted publickey for core from 10.0.0.1 port 53146 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:35.932805 sshd-session[4401]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:35.937426 systemd-logind[1483]: New session 23 of user core. Sep 10 00:01:35.947736 systemd[1]: Started session-23.scope - Session 23 of User core. Sep 10 00:01:36.975189 kubelet[2642]: E0910 00:01:36.975101 2642 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Sep 10 00:01:37.001711 sshd[4404]: Connection closed by 10.0.0.1 port 53146 Sep 10 00:01:37.001922 sshd-session[4401]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:37.013964 systemd[1]: sshd@22-10.0.0.122:22-10.0.0.1:53146.service: Deactivated successfully. Sep 10 00:01:37.020201 systemd[1]: session-23.scope: Deactivated successfully. Sep 10 00:01:37.024821 systemd-logind[1483]: Session 23 logged out. Waiting for processes to exit. Sep 10 00:01:37.030009 systemd[1]: Started sshd@23-10.0.0.122:22-10.0.0.1:53162.service - OpenSSH per-connection server daemon (10.0.0.1:53162). Sep 10 00:01:37.034208 systemd-logind[1483]: Removed session 23. Sep 10 00:01:37.047487 systemd[1]: Created slice kubepods-burstable-pod284a112e_db66_4943_ae23_202a6ae40314.slice - libcontainer container kubepods-burstable-pod284a112e_db66_4943_ae23_202a6ae40314.slice. Sep 10 00:01:37.107251 sshd[4415]: Accepted publickey for core from 10.0.0.1 port 53162 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:37.108609 sshd-session[4415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:37.112901 systemd-logind[1483]: New session 24 of user core. Sep 10 00:01:37.121745 systemd[1]: Started session-24.scope - Session 24 of User core. 
Sep 10 00:01:37.137027 kubelet[2642]: I0910 00:01:37.136984 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-cni-path\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137027 kubelet[2642]: I0910 00:01:37.137030 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/284a112e-db66-4943-ae23-202a6ae40314-clustermesh-secrets\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137205 kubelet[2642]: I0910 00:01:37.137055 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/284a112e-db66-4943-ae23-202a6ae40314-cilium-config-path\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137205 kubelet[2642]: I0910 00:01:37.137071 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-hostproc\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137205 kubelet[2642]: I0910 00:01:37.137086 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-host-proc-sys-net\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137205 kubelet[2642]: I0910 00:01:37.137137 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/284a112e-db66-4943-ae23-202a6ae40314-hubble-tls\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137413 kubelet[2642]: I0910 00:01:37.137215 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-cilium-run\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137413 kubelet[2642]: I0910 00:01:37.137242 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-bpf-maps\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137413 kubelet[2642]: I0910 00:01:37.137260 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-xtables-lock\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137413 kubelet[2642]: I0910 00:01:37.137277 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: 
\"kubernetes.io/secret/284a112e-db66-4943-ae23-202a6ae40314-cilium-ipsec-secrets\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137413 kubelet[2642]: I0910 00:01:37.137298 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-host-proc-sys-kernel\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137413 kubelet[2642]: I0910 00:01:37.137353 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-lib-modules\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137616 kubelet[2642]: I0910 00:01:37.137399 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-cilium-cgroup\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137616 kubelet[2642]: I0910 00:01:37.137432 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/284a112e-db66-4943-ae23-202a6ae40314-etc-cni-netd\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.137616 kubelet[2642]: I0910 00:01:37.137452 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wq6m4\" (UniqueName: \"kubernetes.io/projected/284a112e-db66-4943-ae23-202a6ae40314-kube-api-access-wq6m4\") pod \"cilium-2h5zd\" (UID: \"284a112e-db66-4943-ae23-202a6ae40314\") " pod="kube-system/cilium-2h5zd" Sep 10 00:01:37.172284 sshd[4418]: Connection closed by 10.0.0.1 port 53162 Sep 10 00:01:37.172709 sshd-session[4415]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:37.186770 systemd[1]: sshd@23-10.0.0.122:22-10.0.0.1:53162.service: Deactivated successfully. Sep 10 00:01:37.188363 systemd[1]: session-24.scope: Deactivated successfully. Sep 10 00:01:37.189088 systemd-logind[1483]: Session 24 logged out. Waiting for processes to exit. Sep 10 00:01:37.191334 systemd[1]: Started sshd@24-10.0.0.122:22-10.0.0.1:53176.service - OpenSSH per-connection server daemon (10.0.0.1:53176). Sep 10 00:01:37.192032 systemd-logind[1483]: Removed session 24. Sep 10 00:01:37.244993 sshd[4425]: Accepted publickey for core from 10.0.0.1 port 53176 ssh2: RSA SHA256:BIipJKfG3sr4zTNTEUz0SDDjJtEzBqbnZB4/ga6/CtY Sep 10 00:01:37.248841 sshd-session[4425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Sep 10 00:01:37.256631 systemd-logind[1483]: New session 25 of user core. Sep 10 00:01:37.272744 systemd[1]: Started session-25.scope - Session 25 of User core. 
Sep 10 00:01:37.354066 containerd[1497]: time="2025-09-10T00:01:37.354026469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2h5zd,Uid:284a112e-db66-4943-ae23-202a6ae40314,Namespace:kube-system,Attempt:0,}" Sep 10 00:01:37.371130 containerd[1497]: time="2025-09-10T00:01:37.371076033Z" level=info msg="connecting to shim 36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9" address="unix:///run/containerd/s/7fece8ff51a37c6b273080c8a3dfa22da73be195099461deee19577936ac6460" namespace=k8s.io protocol=ttrpc version=3 Sep 10 00:01:37.404809 systemd[1]: Started cri-containerd-36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9.scope - libcontainer container 36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9. Sep 10 00:01:37.431604 containerd[1497]: time="2025-09-10T00:01:37.431550885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2h5zd,Uid:284a112e-db66-4943-ae23-202a6ae40314,Namespace:kube-system,Attempt:0,} returns sandbox id \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\"" Sep 10 00:01:37.438156 containerd[1497]: time="2025-09-10T00:01:37.438083496Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Sep 10 00:01:37.444820 containerd[1497]: time="2025-09-10T00:01:37.444781786Z" level=info msg="Container a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:01:37.450309 containerd[1497]: time="2025-09-10T00:01:37.450257562Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204\"" Sep 10 00:01:37.451044 containerd[1497]: time="2025-09-10T00:01:37.451018998Z" level=info msg="StartContainer for \"a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204\"" Sep 10 00:01:37.452153 containerd[1497]: time="2025-09-10T00:01:37.452126794Z" level=info msg="connecting to shim a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204" address="unix:///run/containerd/s/7fece8ff51a37c6b273080c8a3dfa22da73be195099461deee19577936ac6460" protocol=ttrpc version=3 Sep 10 00:01:37.474731 systemd[1]: Started cri-containerd-a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204.scope - libcontainer container a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204. Sep 10 00:01:37.501088 containerd[1497]: time="2025-09-10T00:01:37.500993497Z" level=info msg="StartContainer for \"a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204\" returns successfully" Sep 10 00:01:37.506752 systemd[1]: cri-containerd-a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204.scope: Deactivated successfully. 
Sep 10 00:01:37.507903 containerd[1497]: time="2025-09-10T00:01:37.507869706Z" level=info msg="received exit event container_id:\"a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204\" id:\"a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204\" pid:4497 exited_at:{seconds:1757462497 nanos:507678347}" Sep 10 00:01:37.508164 containerd[1497]: time="2025-09-10T00:01:37.508144385Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204\" id:\"a27aea7e78eeff31083b60a30c280f97d65a275678c6e3b32fc9d8b4b2ad6204\" pid:4497 exited_at:{seconds:1757462497 nanos:507678347}" Sep 10 00:01:38.196307 containerd[1497]: time="2025-09-10T00:01:38.196267634Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Sep 10 00:01:38.203017 containerd[1497]: time="2025-09-10T00:01:38.202840808Z" level=info msg="Container 8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:01:38.209735 containerd[1497]: time="2025-09-10T00:01:38.209697181Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac\"" Sep 10 00:01:38.210398 containerd[1497]: time="2025-09-10T00:01:38.210294019Z" level=info msg="StartContainer for \"8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac\"" Sep 10 00:01:38.211141 containerd[1497]: time="2025-09-10T00:01:38.211117096Z" level=info msg="connecting to shim 8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac" address="unix:///run/containerd/s/7fece8ff51a37c6b273080c8a3dfa22da73be195099461deee19577936ac6460" protocol=ttrpc version=3 Sep 10 00:01:38.232762 systemd[1]: Started cri-containerd-8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac.scope - libcontainer container 8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac. Sep 10 00:01:38.264907 containerd[1497]: time="2025-09-10T00:01:38.264859686Z" level=info msg="StartContainer for \"8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac\" returns successfully" Sep 10 00:01:38.267808 systemd[1]: cri-containerd-8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac.scope: Deactivated successfully. Sep 10 00:01:38.269907 containerd[1497]: time="2025-09-10T00:01:38.269869586Z" level=info msg="received exit event container_id:\"8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac\" id:\"8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac\" pid:4543 exited_at:{seconds:1757462498 nanos:269289588}" Sep 10 00:01:38.271024 containerd[1497]: time="2025-09-10T00:01:38.270973062Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac\" id:\"8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac\" pid:4543 exited_at:{seconds:1757462498 nanos:269289588}" Sep 10 00:01:38.287162 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8f5373b2ca9030cd9bd4986c4b4c08700b6db1a21c7e12681bde57c685d378ac-rootfs.mount: Deactivated successfully. 
Sep 10 00:01:39.201974 containerd[1497]: time="2025-09-10T00:01:39.201920721Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Sep 10 00:01:39.218217 containerd[1497]: time="2025-09-10T00:01:39.218139226Z" level=info msg="Container 1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:01:39.221269 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount302710669.mount: Deactivated successfully. Sep 10 00:01:39.232996 containerd[1497]: time="2025-09-10T00:01:39.232944935Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73\"" Sep 10 00:01:39.234137 containerd[1497]: time="2025-09-10T00:01:39.234098251Z" level=info msg="StartContainer for \"1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73\"" Sep 10 00:01:39.235591 containerd[1497]: time="2025-09-10T00:01:39.235539046Z" level=info msg="connecting to shim 1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73" address="unix:///run/containerd/s/7fece8ff51a37c6b273080c8a3dfa22da73be195099461deee19577936ac6460" protocol=ttrpc version=3 Sep 10 00:01:39.261743 systemd[1]: Started cri-containerd-1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73.scope - libcontainer container 1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73. Sep 10 00:01:39.293494 systemd[1]: cri-containerd-1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73.scope: Deactivated successfully. Sep 10 00:01:39.295148 containerd[1497]: time="2025-09-10T00:01:39.294994284Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73\" id:\"1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73\" pid:4588 exited_at:{seconds:1757462499 nanos:294131727}" Sep 10 00:01:39.295148 containerd[1497]: time="2025-09-10T00:01:39.295039884Z" level=info msg="received exit event container_id:\"1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73\" id:\"1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73\" pid:4588 exited_at:{seconds:1757462499 nanos:294131727}" Sep 10 00:01:39.297436 containerd[1497]: time="2025-09-10T00:01:39.297398036Z" level=info msg="StartContainer for \"1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73\" returns successfully" Sep 10 00:01:39.312973 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d64bd829a51b341985ed9df56f2a493aae14756a0e2f7d3e455116368654f73-rootfs.mount: Deactivated successfully. 
Sep 10 00:01:40.205426 containerd[1497]: time="2025-09-10T00:01:40.205044803Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Sep 10 00:01:40.213411 containerd[1497]: time="2025-09-10T00:01:40.213363819Z" level=info msg="Container 5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:01:40.221833 containerd[1497]: time="2025-09-10T00:01:40.221788114Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a\"" Sep 10 00:01:40.222673 containerd[1497]: time="2025-09-10T00:01:40.222651752Z" level=info msg="StartContainer for \"5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a\"" Sep 10 00:01:40.223751 containerd[1497]: time="2025-09-10T00:01:40.223723988Z" level=info msg="connecting to shim 5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a" address="unix:///run/containerd/s/7fece8ff51a37c6b273080c8a3dfa22da73be195099461deee19577936ac6460" protocol=ttrpc version=3 Sep 10 00:01:40.242741 systemd[1]: Started cri-containerd-5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a.scope - libcontainer container 5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a. Sep 10 00:01:40.271731 systemd[1]: cri-containerd-5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a.scope: Deactivated successfully. Sep 10 00:01:40.274177 containerd[1497]: time="2025-09-10T00:01:40.274144801Z" level=info msg="StartContainer for \"5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a\" returns successfully" Sep 10 00:01:40.274537 containerd[1497]: time="2025-09-10T00:01:40.274498520Z" level=info msg="received exit event container_id:\"5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a\" id:\"5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a\" pid:4627 exited_at:{seconds:1757462500 nanos:274292001}" Sep 10 00:01:40.274812 containerd[1497]: time="2025-09-10T00:01:40.274656560Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a\" id:\"5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a\" pid:4627 exited_at:{seconds:1757462500 nanos:274292001}" Sep 10 00:01:40.296380 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5470a377164e72eb9c6eba7325f0456e7545cc04b54e61edeedfdcd71c79b02a-rootfs.mount: Deactivated successfully. 
Sep 10 00:01:41.215555 containerd[1497]: time="2025-09-10T00:01:41.215501637Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Sep 10 00:01:41.227610 containerd[1497]: time="2025-09-10T00:01:41.227400448Z" level=info msg="Container 8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895: CDI devices from CRI Config.CDIDevices: []" Sep 10 00:01:41.234379 containerd[1497]: time="2025-09-10T00:01:41.234338871Z" level=info msg="CreateContainer within sandbox \"36711da582742b4a4ca62f07afe4d05522175cbae6c12b8fad6b8856b8f5cfa9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895\"" Sep 10 00:01:41.235001 containerd[1497]: time="2025-09-10T00:01:41.234901389Z" level=info msg="StartContainer for \"8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895\"" Sep 10 00:01:41.236355 containerd[1497]: time="2025-09-10T00:01:41.236151106Z" level=info msg="connecting to shim 8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895" address="unix:///run/containerd/s/7fece8ff51a37c6b273080c8a3dfa22da73be195099461deee19577936ac6460" protocol=ttrpc version=3 Sep 10 00:01:41.257739 systemd[1]: Started cri-containerd-8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895.scope - libcontainer container 8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895. Sep 10 00:01:41.288938 containerd[1497]: time="2025-09-10T00:01:41.288892098Z" level=info msg="StartContainer for \"8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895\" returns successfully" Sep 10 00:01:41.341107 containerd[1497]: time="2025-09-10T00:01:41.341060970Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895\" id:\"d95afb0c77157e6359ab35b8c2e13ebd54f238508928da281154434a319d94c5\" pid:4692 exited_at:{seconds:1757462501 nanos:340740771}" Sep 10 00:01:41.555591 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Sep 10 00:01:43.720690 containerd[1497]: time="2025-09-10T00:01:43.720644671Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895\" id:\"181b0fa3f165966fb352153c84ce86e8f23b841ad7ffcb85fc868cb2e88dfc15\" pid:4968 exit_status:1 exited_at:{seconds:1757462503 nanos:720362952}" Sep 10 00:01:43.736799 kubelet[2642]: E0910 00:01:43.736760 2642 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:41094->127.0.0.1:43861: write tcp 127.0.0.1:41094->127.0.0.1:43861: write: broken pipe Sep 10 00:01:44.562347 systemd-networkd[1437]: lxc_health: Link UP Sep 10 00:01:44.573229 systemd-networkd[1437]: lxc_health: Gained carrier Sep 10 00:01:45.380600 kubelet[2642]: I0910 00:01:45.380366 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-2h5zd" podStartSLOduration=8.380342038 podStartE2EDuration="8.380342038s" podCreationTimestamp="2025-09-10 00:01:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-09-10 00:01:42.229104785 +0000 UTC m=+80.412446262" watchObservedRunningTime="2025-09-10 00:01:45.380342038 +0000 UTC m=+83.563683515" Sep 10 00:01:45.636799 systemd-networkd[1437]: lxc_health: Gained IPv6LL Sep 10 00:01:45.858799 
containerd[1497]: time="2025-09-10T00:01:45.858760711Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895\" id:\"777e2b44cae6294accebf063e873e8de482222cd7e198f00c761775cd37c1d4c\" pid:5230 exited_at:{seconds:1757462505 nanos:858398471}" Sep 10 00:01:47.979335 containerd[1497]: time="2025-09-10T00:01:47.979295965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895\" id:\"473204929ee102226138466a3d73e7d006ce5406567c719bf0e81ca0a65306ca\" pid:5259 exited_at:{seconds:1757462507 nanos:978703045}" Sep 10 00:01:50.135640 containerd[1497]: time="2025-09-10T00:01:50.135562326Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8caf1d75a4e6c405b904a0392b5373e56ef1d8bd51cbb2526870821187a37895\" id:\"0885663c7d11b4899d22bb3b13ca195313d0cf43800b46f93f868a2329d6d031\" pid:5290 exited_at:{seconds:1757462510 nanos:135209686}" Sep 10 00:01:50.151686 sshd[4432]: Connection closed by 10.0.0.1 port 53176 Sep 10 00:01:50.152317 sshd-session[4425]: pam_unix(sshd:session): session closed for user core Sep 10 00:01:50.157100 systemd[1]: sshd@24-10.0.0.122:22-10.0.0.1:53176.service: Deactivated successfully. Sep 10 00:01:50.158751 systemd[1]: session-25.scope: Deactivated successfully. Sep 10 00:01:50.159416 systemd-logind[1483]: Session 25 logged out. Waiting for processes to exit. Sep 10 00:01:50.160696 systemd-logind[1483]: Removed session 25.