Jul 12 09:31:10.780528 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 12 09:31:10.780547 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sat Jul 12 08:24:03 -00 2025 Jul 12 09:31:10.780557 kernel: KASLR enabled Jul 12 09:31:10.780562 kernel: efi: EFI v2.7 by EDK II Jul 12 09:31:10.780568 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Jul 12 09:31:10.780574 kernel: random: crng init done Jul 12 09:31:10.780580 kernel: secureboot: Secure boot disabled Jul 12 09:31:10.780586 kernel: ACPI: Early table checksum verification disabled Jul 12 09:31:10.780592 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Jul 12 09:31:10.780599 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 12 09:31:10.780605 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 09:31:10.780610 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 09:31:10.780616 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 09:31:10.780622 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 09:31:10.780629 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 09:31:10.780636 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 09:31:10.780643 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 09:31:10.780649 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 09:31:10.780655 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 12 09:31:10.780661 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 12 09:31:10.780667 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 12 09:31:10.780674 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 09:31:10.780681 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Jul 12 09:31:10.780687 kernel: Zone ranges: Jul 12 09:31:10.780693 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 09:31:10.780711 kernel: DMA32 empty Jul 12 09:31:10.780718 kernel: Normal empty Jul 12 09:31:10.780724 kernel: Device empty Jul 12 09:31:10.780730 kernel: Movable zone start for each node Jul 12 09:31:10.780737 kernel: Early memory node ranges Jul 12 09:31:10.780743 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Jul 12 09:31:10.780749 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Jul 12 09:31:10.780756 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Jul 12 09:31:10.780762 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Jul 12 09:31:10.780768 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Jul 12 09:31:10.780774 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Jul 12 09:31:10.780780 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Jul 12 09:31:10.780788 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Jul 12 09:31:10.780794 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Jul 12 09:31:10.780800 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 12 09:31:10.780808 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 12 09:31:10.780815 
kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 12 09:31:10.780821 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 12 09:31:10.780829 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 12 09:31:10.780836 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 12 09:31:10.780842 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Jul 12 09:31:10.780848 kernel: psci: probing for conduit method from ACPI. Jul 12 09:31:10.780855 kernel: psci: PSCIv1.1 detected in firmware. Jul 12 09:31:10.780875 kernel: psci: Using standard PSCI v0.2 function IDs Jul 12 09:31:10.780882 kernel: psci: Trusted OS migration not required Jul 12 09:31:10.780888 kernel: psci: SMC Calling Convention v1.1 Jul 12 09:31:10.780895 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 12 09:31:10.780902 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 12 09:31:10.780910 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 12 09:31:10.780940 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 12 09:31:10.780946 kernel: Detected PIPT I-cache on CPU0 Jul 12 09:31:10.780953 kernel: CPU features: detected: GIC system register CPU interface Jul 12 09:31:10.780959 kernel: CPU features: detected: Spectre-v4 Jul 12 09:31:10.780966 kernel: CPU features: detected: Spectre-BHB Jul 12 09:31:10.780972 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 12 09:31:10.780979 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 12 09:31:10.780985 kernel: CPU features: detected: ARM erratum 1418040 Jul 12 09:31:10.780991 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 12 09:31:10.780998 kernel: alternatives: applying boot alternatives Jul 12 09:31:10.781006 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2eed6122ab9d95fa96c8f5511b96c1220a0caf18bbf7b84035ef573d9ba90496 Jul 12 09:31:10.781015 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 12 09:31:10.781022 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 12 09:31:10.781029 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 12 09:31:10.781035 kernel: Fallback order for Node 0: 0 Jul 12 09:31:10.781041 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Jul 12 09:31:10.781047 kernel: Policy zone: DMA Jul 12 09:31:10.781054 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 12 09:31:10.781060 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Jul 12 09:31:10.781066 kernel: software IO TLB: area num 4. Jul 12 09:31:10.781072 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Jul 12 09:31:10.781079 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Jul 12 09:31:10.781086 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 12 09:31:10.781093 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 12 09:31:10.781100 kernel: rcu: RCU event tracing is enabled. Jul 12 09:31:10.781106 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. 
Jul 12 09:31:10.781113 kernel: Trampoline variant of Tasks RCU enabled. Jul 12 09:31:10.781119 kernel: Tracing variant of Tasks RCU enabled. Jul 12 09:31:10.781126 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 12 09:31:10.781132 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 12 09:31:10.781138 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 12 09:31:10.781145 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 12 09:31:10.781151 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 12 09:31:10.781159 kernel: GICv3: 256 SPIs implemented Jul 12 09:31:10.781165 kernel: GICv3: 0 Extended SPIs implemented Jul 12 09:31:10.781171 kernel: Root IRQ handler: gic_handle_irq Jul 12 09:31:10.781177 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 12 09:31:10.781184 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 12 09:31:10.781190 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 12 09:31:10.781196 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 12 09:31:10.781203 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Jul 12 09:31:10.781209 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Jul 12 09:31:10.781216 kernel: GICv3: using LPI property table @0x0000000040130000 Jul 12 09:31:10.781223 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Jul 12 09:31:10.781229 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 12 09:31:10.781236 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 09:31:10.781243 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 12 09:31:10.781249 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 12 09:31:10.781256 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 12 09:31:10.781262 kernel: arm-pv: using stolen time PV Jul 12 09:31:10.781269 kernel: Console: colour dummy device 80x25 Jul 12 09:31:10.781275 kernel: ACPI: Core revision 20240827 Jul 12 09:31:10.781282 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 12 09:31:10.781289 kernel: pid_max: default: 32768 minimum: 301 Jul 12 09:31:10.781295 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 12 09:31:10.781303 kernel: landlock: Up and running. Jul 12 09:31:10.781309 kernel: SELinux: Initializing. Jul 12 09:31:10.781316 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 09:31:10.781322 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 12 09:31:10.781329 kernel: rcu: Hierarchical SRCU implementation. Jul 12 09:31:10.781335 kernel: rcu: Max phase no-delay instances is 400. Jul 12 09:31:10.781342 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 12 09:31:10.781348 kernel: Remapping and enabling EFI services. Jul 12 09:31:10.781355 kernel: smp: Bringing up secondary CPUs ... 
Jul 12 09:31:10.781367 kernel: Detected PIPT I-cache on CPU1 Jul 12 09:31:10.781374 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 12 09:31:10.781381 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Jul 12 09:31:10.781389 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 09:31:10.781396 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 12 09:31:10.781403 kernel: Detected PIPT I-cache on CPU2 Jul 12 09:31:10.781410 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 12 09:31:10.781417 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Jul 12 09:31:10.781425 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 09:31:10.781432 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 12 09:31:10.781439 kernel: Detected PIPT I-cache on CPU3 Jul 12 09:31:10.781446 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 12 09:31:10.781453 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Jul 12 09:31:10.781460 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 12 09:31:10.781466 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 12 09:31:10.781473 kernel: smp: Brought up 1 node, 4 CPUs Jul 12 09:31:10.781480 kernel: SMP: Total of 4 processors activated. Jul 12 09:31:10.781488 kernel: CPU: All CPU(s) started at EL1 Jul 12 09:31:10.781495 kernel: CPU features: detected: 32-bit EL0 Support Jul 12 09:31:10.781502 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 12 09:31:10.781509 kernel: CPU features: detected: Common not Private translations Jul 12 09:31:10.781515 kernel: CPU features: detected: CRC32 instructions Jul 12 09:31:10.781522 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 12 09:31:10.781529 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 12 09:31:10.781536 kernel: CPU features: detected: LSE atomic instructions Jul 12 09:31:10.781543 kernel: CPU features: detected: Privileged Access Never Jul 12 09:31:10.781551 kernel: CPU features: detected: RAS Extension Support Jul 12 09:31:10.781558 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 12 09:31:10.781565 kernel: alternatives: applying system-wide alternatives Jul 12 09:31:10.781572 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jul 12 09:31:10.781579 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved) Jul 12 09:31:10.781586 kernel: devtmpfs: initialized Jul 12 09:31:10.781593 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 12 09:31:10.781600 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 12 09:31:10.781607 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 12 09:31:10.781614 kernel: 0 pages in range for non-PLT usage Jul 12 09:31:10.781621 kernel: 508448 pages in range for PLT usage Jul 12 09:31:10.781628 kernel: pinctrl core: initialized pinctrl subsystem Jul 12 09:31:10.781635 kernel: SMBIOS 3.0.0 present. 
Jul 12 09:31:10.781642 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jul 12 09:31:10.781648 kernel: DMI: Memory slots populated: 1/1 Jul 12 09:31:10.781655 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 12 09:31:10.781662 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 12 09:31:10.781669 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 12 09:31:10.781676 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 12 09:31:10.781684 kernel: audit: initializing netlink subsys (disabled) Jul 12 09:31:10.781691 kernel: audit: type=2000 audit(0.020:1): state=initialized audit_enabled=0 res=1 Jul 12 09:31:10.781697 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 12 09:31:10.781710 kernel: cpuidle: using governor menu Jul 12 09:31:10.781717 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 12 09:31:10.781724 kernel: ASID allocator initialised with 32768 entries Jul 12 09:31:10.781731 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 12 09:31:10.781738 kernel: Serial: AMBA PL011 UART driver Jul 12 09:31:10.781744 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 12 09:31:10.781759 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 12 09:31:10.781766 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 12 09:31:10.781773 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 12 09:31:10.781780 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 12 09:31:10.781787 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 12 09:31:10.781794 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 12 09:31:10.781801 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 12 09:31:10.781807 kernel: ACPI: Added _OSI(Module Device) Jul 12 09:31:10.781814 kernel: ACPI: Added _OSI(Processor Device) Jul 12 09:31:10.781822 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 12 09:31:10.781829 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 12 09:31:10.781836 kernel: ACPI: Interpreter enabled Jul 12 09:31:10.781843 kernel: ACPI: Using GIC for interrupt routing Jul 12 09:31:10.781849 kernel: ACPI: MCFG table detected, 1 entries Jul 12 09:31:10.781856 kernel: ACPI: CPU0 has been hot-added Jul 12 09:31:10.781863 kernel: ACPI: CPU1 has been hot-added Jul 12 09:31:10.781869 kernel: ACPI: CPU2 has been hot-added Jul 12 09:31:10.781876 kernel: ACPI: CPU3 has been hot-added Jul 12 09:31:10.781885 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 12 09:31:10.781891 kernel: printk: legacy console [ttyAMA0] enabled Jul 12 09:31:10.781898 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 12 09:31:10.782069 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 12 09:31:10.782136 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 12 09:31:10.782196 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 12 09:31:10.782253 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 12 09:31:10.782313 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 12 09:31:10.782322 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 12 09:31:10.782329 
kernel: PCI host bridge to bus 0000:00 Jul 12 09:31:10.782390 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 12 09:31:10.782445 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 12 09:31:10.782497 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 12 09:31:10.782548 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 12 09:31:10.782623 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jul 12 09:31:10.782693 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 12 09:31:10.782768 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Jul 12 09:31:10.782829 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Jul 12 09:31:10.782889 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jul 12 09:31:10.782980 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jul 12 09:31:10.783045 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Jul 12 09:31:10.783108 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Jul 12 09:31:10.783162 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 12 09:31:10.783214 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 12 09:31:10.783267 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 12 09:31:10.783276 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 12 09:31:10.783283 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 12 09:31:10.783290 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 12 09:31:10.783299 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 12 09:31:10.783306 kernel: iommu: Default domain type: Translated Jul 12 09:31:10.783312 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 12 09:31:10.783319 kernel: efivars: Registered efivars operations Jul 12 09:31:10.783326 kernel: vgaarb: loaded Jul 12 09:31:10.783333 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 12 09:31:10.783340 kernel: VFS: Disk quotas dquot_6.6.0 Jul 12 09:31:10.783347 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 12 09:31:10.783353 kernel: pnp: PnP ACPI init Jul 12 09:31:10.783418 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 12 09:31:10.783428 kernel: pnp: PnP ACPI: found 1 devices Jul 12 09:31:10.783436 kernel: NET: Registered PF_INET protocol family Jul 12 09:31:10.783442 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 12 09:31:10.783449 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 12 09:31:10.783456 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 12 09:31:10.783463 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 12 09:31:10.783470 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 12 09:31:10.783477 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 12 09:31:10.783485 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 09:31:10.783492 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 12 09:31:10.783499 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 12 09:31:10.783506 kernel: PCI: CLS 0 bytes, default 64 Jul 12 09:31:10.783513 
kernel: kvm [1]: HYP mode not available Jul 12 09:31:10.783519 kernel: Initialise system trusted keyrings Jul 12 09:31:10.783526 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 12 09:31:10.783533 kernel: Key type asymmetric registered Jul 12 09:31:10.783539 kernel: Asymmetric key parser 'x509' registered Jul 12 09:31:10.783548 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 12 09:31:10.783555 kernel: io scheduler mq-deadline registered Jul 12 09:31:10.783562 kernel: io scheduler kyber registered Jul 12 09:31:10.783568 kernel: io scheduler bfq registered Jul 12 09:31:10.783575 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 12 09:31:10.783582 kernel: ACPI: button: Power Button [PWRB] Jul 12 09:31:10.783589 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 12 09:31:10.783648 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 12 09:31:10.783657 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 12 09:31:10.783666 kernel: thunder_xcv, ver 1.0 Jul 12 09:31:10.783672 kernel: thunder_bgx, ver 1.0 Jul 12 09:31:10.783679 kernel: nicpf, ver 1.0 Jul 12 09:31:10.783686 kernel: nicvf, ver 1.0 Jul 12 09:31:10.783768 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 12 09:31:10.783826 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-12T09:31:10 UTC (1752312670) Jul 12 09:31:10.783835 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 12 09:31:10.783842 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 12 09:31:10.783851 kernel: watchdog: NMI not fully supported Jul 12 09:31:10.783858 kernel: watchdog: Hard watchdog permanently disabled Jul 12 09:31:10.783865 kernel: NET: Registered PF_INET6 protocol family Jul 12 09:31:10.783871 kernel: Segment Routing with IPv6 Jul 12 09:31:10.783878 kernel: In-situ OAM (IOAM) with IPv6 Jul 12 09:31:10.783885 kernel: NET: Registered PF_PACKET protocol family Jul 12 09:31:10.783892 kernel: Key type dns_resolver registered Jul 12 09:31:10.783899 kernel: registered taskstats version 1 Jul 12 09:31:10.783906 kernel: Loading compiled-in X.509 certificates Jul 12 09:31:10.783925 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: 5833903fd926e330df1283c2ccd9d99e7cfa4219' Jul 12 09:31:10.783932 kernel: Demotion targets for Node 0: null Jul 12 09:31:10.783939 kernel: Key type .fscrypt registered Jul 12 09:31:10.783946 kernel: Key type fscrypt-provisioning registered Jul 12 09:31:10.783953 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 12 09:31:10.783960 kernel: ima: Allocated hash algorithm: sha1 Jul 12 09:31:10.783966 kernel: ima: No architecture policies found Jul 12 09:31:10.783973 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 12 09:31:10.783980 kernel: clk: Disabling unused clocks Jul 12 09:31:10.783989 kernel: PM: genpd: Disabling unused power domains Jul 12 09:31:10.783995 kernel: Warning: unable to open an initial console. Jul 12 09:31:10.784003 kernel: Freeing unused kernel memory: 39424K Jul 12 09:31:10.784009 kernel: Run /init as init process Jul 12 09:31:10.784016 kernel: with arguments: Jul 12 09:31:10.784023 kernel: /init Jul 12 09:31:10.784030 kernel: with environment: Jul 12 09:31:10.784036 kernel: HOME=/ Jul 12 09:31:10.784043 kernel: TERM=linux Jul 12 09:31:10.784051 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 12 09:31:10.784059 systemd[1]: Successfully made /usr/ read-only. 
Jul 12 09:31:10.784069 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 12 09:31:10.784076 systemd[1]: Detected virtualization kvm. Jul 12 09:31:10.784084 systemd[1]: Detected architecture arm64. Jul 12 09:31:10.784091 systemd[1]: Running in initrd. Jul 12 09:31:10.784098 systemd[1]: No hostname configured, using default hostname. Jul 12 09:31:10.784107 systemd[1]: Hostname set to . Jul 12 09:31:10.784114 systemd[1]: Initializing machine ID from VM UUID. Jul 12 09:31:10.784121 systemd[1]: Queued start job for default target initrd.target. Jul 12 09:31:10.784128 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 09:31:10.784136 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 09:31:10.784144 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 12 09:31:10.784151 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 09:31:10.784159 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 12 09:31:10.784168 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 12 09:31:10.784176 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 12 09:31:10.784184 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 12 09:31:10.784192 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 09:31:10.784199 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 09:31:10.784206 systemd[1]: Reached target paths.target - Path Units. Jul 12 09:31:10.784214 systemd[1]: Reached target slices.target - Slice Units. Jul 12 09:31:10.784222 systemd[1]: Reached target swap.target - Swaps. Jul 12 09:31:10.784229 systemd[1]: Reached target timers.target - Timer Units. Jul 12 09:31:10.784236 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 09:31:10.784244 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 09:31:10.784251 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 12 09:31:10.784258 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 12 09:31:10.784266 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 12 09:31:10.784273 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 09:31:10.784281 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 09:31:10.784289 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 09:31:10.784296 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 12 09:31:10.784304 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 09:31:10.784311 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jul 12 09:31:10.784319 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 12 09:31:10.784326 systemd[1]: Starting systemd-fsck-usr.service... Jul 12 09:31:10.784333 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 09:31:10.784341 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 09:31:10.784349 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 09:31:10.784356 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 12 09:31:10.784364 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 09:31:10.784372 systemd[1]: Finished systemd-fsck-usr.service. Jul 12 09:31:10.784380 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 12 09:31:10.784402 systemd-journald[245]: Collecting audit messages is disabled. Jul 12 09:31:10.784420 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 09:31:10.784429 systemd-journald[245]: Journal started Jul 12 09:31:10.784447 systemd-journald[245]: Runtime Journal (/run/log/journal/79966e0c01fd4cb7ba6668f56de5b47d) is 6M, max 48.5M, 42.4M free. Jul 12 09:31:10.775460 systemd-modules-load[246]: Inserted module 'overlay' Jul 12 09:31:10.786948 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 12 09:31:10.788183 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 09:31:10.790554 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 12 09:31:10.793023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 09:31:10.797513 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 12 09:31:10.797534 kernel: Bridge firewalling registered Jul 12 09:31:10.796690 systemd-modules-load[246]: Inserted module 'br_netfilter' Jul 12 09:31:10.802525 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 09:31:10.803539 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 09:31:10.805939 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 09:31:10.812329 systemd-tmpfiles[269]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 12 09:31:10.812408 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 09:31:10.816117 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 09:31:10.820254 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 12 09:31:10.821290 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 09:31:10.823391 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 12 09:31:10.825156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jul 12 09:31:10.847882 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=2eed6122ab9d95fa96c8f5511b96c1220a0caf18bbf7b84035ef573d9ba90496 Jul 12 09:31:10.863658 systemd-resolved[291]: Positive Trust Anchors: Jul 12 09:31:10.863676 systemd-resolved[291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 09:31:10.863715 systemd-resolved[291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 09:31:10.868387 systemd-resolved[291]: Defaulting to hostname 'linux'. Jul 12 09:31:10.869282 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 09:31:10.870399 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 09:31:10.916941 kernel: SCSI subsystem initialized Jul 12 09:31:10.921926 kernel: Loading iSCSI transport class v2.0-870. Jul 12 09:31:10.928930 kernel: iscsi: registered transport (tcp) Jul 12 09:31:10.942948 kernel: iscsi: registered transport (qla4xxx) Jul 12 09:31:10.942962 kernel: QLogic iSCSI HBA Driver Jul 12 09:31:10.958303 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 09:31:10.977991 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 09:31:10.980615 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 09:31:11.022267 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 12 09:31:11.024163 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 12 09:31:11.088949 kernel: raid6: neonx8 gen() 15791 MB/s Jul 12 09:31:11.105944 kernel: raid6: neonx4 gen() 15804 MB/s Jul 12 09:31:11.122932 kernel: raid6: neonx2 gen() 13205 MB/s Jul 12 09:31:11.139938 kernel: raid6: neonx1 gen() 10480 MB/s Jul 12 09:31:11.156939 kernel: raid6: int64x8 gen() 6908 MB/s Jul 12 09:31:11.173939 kernel: raid6: int64x4 gen() 7365 MB/s Jul 12 09:31:11.190937 kernel: raid6: int64x2 gen() 6105 MB/s Jul 12 09:31:11.207938 kernel: raid6: int64x1 gen() 5053 MB/s Jul 12 09:31:11.207968 kernel: raid6: using algorithm neonx4 gen() 15804 MB/s Jul 12 09:31:11.224944 kernel: raid6: .... xor() 12336 MB/s, rmw enabled Jul 12 09:31:11.224968 kernel: raid6: using neon recovery algorithm Jul 12 09:31:11.230260 kernel: xor: measuring software checksum speed Jul 12 09:31:11.230277 kernel: 8regs : 21664 MB/sec Jul 12 09:31:11.230290 kernel: 32regs : 21699 MB/sec Jul 12 09:31:11.231164 kernel: arm64_neon : 28041 MB/sec Jul 12 09:31:11.231179 kernel: xor: using function: arm64_neon (28041 MB/sec) Jul 12 09:31:11.282948 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 12 09:31:11.288585 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 12 09:31:11.290789 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 09:31:11.318716 systemd-udevd[499]: Using default interface naming scheme 'v255'. Jul 12 09:31:11.322729 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 09:31:11.324261 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 12 09:31:11.348790 dracut-pre-trigger[508]: rd.md=0: removing MD RAID activation Jul 12 09:31:11.369559 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 09:31:11.371485 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 09:31:11.421763 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 09:31:11.423650 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 12 09:31:11.477095 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 12 09:31:11.477231 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 12 09:31:11.479935 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 09:31:11.486033 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 12 09:31:11.486052 kernel: GPT:9289727 != 19775487 Jul 12 09:31:11.486062 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 12 09:31:11.486070 kernel: GPT:9289727 != 19775487 Jul 12 09:31:11.480007 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 09:31:11.488402 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 12 09:31:11.488420 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 09:31:11.488400 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 09:31:11.489975 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 09:31:11.517672 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 09:31:11.529746 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 12 09:31:11.530930 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 12 09:31:11.538460 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 12 09:31:11.546168 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 09:31:11.552003 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 12 09:31:11.552826 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 12 09:31:11.555007 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 09:31:11.556530 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 09:31:11.557957 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 09:31:11.559996 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 12 09:31:11.561408 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 12 09:31:11.581440 disk-uuid[592]: Primary Header is updated. Jul 12 09:31:11.581440 disk-uuid[592]: Secondary Entries is updated. Jul 12 09:31:11.581440 disk-uuid[592]: Secondary Header is updated. 
Jul 12 09:31:11.585312 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 09:31:11.586246 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 12 09:31:12.594922 disk-uuid[595]: The operation has completed successfully. Jul 12 09:31:12.595784 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 12 09:31:12.620864 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 12 09:31:12.620990 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 12 09:31:12.644524 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 12 09:31:12.655691 sh[611]: Success Jul 12 09:31:12.670417 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 12 09:31:12.670458 kernel: device-mapper: uevent: version 1.0.3 Jul 12 09:31:12.671622 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 12 09:31:12.678936 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 12 09:31:12.701673 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 12 09:31:12.704102 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 12 09:31:12.720429 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 12 09:31:12.726315 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 12 09:31:12.726355 kernel: BTRFS: device fsid 61a6979b-5b23-4687-8775-cb04acb91b0a devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (623) Jul 12 09:31:12.727358 kernel: BTRFS info (device dm-0): first mount of filesystem 61a6979b-5b23-4687-8775-cb04acb91b0a Jul 12 09:31:12.727385 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 12 09:31:12.729726 kernel: BTRFS info (device dm-0): using free-space-tree Jul 12 09:31:12.735003 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 12 09:31:12.736000 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 12 09:31:12.736969 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 12 09:31:12.737677 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 12 09:31:12.740265 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 12 09:31:12.767414 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (655) Jul 12 09:31:12.767459 kernel: BTRFS info (device vda6): first mount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:31:12.767474 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 09:31:12.768926 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 09:31:12.775394 kernel: BTRFS info (device vda6): last unmount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:31:12.776870 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 12 09:31:12.778447 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 12 09:31:12.852584 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 09:31:12.855624 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 12 09:31:12.897490 systemd-networkd[804]: lo: Link UP Jul 12 09:31:12.897503 systemd-networkd[804]: lo: Gained carrier Jul 12 09:31:12.898287 systemd-networkd[804]: Enumeration completed Jul 12 09:31:12.898769 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 09:31:12.898772 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 09:31:12.899396 systemd-networkd[804]: eth0: Link UP Jul 12 09:31:12.899399 systemd-networkd[804]: eth0: Gained carrier Jul 12 09:31:12.899407 systemd-networkd[804]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 09:31:12.901353 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 09:31:12.902220 systemd[1]: Reached target network.target - Network. Jul 12 09:31:12.916127 ignition[695]: Ignition 2.21.0 Jul 12 09:31:12.916139 ignition[695]: Stage: fetch-offline Jul 12 09:31:12.916168 ignition[695]: no configs at "/usr/lib/ignition/base.d" Jul 12 09:31:12.916176 ignition[695]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:31:12.916357 ignition[695]: parsed url from cmdline: "" Jul 12 09:31:12.916360 ignition[695]: no config URL provided Jul 12 09:31:12.916365 ignition[695]: reading system config file "/usr/lib/ignition/user.ign" Jul 12 09:31:12.918993 systemd-networkd[804]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 09:31:12.916371 ignition[695]: no config at "/usr/lib/ignition/user.ign" Jul 12 09:31:12.916391 ignition[695]: op(1): [started] loading QEMU firmware config module Jul 12 09:31:12.916395 ignition[695]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 12 09:31:12.923400 ignition[695]: op(1): [finished] loading QEMU firmware config module Jul 12 09:31:12.923424 ignition[695]: QEMU firmware config was not found. Ignoring... Jul 12 09:31:12.959520 ignition[695]: parsing config with SHA512: c433af93f52515152bfdb09ffcab251c0aff06546d4efde463a6c7fc1da82e9b24d00ae5b6fdd722f1bbcdda307706e92bfdb1539fa8da615dc94402600132f8 Jul 12 09:31:12.963834 unknown[695]: fetched base config from "system" Jul 12 09:31:12.963844 unknown[695]: fetched user config from "qemu" Jul 12 09:31:12.964269 ignition[695]: fetch-offline: fetch-offline passed Jul 12 09:31:12.964325 ignition[695]: Ignition finished successfully Jul 12 09:31:12.965938 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 09:31:12.967260 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 12 09:31:12.968069 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 12 09:31:12.996799 ignition[811]: Ignition 2.21.0 Jul 12 09:31:12.996814 ignition[811]: Stage: kargs Jul 12 09:31:12.996953 ignition[811]: no configs at "/usr/lib/ignition/base.d" Jul 12 09:31:12.996962 ignition[811]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:31:12.998457 ignition[811]: kargs: kargs passed Jul 12 09:31:12.998507 ignition[811]: Ignition finished successfully Jul 12 09:31:13.001395 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 12 09:31:13.004028 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 12 09:31:13.036141 ignition[819]: Ignition 2.21.0 Jul 12 09:31:13.036157 ignition[819]: Stage: disks Jul 12 09:31:13.036285 ignition[819]: no configs at "/usr/lib/ignition/base.d" Jul 12 09:31:13.036294 ignition[819]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:31:13.037624 ignition[819]: disks: disks passed Jul 12 09:31:13.037675 ignition[819]: Ignition finished successfully Jul 12 09:31:13.041562 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 12 09:31:13.042457 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 12 09:31:13.043640 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 12 09:31:13.045049 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 09:31:13.046372 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 09:31:13.047597 systemd[1]: Reached target basic.target - Basic System. Jul 12 09:31:13.049559 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 12 09:31:13.073073 systemd-fsck[829]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 12 09:31:13.077156 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 12 09:31:13.078971 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 12 09:31:13.153944 kernel: EXT4-fs (vda9): mounted filesystem 016d0f7f-22a0-4255-85cc-97a6d773acb9 r/w with ordered data mode. Quota mode: none. Jul 12 09:31:13.153941 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 12 09:31:13.154891 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 12 09:31:13.157259 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 09:31:13.159188 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 12 09:31:13.160645 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 12 09:31:13.161903 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 12 09:31:13.161938 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 09:31:13.172199 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 12 09:31:13.174285 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 12 09:31:13.177673 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (838) Jul 12 09:31:13.177706 kernel: BTRFS info (device vda6): first mount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:31:13.178442 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 09:31:13.178470 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 09:31:13.181127 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 09:31:13.214928 initrd-setup-root[862]: cut: /sysroot/etc/passwd: No such file or directory Jul 12 09:31:13.218956 initrd-setup-root[869]: cut: /sysroot/etc/group: No such file or directory Jul 12 09:31:13.222410 initrd-setup-root[876]: cut: /sysroot/etc/shadow: No such file or directory Jul 12 09:31:13.226208 initrd-setup-root[883]: cut: /sysroot/etc/gshadow: No such file or directory Jul 12 09:31:13.298808 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 12 09:31:13.300777 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 12 09:31:13.302091 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 12 09:31:13.319937 kernel: BTRFS info (device vda6): last unmount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:31:13.330028 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 12 09:31:13.342217 ignition[951]: INFO : Ignition 2.21.0 Jul 12 09:31:13.342217 ignition[951]: INFO : Stage: mount Jul 12 09:31:13.343405 ignition[951]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 09:31:13.343405 ignition[951]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:31:13.345952 ignition[951]: INFO : mount: mount passed Jul 12 09:31:13.345952 ignition[951]: INFO : Ignition finished successfully Jul 12 09:31:13.347114 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 12 09:31:13.349351 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 12 09:31:13.873975 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 12 09:31:13.875528 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 12 09:31:13.906372 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (964) Jul 12 09:31:13.906410 kernel: BTRFS info (device vda6): first mount of filesystem e5a719e8-42e4-4055-8ce0-9ce9f50475f2 Jul 12 09:31:13.906421 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 12 09:31:13.907060 kernel: BTRFS info (device vda6): using free-space-tree Jul 12 09:31:13.910080 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 12 09:31:13.940925 ignition[981]: INFO : Ignition 2.21.0 Jul 12 09:31:13.940925 ignition[981]: INFO : Stage: files Jul 12 09:31:13.943302 ignition[981]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 09:31:13.943302 ignition[981]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:31:13.944807 ignition[981]: DEBUG : files: compiled without relabeling support, skipping Jul 12 09:31:13.945775 ignition[981]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 12 09:31:13.945775 ignition[981]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 12 09:31:13.948123 ignition[981]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 12 09:31:13.948123 ignition[981]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 12 09:31:13.948123 ignition[981]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 12 09:31:13.947611 unknown[981]: wrote ssh authorized keys file for user: core Jul 12 09:31:13.951901 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 12 09:31:13.951901 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jul 12 09:31:13.997484 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 12 09:31:14.338447 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jul 12 09:31:14.338447 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 12 09:31:14.341299 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 12 09:31:14.561450 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 12 09:31:14.595085 systemd-networkd[804]: eth0: Gained IPv6LL Jul 12 09:31:14.682750 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 12 09:31:14.684117 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 12 09:31:14.684117 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 12 09:31:14.684117 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 12 09:31:14.684117 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 12 09:31:14.684117 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 09:31:14.684117 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 12 09:31:14.684117 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 09:31:14.684117 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 12 09:31:14.704024 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 09:31:14.705450 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 12 09:31:14.705450 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 12 09:31:14.735221 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 12 09:31:14.735221 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 12 09:31:14.738372 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jul 12 09:31:15.164453 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 12 09:31:15.739451 ignition[981]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jul 12 09:31:15.739451 ignition[981]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 12 09:31:15.742347 ignition[981]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 09:31:15.743729 ignition[981]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 12 09:31:15.743729 ignition[981]: INFO : 
files: op(c): [finished] processing unit "prepare-helm.service" Jul 12 09:31:15.743729 ignition[981]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 12 09:31:15.743729 ignition[981]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 09:31:15.743729 ignition[981]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 12 09:31:15.743729 ignition[981]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 12 09:31:15.743729 ignition[981]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 12 09:31:15.761592 ignition[981]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 09:31:15.765788 ignition[981]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 12 09:31:15.766919 ignition[981]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 12 09:31:15.766919 ignition[981]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 12 09:31:15.766919 ignition[981]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 12 09:31:15.766919 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 12 09:31:15.766919 ignition[981]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 12 09:31:15.766919 ignition[981]: INFO : files: files passed Jul 12 09:31:15.766919 ignition[981]: INFO : Ignition finished successfully Jul 12 09:31:15.770640 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 12 09:31:15.775028 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 12 09:31:15.777011 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 12 09:31:15.795763 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 12 09:31:15.795852 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 12 09:31:15.799147 initrd-setup-root-after-ignition[1010]: grep: /sysroot/oem/oem-release: No such file or directory Jul 12 09:31:15.802298 initrd-setup-root-after-ignition[1012]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 09:31:15.802298 initrd-setup-root-after-ignition[1012]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 12 09:31:15.804702 initrd-setup-root-after-ignition[1016]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 12 09:31:15.805563 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 09:31:15.807267 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 12 09:31:15.809007 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 12 09:31:15.866664 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 12 09:31:15.866797 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 12 09:31:15.869226 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jul 12 09:31:15.870475 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 12 09:31:15.871721 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 12 09:31:15.872364 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 12 09:31:15.905852 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 09:31:15.908109 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 12 09:31:15.930335 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 12 09:31:15.931247 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 09:31:15.932762 systemd[1]: Stopped target timers.target - Timer Units. Jul 12 09:31:15.934111 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 12 09:31:15.934220 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 12 09:31:15.936024 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 12 09:31:15.937472 systemd[1]: Stopped target basic.target - Basic System. Jul 12 09:31:15.938637 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 12 09:31:15.939839 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 12 09:31:15.941343 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 12 09:31:15.942733 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 12 09:31:15.944166 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 12 09:31:15.945461 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 12 09:31:15.946805 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 12 09:31:15.948367 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 12 09:31:15.949601 systemd[1]: Stopped target swap.target - Swaps. Jul 12 09:31:15.950661 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 12 09:31:15.950775 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 12 09:31:15.952437 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 12 09:31:15.953773 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 09:31:15.955121 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 12 09:31:15.955193 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 09:31:15.956682 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 12 09:31:15.956793 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 12 09:31:15.958835 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 12 09:31:15.958960 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 12 09:31:15.960478 systemd[1]: Stopped target paths.target - Path Units. Jul 12 09:31:15.961568 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 12 09:31:15.964959 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 09:31:15.965886 systemd[1]: Stopped target slices.target - Slice Units. Jul 12 09:31:15.967405 systemd[1]: Stopped target sockets.target - Socket Units. Jul 12 09:31:15.968551 systemd[1]: iscsid.socket: Deactivated successfully. 
Jul 12 09:31:15.968630 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 12 09:31:15.969700 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 12 09:31:15.969775 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 12 09:31:15.970834 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 12 09:31:15.970951 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 12 09:31:15.972282 systemd[1]: ignition-files.service: Deactivated successfully. Jul 12 09:31:15.972386 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 12 09:31:15.975065 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 12 09:31:15.979560 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 12 09:31:15.979701 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 09:31:15.981653 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 12 09:31:15.982677 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 12 09:31:15.982793 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 09:31:15.987818 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 12 09:31:15.987935 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 12 09:31:15.991955 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 12 09:31:15.999517 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 12 09:31:16.012049 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 12 09:31:16.013033 ignition[1037]: INFO : Ignition 2.21.0 Jul 12 09:31:16.013033 ignition[1037]: INFO : Stage: umount Jul 12 09:31:16.014267 ignition[1037]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 12 09:31:16.014267 ignition[1037]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 12 09:31:16.016430 ignition[1037]: INFO : umount: umount passed Jul 12 09:31:16.018330 ignition[1037]: INFO : Ignition finished successfully Jul 12 09:31:16.020254 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 12 09:31:16.020377 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 12 09:31:16.021882 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 12 09:31:16.023126 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 12 09:31:16.025131 systemd[1]: Stopped target network.target - Network. Jul 12 09:31:16.026515 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 12 09:31:16.027354 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 12 09:31:16.028797 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 12 09:31:16.028852 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 12 09:31:16.030282 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 12 09:31:16.030333 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 12 09:31:16.031145 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 12 09:31:16.031185 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 12 09:31:16.032391 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 12 09:31:16.032434 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 12 09:31:16.033814 systemd[1]: Stopping systemd-networkd.service - Network Configuration... 
Jul 12 09:31:16.035058 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 12 09:31:16.048517 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 12 09:31:16.051737 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 12 09:31:16.058983 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 12 09:31:16.059175 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 12 09:31:16.059281 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 12 09:31:16.061860 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 12 09:31:16.062427 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 12 09:31:16.063937 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 12 09:31:16.063975 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 12 09:31:16.068036 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 12 09:31:16.072655 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 12 09:31:16.072733 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 12 09:31:16.074637 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 09:31:16.074697 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 09:31:16.076861 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 12 09:31:16.076907 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 12 09:31:16.078286 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 12 09:31:16.078327 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 09:31:16.082804 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 09:31:16.086128 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 12 09:31:16.086190 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 12 09:31:16.096522 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 12 09:31:16.096645 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 12 09:31:16.098148 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 12 09:31:16.098258 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 09:31:16.099790 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 12 09:31:16.099851 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 12 09:31:16.100682 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 12 09:31:16.100720 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 09:31:16.102147 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 12 09:31:16.102188 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 12 09:31:16.104202 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 12 09:31:16.104245 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 12 09:31:16.106127 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 12 09:31:16.106175 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
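
Unit names such as run-credentials-systemd\x2dresolved.service.mount in the lines above are not corruption: systemd escapes mount paths into unit names, turning "/" into "-" and literal dashes into \x2d, so a mount on /run/credentials/systemd-resolved.service becomes that unit. A minimal sketch of the escaping rule (simplified; the real systemd-escape(1) logic also handles leading dots and other disallowed bytes):

```go
// Simplified sketch of systemd path-to-unit-name escaping, enough to explain
// names like run-credentials-systemd\x2dresolved.service.mount. Only '/' and
// '-' are handled here; see systemd.unit(5) for the full rules.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for _, c := range p {
		switch c {
		case '/':
			b.WriteByte('-') // path separators become '-'
		case '-':
			b.WriteString(`\x2d`) // literal dashes are hex-escaped
		default:
			b.WriteRune(c)
		}
	}
	return b.String()
}

func main() {
	// Prints: run-credentials-systemd\x2dresolved.service.mount
	fmt.Println(escapePath("/run/credentials/systemd-resolved.service") + ".mount")
}
```
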
Jul 12 09:31:16.108812 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 12 09:31:16.109601 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 12 09:31:16.109652 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 09:31:16.112650 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 12 09:31:16.112696 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 09:31:16.114942 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 12 09:31:16.114981 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 09:31:16.118227 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 12 09:31:16.118275 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 12 09:31:16.118305 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 12 09:31:16.122509 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 12 09:31:16.122588 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 12 09:31:16.123591 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 12 09:31:16.125426 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 12 09:31:16.144256 systemd[1]: Switching root. Jul 12 09:31:16.166535 systemd-journald[245]: Journal stopped Jul 12 09:31:16.924167 systemd-journald[245]: Received SIGTERM from PID 1 (systemd). Jul 12 09:31:16.924216 kernel: SELinux: policy capability network_peer_controls=1 Jul 12 09:31:16.924231 kernel: SELinux: policy capability open_perms=1 Jul 12 09:31:16.924242 kernel: SELinux: policy capability extended_socket_class=1 Jul 12 09:31:16.924251 kernel: SELinux: policy capability always_check_network=0 Jul 12 09:31:16.924263 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 12 09:31:16.924278 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 12 09:31:16.924288 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 12 09:31:16.924301 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 12 09:31:16.924310 kernel: SELinux: policy capability userspace_initial_context=0 Jul 12 09:31:16.924319 kernel: audit: type=1403 audit(1752312676.361:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 12 09:31:16.924330 systemd[1]: Successfully loaded SELinux policy in 64.127ms. Jul 12 09:31:16.924346 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.133ms. Jul 12 09:31:16.924357 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 12 09:31:16.924368 systemd[1]: Detected virtualization kvm. Jul 12 09:31:16.924378 systemd[1]: Detected architecture arm64. Jul 12 09:31:16.924388 systemd[1]: Detected first boot. Jul 12 09:31:16.924398 systemd[1]: Initializing machine ID from VM UUID. Jul 12 09:31:16.924409 zram_generator::config[1083]: No configuration found. 
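
"Initializing machine ID from VM UUID" means that on this first boot systemd derived /etc/machine-id from the hypervisor-provided UUID rather than generating a random one. A rough sketch of that idea, assuming the UUID is exposed through SMBIOS/DMI at /sys/class/dmi/id/product_uuid (which this QEMU/EDK II guest provides); the normalization below only approximates the machine-id format and is not systemd's exact code path:

```go
// Illustrative sketch: derive a machine-id-style string from the VM UUID
// exposed via SMBIOS. Assumes /sys/class/dmi/id/product_uuid exists; the
// real logic lives in systemd's sd-id128 and consults other sources too.
package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	raw, err := os.ReadFile("/sys/class/dmi/id/product_uuid")
	if err != nil {
		fmt.Fprintln(os.Stderr, "no DMI product UUID:", err)
		os.Exit(1)
	}
	uuid := strings.TrimSpace(string(raw))
	id := strings.ToLower(strings.ReplaceAll(uuid, "-", ""))
	fmt.Println(id) // a machine-id is 32 lowercase hex characters
}
```
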
Jul 12 09:31:16.924421 kernel: NET: Registered PF_VSOCK protocol family Jul 12 09:31:16.924430 systemd[1]: Populated /etc with preset unit settings. Jul 12 09:31:16.924441 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 12 09:31:16.924450 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 12 09:31:16.924460 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 12 09:31:16.924470 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 12 09:31:16.924480 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 12 09:31:16.924491 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 12 09:31:16.924505 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 12 09:31:16.924518 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 12 09:31:16.924528 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 12 09:31:16.924539 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 12 09:31:16.924548 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 12 09:31:16.924558 systemd[1]: Created slice user.slice - User and Session Slice. Jul 12 09:31:16.924569 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 12 09:31:16.924579 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 12 09:31:16.924589 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 12 09:31:16.924600 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 12 09:31:16.924611 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 12 09:31:16.924620 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 12 09:31:16.924630 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 12 09:31:16.924640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 12 09:31:16.924650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 12 09:31:16.924660 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 12 09:31:16.924670 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 12 09:31:16.924681 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 12 09:31:16.924699 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 12 09:31:16.924709 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 12 09:31:16.924721 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 12 09:31:16.924731 systemd[1]: Reached target slices.target - Slice Units. Jul 12 09:31:16.924740 systemd[1]: Reached target swap.target - Swaps. Jul 12 09:31:16.924750 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 12 09:31:16.924760 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 12 09:31:16.924769 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 12 09:31:16.924781 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. 
Jul 12 09:31:16.924791 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 12 09:31:16.924801 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 12 09:31:16.924810 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 12 09:31:16.924820 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 12 09:31:16.924830 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 12 09:31:16.924841 systemd[1]: Mounting media.mount - External Media Directory... Jul 12 09:31:16.924850 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 12 09:31:16.924860 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 12 09:31:16.924871 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 12 09:31:16.924881 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 12 09:31:16.924891 systemd[1]: Reached target machines.target - Containers. Jul 12 09:31:16.924901 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 12 09:31:16.924919 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 09:31:16.924931 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 12 09:31:16.924942 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 12 09:31:16.924952 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 09:31:16.924963 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 09:31:16.924975 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 09:31:16.924985 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 12 09:31:16.924994 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 09:31:16.925004 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 12 09:31:16.925014 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 12 09:31:16.925024 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 12 09:31:16.925034 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 12 09:31:16.925043 systemd[1]: Stopped systemd-fsck-usr.service. Jul 12 09:31:16.925056 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 09:31:16.925066 kernel: loop: module loaded Jul 12 09:31:16.925075 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 12 09:31:16.925085 kernel: fuse: init (API version 7.41) Jul 12 09:31:16.925094 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 12 09:31:16.925103 kernel: ACPI: bus type drm_connector registered Jul 12 09:31:16.925112 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 12 09:31:16.925122 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... 
Jul 12 09:31:16.925132 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 12 09:31:16.925144 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 12 09:31:16.925155 systemd[1]: verity-setup.service: Deactivated successfully. Jul 12 09:31:16.925164 systemd[1]: Stopped verity-setup.service. Jul 12 09:31:16.925174 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 12 09:31:16.925203 systemd-journald[1155]: Collecting audit messages is disabled. Jul 12 09:31:16.925226 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 12 09:31:16.925237 systemd-journald[1155]: Journal started Jul 12 09:31:16.925257 systemd-journald[1155]: Runtime Journal (/run/log/journal/79966e0c01fd4cb7ba6668f56de5b47d) is 6M, max 48.5M, 42.4M free. Jul 12 09:31:16.727124 systemd[1]: Queued start job for default target multi-user.target. Jul 12 09:31:16.749971 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 12 09:31:16.750376 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 12 09:31:16.926926 systemd[1]: Started systemd-journald.service - Journal Service. Jul 12 09:31:16.928148 systemd[1]: Mounted media.mount - External Media Directory. Jul 12 09:31:16.929022 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 12 09:31:16.930017 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 12 09:31:16.930971 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 12 09:31:16.933009 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 12 09:31:16.934985 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 12 09:31:16.936175 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 12 09:31:16.936350 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 12 09:31:16.937472 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 09:31:16.937644 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 09:31:16.938752 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 09:31:16.938900 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 09:31:16.939942 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 09:31:16.940104 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 09:31:16.941231 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 12 09:31:16.941405 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 12 09:31:16.942483 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 09:31:16.942639 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 09:31:16.943946 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 12 09:31:16.945098 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 12 09:31:16.946252 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 12 09:31:16.947655 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 12 09:31:16.959784 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 12 09:31:16.961864 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
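
The modprobe@ units finished above load configfs, dm_mod, drm, efi_pstore, fuse and loop (the "fuse: init", "loop: module loaded" and "bus type drm_connector registered" kernel lines confirm some of them). A small sketch that checks for such modules from userspace by scanning /proc/modules; note that modules built into the kernel will not appear there (they show up under /sys/module instead), so absence is not necessarily an error:

```go
// Sketch: report which of the modules named by the modprobe@ units above are
// present as loadable modules, by scanning /proc/modules.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	wanted := []string{"configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"}

	loaded := map[string]bool{}
	f, err := os.Open("/proc/modules")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		if fields := strings.Fields(sc.Text()); len(fields) > 0 {
			loaded[fields[0]] = true
		}
	}

	for _, m := range wanted {
		fmt.Printf("%-10s loaded=%v\n", m, loaded[m])
	}
}
```
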
Jul 12 09:31:16.963765 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 12 09:31:16.964702 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 12 09:31:16.964737 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 12 09:31:16.966346 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 12 09:31:16.973677 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 12 09:31:16.974827 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 09:31:16.975953 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 12 09:31:16.977571 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 12 09:31:16.978631 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 09:31:16.981120 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 12 09:31:16.981983 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 09:31:16.985045 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 09:31:16.987394 systemd-journald[1155]: Time spent on flushing to /var/log/journal/79966e0c01fd4cb7ba6668f56de5b47d is 21.633ms for 888 entries. Jul 12 09:31:16.987394 systemd-journald[1155]: System Journal (/var/log/journal/79966e0c01fd4cb7ba6668f56de5b47d) is 8M, max 195.6M, 187.6M free. Jul 12 09:31:17.017373 systemd-journald[1155]: Received client request to flush runtime journal. Jul 12 09:31:17.017422 kernel: loop0: detected capacity change from 0 to 211168 Jul 12 09:31:16.987255 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 12 09:31:16.994139 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 12 09:31:16.997668 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 12 09:31:16.998876 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 12 09:31:16.999971 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 12 09:31:17.007129 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 12 09:31:17.008230 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 12 09:31:17.010461 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 12 09:31:17.025426 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 12 09:31:17.030935 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 12 09:31:17.030934 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 09:31:17.039283 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 12 09:31:17.041957 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 12 09:31:17.044577 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 12 09:31:17.051063 kernel: loop1: detected capacity change from 0 to 134232 Jul 12 09:31:17.068178 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. 
Jul 12 09:31:17.068196 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Jul 12 09:31:17.071726 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 12 09:31:17.081956 kernel: loop2: detected capacity change from 0 to 105936 Jul 12 09:31:17.117142 kernel: loop3: detected capacity change from 0 to 211168 Jul 12 09:31:17.122939 kernel: loop4: detected capacity change from 0 to 134232 Jul 12 09:31:17.129932 kernel: loop5: detected capacity change from 0 to 105936 Jul 12 09:31:17.134932 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 12 09:31:17.135587 (sd-merge)[1224]: Merged extensions into '/usr'. Jul 12 09:31:17.141434 systemd[1]: Reload requested from client PID 1199 ('systemd-sysext') (unit systemd-sysext.service)... Jul 12 09:31:17.141455 systemd[1]: Reloading... Jul 12 09:31:17.217954 zram_generator::config[1256]: No configuration found. Jul 12 09:31:17.252993 ldconfig[1194]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 12 09:31:17.291525 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 09:31:17.355385 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 12 09:31:17.355639 systemd[1]: Reloading finished in 213 ms. Jul 12 09:31:17.387576 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 12 09:31:17.388848 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 12 09:31:17.404385 systemd[1]: Starting ensure-sysext.service... Jul 12 09:31:17.406056 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 12 09:31:17.425356 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 12 09:31:17.425387 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 12 09:31:17.425624 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 12 09:31:17.425813 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 12 09:31:17.426109 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Jul 12 09:31:17.426126 systemd[1]: Reloading... Jul 12 09:31:17.426434 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 12 09:31:17.426636 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 12 09:31:17.426694 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 12 09:31:17.429530 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 09:31:17.429545 systemd-tmpfiles[1286]: Skipping /boot Jul 12 09:31:17.435255 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 12 09:31:17.435272 systemd-tmpfiles[1286]: Skipping /boot Jul 12 09:31:17.467940 zram_generator::config[1313]: No configuration found. 
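
The (sd-merge) lines show systemd-sysext overlaying the 'containerd-flatcar', 'docker-flatcar' and 'kubernetes' extension images onto /usr, which is also why systemd then reloads on request of systemd-sysext. The kubernetes image is the one Ignition linked at /etc/extensions/kubernetes.raw earlier. A small sketch that lists what is configured under /etc/extensions, following symlinks like the one op(a) created; the paths are from this log and will differ on other hosts:

```go
// Sketch: enumerate sysext images configured under /etc/extensions, the
// directory systemd-sysext scans before merging images into /usr.
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	entries, err := os.ReadDir("/etc/extensions")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, e := range entries {
		full := filepath.Join("/etc/extensions", e.Name())
		target, err := os.Readlink(full)
		if err != nil {
			// a regular image file rather than a symlink
			fmt.Println(full)
			continue
		}
		fmt.Printf("%s -> %s\n", full, target) // e.g. kubernetes.raw -> /opt/extensions/kubernetes/...
	}
}
```
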
Jul 12 09:31:17.539354 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 09:31:17.603254 systemd[1]: Reloading finished in 176 ms. Jul 12 09:31:17.625385 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 12 09:31:17.630702 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 12 09:31:17.641077 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 09:31:17.646353 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 12 09:31:17.648263 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 12 09:31:17.651245 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 12 09:31:17.656087 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 12 09:31:17.661084 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 12 09:31:17.668727 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 09:31:17.670091 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 12 09:31:17.673224 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 12 09:31:17.677156 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 09:31:17.678230 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 09:31:17.678422 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 09:31:17.680465 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 12 09:31:17.689428 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 12 09:31:17.693860 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 09:31:17.694070 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 09:31:17.695287 systemd-udevd[1359]: Using default interface naming scheme 'v255'. Jul 12 09:31:17.696974 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 09:31:17.697234 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 09:31:17.704877 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 12 09:31:17.707190 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 12 09:31:17.709115 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 12 09:31:17.714595 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 12 09:31:17.717636 systemd[1]: Finished ensure-sysext.service. Jul 12 09:31:17.721370 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 12 09:31:17.723071 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 12 09:31:17.724387 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jul 12 09:31:17.728787 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 12 09:31:17.731381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 12 09:31:17.732334 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 12 09:31:17.732382 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 12 09:31:17.739670 augenrules[1404]: No rules Jul 12 09:31:17.749167 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 12 09:31:17.750971 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 12 09:31:17.753070 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 12 09:31:17.754873 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 12 09:31:17.756995 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 12 09:31:17.761132 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 12 09:31:17.762379 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 09:31:17.762574 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 09:31:17.764262 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 12 09:31:17.764425 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 12 09:31:17.765620 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 12 09:31:17.765817 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 12 09:31:17.766865 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 12 09:31:17.767054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 12 09:31:17.768149 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 12 09:31:17.779193 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 12 09:31:17.797749 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 12 09:31:17.847788 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 12 09:31:17.851642 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 12 09:31:17.889248 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 12 09:31:17.910576 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 12 09:31:17.911696 systemd[1]: Reached target time-set.target - System Time Set. Jul 12 09:31:17.922254 systemd-resolved[1353]: Positive Trust Anchors: Jul 12 09:31:17.922274 systemd-resolved[1353]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 12 09:31:17.922306 systemd-resolved[1353]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 12 09:31:17.930666 systemd-networkd[1423]: lo: Link UP Jul 12 09:31:17.930688 systemd-networkd[1423]: lo: Gained carrier Jul 12 09:31:17.931525 systemd-networkd[1423]: Enumeration completed Jul 12 09:31:17.931618 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 12 09:31:17.934030 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 09:31:17.934039 systemd-networkd[1423]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 12 09:31:17.934637 systemd-networkd[1423]: eth0: Link UP Jul 12 09:31:17.934767 systemd-networkd[1423]: eth0: Gained carrier Jul 12 09:31:17.934785 systemd-networkd[1423]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 12 09:31:17.936352 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 12 09:31:17.937320 systemd-resolved[1353]: Defaulting to hostname 'linux'. Jul 12 09:31:17.940135 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 12 09:31:17.942140 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 12 09:31:17.943223 systemd[1]: Reached target network.target - Network. Jul 12 09:31:17.943902 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 12 09:31:17.944749 systemd[1]: Reached target sysinit.target - System Initialization. Jul 12 09:31:17.945562 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 12 09:31:17.946877 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 12 09:31:17.948229 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 12 09:31:17.949831 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 12 09:31:17.950778 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 12 09:31:17.952153 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 12 09:31:17.952194 systemd[1]: Reached target paths.target - Path Units. Jul 12 09:31:17.952946 systemd[1]: Reached target timers.target - Timer Units. Jul 12 09:31:17.954184 systemd-networkd[1423]: eth0: DHCPv4 address 10.0.0.39/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 12 09:31:17.954594 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 12 09:31:17.957559 systemd[1]: Starting docker.socket - Docker Socket for the API... 
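
systemd-networkd matched eth0 against /usr/lib/systemd/network/zz-default.network and acquired 10.0.0.39/16 from the DHCP server at 10.0.0.1. A quick sketch that reads the same state back from the kernel with Go's net package; the interface name and address come from this log and will differ elsewhere:

```go
// Sketch: list the addresses the kernel holds for eth0, which should match
// the "DHCPv4 address 10.0.0.39/16" line from systemd-networkd above.
package main

import (
	"fmt"
	"net"
	"os"
)

func main() {
	iface, err := net.InterfaceByName("eth0") // interface name taken from the log
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	addrs, err := iface.Addrs()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, a := range addrs {
		fmt.Printf("%s: %s\n", iface.Name, a.String()) // e.g. eth0: 10.0.0.39/16
	}
}
```
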
Jul 12 09:31:17.960250 systemd-timesyncd[1425]: Network configuration changed, trying to establish connection. Jul 12 09:31:17.961606 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 12 09:31:17.961893 systemd-timesyncd[1425]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 12 09:31:17.962069 systemd-timesyncd[1425]: Initial clock synchronization to Sat 2025-07-12 09:31:17.892676 UTC. Jul 12 09:31:17.964585 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 12 09:31:17.965608 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 12 09:31:17.973687 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 12 09:31:17.974741 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 12 09:31:17.976236 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 12 09:31:17.983117 systemd[1]: Reached target sockets.target - Socket Units. Jul 12 09:31:17.983825 systemd[1]: Reached target basic.target - Basic System. Jul 12 09:31:17.984543 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 12 09:31:17.984574 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 12 09:31:17.998598 systemd[1]: Starting containerd.service - containerd container runtime... Jul 12 09:31:18.000377 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 12 09:31:18.002176 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 12 09:31:18.004478 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 12 09:31:18.006149 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 12 09:31:18.006847 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 12 09:31:18.009044 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 12 09:31:18.011103 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 12 09:31:18.015523 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 12 09:31:18.017711 jq[1468]: false Jul 12 09:31:18.022049 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 12 09:31:18.023788 extend-filesystems[1469]: Found /dev/vda6 Jul 12 09:31:18.025308 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 12 09:31:18.029928 extend-filesystems[1469]: Found /dev/vda9 Jul 12 09:31:18.029500 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 12 09:31:18.031136 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 12 09:31:18.031526 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 12 09:31:18.032044 systemd[1]: Starting update-engine.service - Update Engine... Jul 12 09:31:18.035009 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
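
docker.socket and sshd.socket being "Listening" before their services run is systemd socket activation: the socket is bound early and queues connections, and the service starts only on first use (the earlier warning about /var/run/docker.sock being rewritten to /run/docker.sock refers to the same unit). A tiny sketch that probes the activated Docker API socket at the path named in the log; the connection itself is what would trigger docker.service to start:

```go
// Sketch: probe the socket-activated Docker API socket. systemd already
// listens on /run/docker.sock (see "Listening on docker.socket" above);
// the first connection starts docker.service on demand.
package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	conn, err := net.DialTimeout("unix", "/run/docker.sock", 2*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, "socket not reachable:", err)
		os.Exit(1)
	}
	defer conn.Close()
	fmt.Println("connected to /run/docker.sock")
}
```
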
Jul 12 09:31:18.036133 extend-filesystems[1469]: Checking size of /dev/vda9 Jul 12 09:31:18.036666 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 12 09:31:18.045517 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 12 09:31:18.046848 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 12 09:31:18.050588 jq[1489]: true Jul 12 09:31:18.049050 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 12 09:31:18.049354 systemd[1]: motdgen.service: Deactivated successfully. Jul 12 09:31:18.049517 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 12 09:31:18.054497 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 12 09:31:18.054716 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 12 09:31:18.064332 extend-filesystems[1469]: Resized partition /dev/vda9 Jul 12 09:31:18.079322 extend-filesystems[1506]: resize2fs 1.47.2 (1-Jan-2025) Jul 12 09:31:18.096830 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 12 09:31:18.096902 jq[1497]: true Jul 12 09:31:18.099905 update_engine[1485]: I20250712 09:31:18.098438 1485 main.cc:92] Flatcar Update Engine starting Jul 12 09:31:18.099317 (ntainerd)[1499]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 12 09:31:18.129327 dbus-daemon[1466]: [system] SELinux support is enabled Jul 12 09:31:18.129686 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 12 09:31:18.133117 update_engine[1485]: I20250712 09:31:18.131789 1485 update_check_scheduler.cc:74] Next update check in 4m50s Jul 12 09:31:18.133457 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 12 09:31:18.133502 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 12 09:31:18.135057 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 12 09:31:18.135080 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 12 09:31:18.147746 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 12 09:31:18.147780 tar[1494]: linux-arm64/LICENSE Jul 12 09:31:18.147780 tar[1494]: linux-arm64/helm Jul 12 09:31:18.136687 systemd[1]: Started update-engine.service - Update Engine. Jul 12 09:31:18.148428 extend-filesystems[1506]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 12 09:31:18.148428 extend-filesystems[1506]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 12 09:31:18.148428 extend-filesystems[1506]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 12 09:31:18.139035 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 12 09:31:18.154742 extend-filesystems[1469]: Resized filesystem in /dev/vda9 Jul 12 09:31:18.149737 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 12 09:31:18.150314 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
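
extend-filesystems grew the root ext4 filesystem on /dev/vda9 online, from 553472 to 1864699 4 KiB blocks, i.e. from roughly 2.1 GiB to about 7.1 GiB. A short sketch (Linux-only) that reads the post-resize size back through the statfs(2) wrapper in Go's syscall package; the numbers it prints will be close to, though slightly below, the resize2fs figures because statfs excludes some metadata overhead:

```go
// Sketch (Linux-only): read back the size of the root filesystem after the
// online resize above. 1864699 blocks * 4096 bytes ≈ 7.1 GiB.
package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	var st syscall.Statfs_t
	if err := syscall.Statfs("/", &st); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	total := st.Blocks * uint64(st.Bsize)
	fmt.Printf("block size: %d bytes, blocks: %d, total: %.1f GiB\n",
		st.Bsize, st.Blocks, float64(total)/(1<<30))
}
```
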
Jul 12 09:31:18.155766 bash[1530]: Updated "/home/core/.ssh/authorized_keys" Jul 12 09:31:18.156316 systemd-logind[1480]: Watching system buttons on /dev/input/event0 (Power Button) Jul 12 09:31:18.158265 systemd-logind[1480]: New seat seat0. Jul 12 09:31:18.191217 systemd[1]: Started systemd-logind.service - User Login Management. Jul 12 09:31:18.192527 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 12 09:31:18.196107 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 12 09:31:18.200025 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 12 09:31:18.221344 locksmithd[1532]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 12 09:31:18.310476 containerd[1499]: time="2025-07-12T09:31:18Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 12 09:31:18.313565 containerd[1499]: time="2025-07-12T09:31:18.313527629Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 12 09:31:18.323889 containerd[1499]: time="2025-07-12T09:31:18.323853967Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.24µs" Jul 12 09:31:18.323889 containerd[1499]: time="2025-07-12T09:31:18.323887820Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 12 09:31:18.323974 containerd[1499]: time="2025-07-12T09:31:18.323905583Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 12 09:31:18.324077 containerd[1499]: time="2025-07-12T09:31:18.324057205Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 12 09:31:18.324100 containerd[1499]: time="2025-07-12T09:31:18.324078433Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 12 09:31:18.324118 containerd[1499]: time="2025-07-12T09:31:18.324102329Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324169 containerd[1499]: time="2025-07-12T09:31:18.324151874Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324169 containerd[1499]: time="2025-07-12T09:31:18.324166291Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324403 containerd[1499]: time="2025-07-12T09:31:18.324382314Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324403 containerd[1499]: time="2025-07-12T09:31:18.324401550Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324446 containerd[1499]: time="2025-07-12T09:31:18.324412383Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324446 
containerd[1499]: time="2025-07-12T09:31:18.324419950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324502 containerd[1499]: time="2025-07-12T09:31:18.324486024Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324684 containerd[1499]: time="2025-07-12T09:31:18.324664927Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324708 containerd[1499]: time="2025-07-12T09:31:18.324697147Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 12 09:31:18.324734 containerd[1499]: time="2025-07-12T09:31:18.324708100Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 12 09:31:18.324755 containerd[1499]: time="2025-07-12T09:31:18.324740678Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 12 09:31:18.325034 containerd[1499]: time="2025-07-12T09:31:18.325002701Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 12 09:31:18.325126 containerd[1499]: time="2025-07-12T09:31:18.325106251Z" level=info msg="metadata content store policy set" policy=shared Jul 12 09:31:18.328459 containerd[1499]: time="2025-07-12T09:31:18.328422338Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 12 09:31:18.328563 containerd[1499]: time="2025-07-12T09:31:18.328541978Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 12 09:31:18.328717 containerd[1499]: time="2025-07-12T09:31:18.328693799Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 12 09:31:18.328791 containerd[1499]: time="2025-07-12T09:31:18.328731595Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 12 09:31:18.328818 containerd[1499]: time="2025-07-12T09:31:18.328795319Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 12 09:31:18.328818 containerd[1499]: time="2025-07-12T09:31:18.328807944Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 12 09:31:18.328885 containerd[1499]: time="2025-07-12T09:31:18.328865494Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 12 09:31:18.328921 containerd[1499]: time="2025-07-12T09:31:18.328888554Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 12 09:31:18.328990 containerd[1499]: time="2025-07-12T09:31:18.328971155Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 12 09:31:18.329168 containerd[1499]: time="2025-07-12T09:31:18.329142173Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 12 09:31:18.329192 containerd[1499]: time="2025-07-12T09:31:18.329169256Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 12 09:31:18.329192 containerd[1499]: 
time="2025-07-12T09:31:18.329185186Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 12 09:31:18.329343 containerd[1499]: time="2025-07-12T09:31:18.329323267Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 12 09:31:18.329368 containerd[1499]: time="2025-07-12T09:31:18.329349752Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 12 09:31:18.329368 containerd[1499]: time="2025-07-12T09:31:18.329364687Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 12 09:31:18.329399 containerd[1499]: time="2025-07-12T09:31:18.329380499Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 12 09:31:18.329399 containerd[1499]: time="2025-07-12T09:31:18.329391292Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 12 09:31:18.329437 containerd[1499]: time="2025-07-12T09:31:18.329402045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 12 09:31:18.329437 containerd[1499]: time="2025-07-12T09:31:18.329412560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 12 09:31:18.329437 containerd[1499]: time="2025-07-12T09:31:18.329423751Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 12 09:31:18.329437 containerd[1499]: time="2025-07-12T09:31:18.329434345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 12 09:31:18.329508 containerd[1499]: time="2025-07-12T09:31:18.329444859Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 12 09:31:18.329508 containerd[1499]: time="2025-07-12T09:31:18.329454617Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 12 09:31:18.329648 containerd[1499]: time="2025-07-12T09:31:18.329631330Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 12 09:31:18.329672 containerd[1499]: time="2025-07-12T09:31:18.329653952Z" level=info msg="Start snapshots syncer" Jul 12 09:31:18.329825 containerd[1499]: time="2025-07-12T09:31:18.329804060Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 12 09:31:18.330569 containerd[1499]: time="2025-07-12T09:31:18.330444600Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 12 09:31:18.330654 containerd[1499]: time="2025-07-12T09:31:18.330592797Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 12 09:31:18.330760 containerd[1499]: time="2025-07-12T09:31:18.330733267Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 12 09:31:18.331026 containerd[1499]: time="2025-07-12T09:31:18.331002857Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 12 09:31:18.331101 containerd[1499]: time="2025-07-12T09:31:18.331083029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 12 09:31:18.331123 containerd[1499]: time="2025-07-12T09:31:18.331103102Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 12 09:31:18.331123 containerd[1499]: time="2025-07-12T09:31:18.331115767Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 12 09:31:18.331155 containerd[1499]: time="2025-07-12T09:31:18.331136875Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 12 09:31:18.331155 containerd[1499]: time="2025-07-12T09:31:18.331148743Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 12 09:31:18.331196 containerd[1499]: time="2025-07-12T09:31:18.331159337Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 12 09:31:18.331259 containerd[1499]: time="2025-07-12T09:31:18.331232898Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 12 09:31:18.331283 containerd[1499]: 
time="2025-07-12T09:31:18.331268782Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 12 09:31:18.331300 containerd[1499]: time="2025-07-12T09:31:18.331281806Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 12 09:31:18.332316 containerd[1499]: time="2025-07-12T09:31:18.332283538Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 09:31:18.332496 containerd[1499]: time="2025-07-12T09:31:18.332430580Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 12 09:31:18.332523 containerd[1499]: time="2025-07-12T09:31:18.332494462Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 09:31:18.332523 containerd[1499]: time="2025-07-12T09:31:18.332507565Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 12 09:31:18.332523 containerd[1499]: time="2025-07-12T09:31:18.332515252Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 12 09:31:18.332571 containerd[1499]: time="2025-07-12T09:31:18.332524850Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 12 09:31:18.332571 containerd[1499]: time="2025-07-12T09:31:18.332542693Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 12 09:31:18.332679 containerd[1499]: time="2025-07-12T09:31:18.332662931Z" level=info msg="runtime interface created" Jul 12 09:31:18.332679 containerd[1499]: time="2025-07-12T09:31:18.332676353Z" level=info msg="created NRI interface" Jul 12 09:31:18.332727 containerd[1499]: time="2025-07-12T09:31:18.332687345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 12 09:31:18.332758 containerd[1499]: time="2025-07-12T09:31:18.332705108Z" level=info msg="Connect containerd service" Jul 12 09:31:18.332796 containerd[1499]: time="2025-07-12T09:31:18.332780859Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 12 09:31:18.334326 containerd[1499]: time="2025-07-12T09:31:18.334252910Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 09:31:18.426880 tar[1494]: linux-arm64/README.md Jul 12 09:31:18.441266 containerd[1499]: time="2025-07-12T09:31:18.441215788Z" level=info msg="Start subscribing containerd event" Jul 12 09:31:18.441341 containerd[1499]: time="2025-07-12T09:31:18.441281343Z" level=info msg="Start recovering state" Jul 12 09:31:18.441390 containerd[1499]: time="2025-07-12T09:31:18.441367967Z" level=info msg="Start event monitor" Jul 12 09:31:18.441414 containerd[1499]: time="2025-07-12T09:31:18.441389753Z" level=info msg="Start cni network conf syncer for default" Jul 12 09:31:18.441414 containerd[1499]: time="2025-07-12T09:31:18.441404250Z" level=info msg="Start streaming server" Jul 12 09:31:18.441414 containerd[1499]: time="2025-07-12T09:31:18.441412374Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 12 
09:31:18.441471 containerd[1499]: time="2025-07-12T09:31:18.441419026Z" level=info msg="runtime interface starting up..." Jul 12 09:31:18.441471 containerd[1499]: time="2025-07-12T09:31:18.441424960Z" level=info msg="starting plugins..." Jul 12 09:31:18.441471 containerd[1499]: time="2025-07-12T09:31:18.441437545Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 12 09:31:18.442102 containerd[1499]: time="2025-07-12T09:31:18.442079479Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 12 09:31:18.442143 containerd[1499]: time="2025-07-12T09:31:18.442129024Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 12 09:31:18.442204 containerd[1499]: time="2025-07-12T09:31:18.442190437Z" level=info msg="containerd successfully booted in 0.132139s" Jul 12 09:31:18.442287 systemd[1]: Started containerd.service - containerd container runtime. Jul 12 09:31:18.444042 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 12 09:31:18.523604 sshd_keygen[1490]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 12 09:31:18.542977 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 12 09:31:18.545261 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 12 09:31:18.570476 systemd[1]: issuegen.service: Deactivated successfully. Jul 12 09:31:18.570676 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 12 09:31:18.573024 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 12 09:31:18.603400 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 12 09:31:18.605787 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 12 09:31:18.607618 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 12 09:31:18.608662 systemd[1]: Reached target getty.target - Login Prompts. Jul 12 09:31:19.587099 systemd-networkd[1423]: eth0: Gained IPv6LL Jul 12 09:31:19.590973 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 12 09:31:19.592296 systemd[1]: Reached target network-online.target - Network is Online. Jul 12 09:31:19.594365 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 12 09:31:19.596383 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:31:19.598202 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 12 09:31:19.624431 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 12 09:31:19.624693 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 12 09:31:19.625985 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 12 09:31:19.629368 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 12 09:31:20.158461 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 09:31:20.159714 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 12 09:31:20.160639 systemd[1]: Startup finished in 2.018s (kernel) + 5.729s (initrd) + 3.864s (userspace) = 11.612s. 
Jul 12 09:31:20.161829 (kubelet)[1607]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 09:31:20.555728 kubelet[1607]: E0712 09:31:20.555606 1607 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 09:31:20.558365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 09:31:20.558526 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 09:31:20.558864 systemd[1]: kubelet.service: Consumed 802ms CPU time, 258.9M memory peak. Jul 12 09:31:23.088276 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 12 09:31:23.089293 systemd[1]: Started sshd@0-10.0.0.39:22-10.0.0.1:42788.service - OpenSSH per-connection server daemon (10.0.0.1:42788). Jul 12 09:31:23.175969 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 42788 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:31:23.177769 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:31:23.189173 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 12 09:31:23.190097 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 12 09:31:23.196541 systemd-logind[1480]: New session 1 of user core. Jul 12 09:31:23.216977 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 12 09:31:23.219737 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 12 09:31:23.250213 (systemd)[1626]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 12 09:31:23.252684 systemd-logind[1480]: New session c1 of user core. Jul 12 09:31:23.361884 systemd[1626]: Queued start job for default target default.target. Jul 12 09:31:23.368998 systemd[1626]: Created slice app.slice - User Application Slice. Jul 12 09:31:23.369029 systemd[1626]: Reached target paths.target - Paths. Jul 12 09:31:23.369065 systemd[1626]: Reached target timers.target - Timers. Jul 12 09:31:23.370328 systemd[1626]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 12 09:31:23.380558 systemd[1626]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 12 09:31:23.380624 systemd[1626]: Reached target sockets.target - Sockets. Jul 12 09:31:23.380666 systemd[1626]: Reached target basic.target - Basic System. Jul 12 09:31:23.380692 systemd[1626]: Reached target default.target - Main User Target. Jul 12 09:31:23.380720 systemd[1626]: Startup finished in 122ms. Jul 12 09:31:23.380954 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 12 09:31:23.382294 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 12 09:31:23.453891 systemd[1]: Started sshd@1-10.0.0.39:22-10.0.0.1:42794.service - OpenSSH per-connection server daemon (10.0.0.1:42794). Jul 12 09:31:23.500125 sshd[1637]: Accepted publickey for core from 10.0.0.1 port 42794 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:31:23.501443 sshd-session[1637]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:31:23.505402 systemd-logind[1480]: New session 2 of user core. 
Jul 12 09:31:23.519061 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 12 09:31:23.570016 sshd[1640]: Connection closed by 10.0.0.1 port 42794 Jul 12 09:31:23.569842 sshd-session[1637]: pam_unix(sshd:session): session closed for user core Jul 12 09:31:23.581954 systemd[1]: sshd@1-10.0.0.39:22-10.0.0.1:42794.service: Deactivated successfully. Jul 12 09:31:23.583387 systemd[1]: session-2.scope: Deactivated successfully. Jul 12 09:31:23.584059 systemd-logind[1480]: Session 2 logged out. Waiting for processes to exit. Jul 12 09:31:23.585988 systemd[1]: Started sshd@2-10.0.0.39:22-10.0.0.1:42798.service - OpenSSH per-connection server daemon (10.0.0.1:42798). Jul 12 09:31:23.586819 systemd-logind[1480]: Removed session 2. Jul 12 09:31:23.645927 sshd[1646]: Accepted publickey for core from 10.0.0.1 port 42798 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:31:23.647287 sshd-session[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:31:23.650960 systemd-logind[1480]: New session 3 of user core. Jul 12 09:31:23.665065 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 12 09:31:23.711691 sshd[1649]: Connection closed by 10.0.0.1 port 42798 Jul 12 09:31:23.712061 sshd-session[1646]: pam_unix(sshd:session): session closed for user core Jul 12 09:31:23.722854 systemd[1]: sshd@2-10.0.0.39:22-10.0.0.1:42798.service: Deactivated successfully. Jul 12 09:31:23.724307 systemd[1]: session-3.scope: Deactivated successfully. Jul 12 09:31:23.724968 systemd-logind[1480]: Session 3 logged out. Waiting for processes to exit. Jul 12 09:31:23.729140 systemd[1]: Started sshd@3-10.0.0.39:22-10.0.0.1:42810.service - OpenSSH per-connection server daemon (10.0.0.1:42810). Jul 12 09:31:23.730521 systemd-logind[1480]: Removed session 3. Jul 12 09:31:23.779990 sshd[1655]: Accepted publickey for core from 10.0.0.1 port 42810 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:31:23.781169 sshd-session[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:31:23.785868 systemd-logind[1480]: New session 4 of user core. Jul 12 09:31:23.802075 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 12 09:31:23.854180 sshd[1658]: Connection closed by 10.0.0.1 port 42810 Jul 12 09:31:23.854638 sshd-session[1655]: pam_unix(sshd:session): session closed for user core Jul 12 09:31:23.863798 systemd[1]: sshd@3-10.0.0.39:22-10.0.0.1:42810.service: Deactivated successfully. Jul 12 09:31:23.866337 systemd[1]: session-4.scope: Deactivated successfully. Jul 12 09:31:23.867094 systemd-logind[1480]: Session 4 logged out. Waiting for processes to exit. Jul 12 09:31:23.869889 systemd[1]: Started sshd@4-10.0.0.39:22-10.0.0.1:42826.service - OpenSSH per-connection server daemon (10.0.0.1:42826). Jul 12 09:31:23.870611 systemd-logind[1480]: Removed session 4. Jul 12 09:31:23.925881 sshd[1664]: Accepted publickey for core from 10.0.0.1 port 42826 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:31:23.927002 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:31:23.931782 systemd-logind[1480]: New session 5 of user core. Jul 12 09:31:23.947073 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 12 09:31:24.003731 sudo[1668]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 12 09:31:24.004039 sudo[1668]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 09:31:24.020780 sudo[1668]: pam_unix(sudo:session): session closed for user root Jul 12 09:31:24.023495 sshd[1667]: Connection closed by 10.0.0.1 port 42826 Jul 12 09:31:24.022610 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Jul 12 09:31:24.032856 systemd[1]: sshd@4-10.0.0.39:22-10.0.0.1:42826.service: Deactivated successfully. Jul 12 09:31:24.036190 systemd[1]: session-5.scope: Deactivated successfully. Jul 12 09:31:24.036897 systemd-logind[1480]: Session 5 logged out. Waiting for processes to exit. Jul 12 09:31:24.039765 systemd[1]: Started sshd@5-10.0.0.39:22-10.0.0.1:42828.service - OpenSSH per-connection server daemon (10.0.0.1:42828). Jul 12 09:31:24.040372 systemd-logind[1480]: Removed session 5. Jul 12 09:31:24.094922 sshd[1674]: Accepted publickey for core from 10.0.0.1 port 42828 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:31:24.096045 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:31:24.100315 systemd-logind[1480]: New session 6 of user core. Jul 12 09:31:24.110071 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 12 09:31:24.160470 sudo[1679]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 12 09:31:24.161079 sudo[1679]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 09:31:24.201607 sudo[1679]: pam_unix(sudo:session): session closed for user root Jul 12 09:31:24.206484 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 12 09:31:24.206739 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 09:31:24.215783 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 12 09:31:24.249799 augenrules[1701]: No rules Jul 12 09:31:24.251205 systemd[1]: audit-rules.service: Deactivated successfully. Jul 12 09:31:24.252056 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 12 09:31:24.253094 sudo[1678]: pam_unix(sudo:session): session closed for user root Jul 12 09:31:24.254457 sshd[1677]: Connection closed by 10.0.0.1 port 42828 Jul 12 09:31:24.254806 sshd-session[1674]: pam_unix(sshd:session): session closed for user core Jul 12 09:31:24.263970 systemd[1]: sshd@5-10.0.0.39:22-10.0.0.1:42828.service: Deactivated successfully. Jul 12 09:31:24.265336 systemd[1]: session-6.scope: Deactivated successfully. Jul 12 09:31:24.265969 systemd-logind[1480]: Session 6 logged out. Waiting for processes to exit. Jul 12 09:31:24.268249 systemd[1]: Started sshd@6-10.0.0.39:22-10.0.0.1:42840.service - OpenSSH per-connection server daemon (10.0.0.1:42840). Jul 12 09:31:24.268716 systemd-logind[1480]: Removed session 6. Jul 12 09:31:24.325946 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 42840 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:31:24.327068 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:31:24.330786 systemd-logind[1480]: New session 7 of user core. Jul 12 09:31:24.345070 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jul 12 09:31:24.395457 sudo[1714]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 12 09:31:24.395727 sudo[1714]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 12 09:31:24.725336 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 12 09:31:24.757321 (dockerd)[1735]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 12 09:31:24.999853 dockerd[1735]: time="2025-07-12T09:31:24.999719974Z" level=info msg="Starting up" Jul 12 09:31:25.000687 dockerd[1735]: time="2025-07-12T09:31:25.000623156Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 12 09:31:25.010275 dockerd[1735]: time="2025-07-12T09:31:25.010228288Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 12 09:31:25.171063 dockerd[1735]: time="2025-07-12T09:31:25.171014410Z" level=info msg="Loading containers: start." Jul 12 09:31:25.178946 kernel: Initializing XFRM netlink socket Jul 12 09:31:25.374608 systemd-networkd[1423]: docker0: Link UP Jul 12 09:31:25.377942 dockerd[1735]: time="2025-07-12T09:31:25.377796455Z" level=info msg="Loading containers: done." Jul 12 09:31:25.391058 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1286440265-merged.mount: Deactivated successfully. Jul 12 09:31:25.392802 dockerd[1735]: time="2025-07-12T09:31:25.392760577Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 12 09:31:25.392870 dockerd[1735]: time="2025-07-12T09:31:25.392839719Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 12 09:31:25.392953 dockerd[1735]: time="2025-07-12T09:31:25.392936931Z" level=info msg="Initializing buildkit" Jul 12 09:31:25.412916 dockerd[1735]: time="2025-07-12T09:31:25.412870784Z" level=info msg="Completed buildkit initialization" Jul 12 09:31:25.419359 dockerd[1735]: time="2025-07-12T09:31:25.419312376Z" level=info msg="Daemon has completed initialization" Jul 12 09:31:25.419549 dockerd[1735]: time="2025-07-12T09:31:25.419381146Z" level=info msg="API listen on /run/docker.sock" Jul 12 09:31:25.419635 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 12 09:31:25.951041 containerd[1499]: time="2025-07-12T09:31:25.951003500Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\"" Jul 12 09:31:26.745653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3738133115.mount: Deactivated successfully. 
Jul 12 09:31:27.714080 containerd[1499]: time="2025-07-12T09:31:27.714025175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:27.714652 containerd[1499]: time="2025-07-12T09:31:27.714608282Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.2: active requests=0, bytes read=27351718" Jul 12 09:31:27.715344 containerd[1499]: time="2025-07-12T09:31:27.715311857Z" level=info msg="ImageCreate event name:\"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:27.717899 containerd[1499]: time="2025-07-12T09:31:27.717841725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:27.718988 containerd[1499]: time="2025-07-12T09:31:27.718954828Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.2\" with image id \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.2\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e8ae58675899e946fabe38425f2b3bfd33120b7930d05b5898de97c81a7f6137\", size \"27348516\" in 1.767911314s" Jul 12 09:31:27.719055 containerd[1499]: time="2025-07-12T09:31:27.718993135Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.2\" returns image reference \"sha256:04ac773cca35cc457f24a6501b6b308d63a2cddd1aec14fe95559bccca3010a4\"" Jul 12 09:31:27.722321 containerd[1499]: time="2025-07-12T09:31:27.722264486Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\"" Jul 12 09:31:28.890947 containerd[1499]: time="2025-07-12T09:31:28.890807403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:28.891821 containerd[1499]: time="2025-07-12T09:31:28.891785461Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.2: active requests=0, bytes read=23537625" Jul 12 09:31:28.892498 containerd[1499]: time="2025-07-12T09:31:28.892451029Z" level=info msg="ImageCreate event name:\"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:28.895477 containerd[1499]: time="2025-07-12T09:31:28.895411104Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:28.896286 containerd[1499]: time="2025-07-12T09:31:28.896248322Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.2\" with image id \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.2\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:2236e72a4be5dcc9c04600353ff8849db1557f5364947c520ff05471ae719081\", size \"25092541\" in 1.173948839s" Jul 12 09:31:28.896286 containerd[1499]: time="2025-07-12T09:31:28.896281128Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.2\" returns image reference \"sha256:99a259072231375ad69a369cdf5620d60cdff72d450951c603fad8a94667af65\"" Jul 12 09:31:28.896814 
containerd[1499]: time="2025-07-12T09:31:28.896777640Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\"" Jul 12 09:31:30.095890 containerd[1499]: time="2025-07-12T09:31:30.095838246Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:30.096423 containerd[1499]: time="2025-07-12T09:31:30.096390064Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.2: active requests=0, bytes read=18293517" Jul 12 09:31:30.097198 containerd[1499]: time="2025-07-12T09:31:30.097172343Z" level=info msg="ImageCreate event name:\"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:30.101877 containerd[1499]: time="2025-07-12T09:31:30.101838668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:30.102981 containerd[1499]: time="2025-07-12T09:31:30.102944780Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.2\" with image id \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.2\", repo digest \"registry.k8s.io/kube-scheduler@sha256:304c28303133be7d927973bc9bd6c83945b3735c59d283c25b63d5b9ed53bca3\", size \"19848451\" in 1.206055671s" Jul 12 09:31:30.103035 containerd[1499]: time="2025-07-12T09:31:30.102985020Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.2\" returns image reference \"sha256:bb3da57746ca4726b669d35145eb9b4085643c61bbc80b9df3bf1e6021ba9eaf\"" Jul 12 09:31:30.104110 containerd[1499]: time="2025-07-12T09:31:30.104066102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\"" Jul 12 09:31:30.808944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 12 09:31:30.810323 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:31:30.954052 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 09:31:30.971350 (kubelet)[2027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 12 09:31:31.010474 kubelet[2027]: E0712 09:31:31.010431 2027 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 12 09:31:31.013421 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 12 09:31:31.013547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 12 09:31:31.013833 systemd[1]: kubelet.service: Consumed 144ms CPU time, 107.7M memory peak. Jul 12 09:31:31.129123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount364627143.mount: Deactivated successfully. 
Jul 12 09:31:31.494861 containerd[1499]: time="2025-07-12T09:31:31.494741657Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:31.495350 containerd[1499]: time="2025-07-12T09:31:31.495312548Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.2: active requests=0, bytes read=28199474" Jul 12 09:31:31.495989 containerd[1499]: time="2025-07-12T09:31:31.495964768Z" level=info msg="ImageCreate event name:\"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:31.497625 containerd[1499]: time="2025-07-12T09:31:31.497578948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:31.498111 containerd[1499]: time="2025-07-12T09:31:31.498083044Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.2\" with image id \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\", repo tag \"registry.k8s.io/kube-proxy:v1.33.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:4796ef3e43efa5ed2a5b015c18f81d3c2fe3aea36f555ea643cc01827eb65e51\", size \"28198491\" in 1.393982251s" Jul 12 09:31:31.498168 containerd[1499]: time="2025-07-12T09:31:31.498117340Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.2\" returns image reference \"sha256:c26522e54bad2e6bfbb1bf11500833c94433076a3fa38436a2ec496a422c5455\"" Jul 12 09:31:31.498628 containerd[1499]: time="2025-07-12T09:31:31.498512800Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jul 12 09:31:32.127091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2001544355.mount: Deactivated successfully. 
Jul 12 09:31:32.832359 containerd[1499]: time="2025-07-12T09:31:32.831979180Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:32.832710 containerd[1499]: time="2025-07-12T09:31:32.832407508Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=19152119" Jul 12 09:31:32.833348 containerd[1499]: time="2025-07-12T09:31:32.833281615Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:32.835664 containerd[1499]: time="2025-07-12T09:31:32.835634528Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:32.836779 containerd[1499]: time="2025-07-12T09:31:32.836755362Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.338055751s" Jul 12 09:31:32.836840 containerd[1499]: time="2025-07-12T09:31:32.836784711Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jul 12 09:31:32.837777 containerd[1499]: time="2025-07-12T09:31:32.837595289Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 12 09:31:33.303549 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225711550.mount: Deactivated successfully. 
Jul 12 09:31:33.306678 containerd[1499]: time="2025-07-12T09:31:33.306640329Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 09:31:33.307005 containerd[1499]: time="2025-07-12T09:31:33.306966393Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 12 09:31:33.307773 containerd[1499]: time="2025-07-12T09:31:33.307713485Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 09:31:33.309667 containerd[1499]: time="2025-07-12T09:31:33.309633209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 12 09:31:33.310327 containerd[1499]: time="2025-07-12T09:31:33.310297237Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 472.660939ms" Jul 12 09:31:33.310398 containerd[1499]: time="2025-07-12T09:31:33.310329464Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 12 09:31:33.310735 containerd[1499]: time="2025-07-12T09:31:33.310719223Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jul 12 09:31:33.748276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860180381.mount: Deactivated successfully. 
Jul 12 09:31:35.312561 containerd[1499]: time="2025-07-12T09:31:35.312510971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:35.313538 containerd[1499]: time="2025-07-12T09:31:35.313507331Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=69334601" Jul 12 09:31:35.314212 containerd[1499]: time="2025-07-12T09:31:35.314161466Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:35.317824 containerd[1499]: time="2025-07-12T09:31:35.317244013Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:31:35.318502 containerd[1499]: time="2025-07-12T09:31:35.318458458Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.007714274s" Jul 12 09:31:35.318502 containerd[1499]: time="2025-07-12T09:31:35.318495245Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jul 12 09:31:41.260410 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 12 09:31:41.261810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:31:41.274460 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 09:31:41.274530 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 09:31:41.275528 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 09:31:41.277450 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:31:41.298697 systemd[1]: Reload requested from client PID 2183 ('systemctl') (unit session-7.scope)... Jul 12 09:31:41.298718 systemd[1]: Reloading... Jul 12 09:31:41.366937 zram_generator::config[2223]: No configuration found. Jul 12 09:31:41.481546 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 09:31:41.566503 systemd[1]: Reloading finished in 267 ms. Jul 12 09:31:41.614332 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 12 09:31:41.614414 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 12 09:31:41.614643 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 09:31:41.614689 systemd[1]: kubelet.service: Consumed 86ms CPU time, 95.2M memory peak. Jul 12 09:31:41.616077 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:31:41.719539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 12 09:31:41.723374 (kubelet)[2271]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 09:31:41.756604 kubelet[2271]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 09:31:41.756604 kubelet[2271]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 09:31:41.756604 kubelet[2271]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 09:31:41.756932 kubelet[2271]: I0712 09:31:41.756645 2271 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 09:31:42.547427 kubelet[2271]: I0712 09:31:42.547386 2271 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 09:31:42.547427 kubelet[2271]: I0712 09:31:42.547415 2271 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 09:31:42.547662 kubelet[2271]: I0712 09:31:42.547634 2271 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 09:31:42.586666 kubelet[2271]: E0712 09:31:42.586631 2271 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.0.39:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jul 12 09:31:42.586666 kubelet[2271]: I0712 09:31:42.586655 2271 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 09:31:42.596421 kubelet[2271]: I0712 09:31:42.596258 2271 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 09:31:42.600258 kubelet[2271]: I0712 09:31:42.600219 2271 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 12 09:31:42.600744 kubelet[2271]: I0712 09:31:42.600652 2271 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 09:31:42.600907 kubelet[2271]: I0712 09:31:42.600681 2271 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 09:31:42.601196 kubelet[2271]: I0712 09:31:42.601127 2271 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 09:31:42.601196 kubelet[2271]: I0712 09:31:42.601142 2271 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 09:31:42.601425 kubelet[2271]: I0712 09:31:42.601413 2271 state_mem.go:36] "Initialized new in-memory state store" Jul 12 09:31:42.603933 kubelet[2271]: I0712 09:31:42.603904 2271 kubelet.go:480] "Attempting to sync node with API server" Jul 12 09:31:42.604052 kubelet[2271]: I0712 09:31:42.603995 2271 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 09:31:42.604159 kubelet[2271]: I0712 09:31:42.604109 2271 kubelet.go:386] "Adding apiserver pod source" Jul 12 09:31:42.604159 kubelet[2271]: I0712 09:31:42.604127 2271 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 09:31:42.606736 kubelet[2271]: I0712 09:31:42.606715 2271 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 12 09:31:42.607739 kubelet[2271]: I0712 09:31:42.607716 2271 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 09:31:42.608070 kubelet[2271]: W0712 09:31:42.608047 2271 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 12 09:31:42.610991 kubelet[2271]: E0712 09:31:42.610957 2271 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://10.0.0.39:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jul 12 09:31:42.610991 kubelet[2271]: E0712 09:31:42.610955 2271 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.0.39:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jul 12 09:31:42.612016 kubelet[2271]: I0712 09:31:42.611999 2271 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 09:31:42.612136 kubelet[2271]: I0712 09:31:42.612124 2271 server.go:1289] "Started kubelet" Jul 12 09:31:42.612925 kubelet[2271]: I0712 09:31:42.612885 2271 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 09:31:42.616109 kubelet[2271]: I0712 09:31:42.616033 2271 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 09:31:42.622273 kubelet[2271]: I0712 09:31:42.620258 2271 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 09:31:42.622273 kubelet[2271]: I0712 09:31:42.621468 2271 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 09:31:42.622273 kubelet[2271]: E0712 09:31:42.621613 2271 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 09:31:42.622273 kubelet[2271]: I0712 09:31:42.621960 2271 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 09:31:42.622273 kubelet[2271]: I0712 09:31:42.622002 2271 reconciler.go:26] "Reconciler: start to sync state" Jul 12 09:31:42.623582 kubelet[2271]: E0712 09:31:42.623482 2271 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.0.39:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jul 12 09:31:42.623582 kubelet[2271]: E0712 09:31:42.623560 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="200ms" Jul 12 09:31:42.625356 kubelet[2271]: I0712 09:31:42.625334 2271 server.go:317] "Adding debug handlers to kubelet server" Jul 12 09:31:42.626657 kubelet[2271]: I0712 09:31:42.626585 2271 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 09:31:42.628582 kubelet[2271]: I0712 09:31:42.628445 2271 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 09:31:42.630001 kubelet[2271]: E0712 09:31:42.629888 2271 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 09:31:42.630001 kubelet[2271]: E0712 09:31:42.628530 2271 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.39:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.39:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.1851771e6e992693 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-12 09:31:42.612088467 +0000 UTC m=+0.885650977,LastTimestamp:2025-07-12 09:31:42.612088467 +0000 UTC m=+0.885650977,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 12 09:31:42.630294 kubelet[2271]: I0712 09:31:42.630259 2271 factory.go:223] Registration of the containerd container factory successfully Jul 12 09:31:42.630294 kubelet[2271]: I0712 09:31:42.630277 2271 factory.go:223] Registration of the systemd container factory successfully Jul 12 09:31:42.630363 kubelet[2271]: I0712 09:31:42.630345 2271 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 09:31:42.637945 kubelet[2271]: I0712 09:31:42.637899 2271 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jul 12 09:31:42.638816 kubelet[2271]: I0712 09:31:42.638790 2271 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 09:31:42.638816 kubelet[2271]: I0712 09:31:42.638810 2271 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 09:31:42.638862 kubelet[2271]: I0712 09:31:42.638835 2271 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 12 09:31:42.638862 kubelet[2271]: I0712 09:31:42.638842 2271 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 09:31:42.638963 kubelet[2271]: E0712 09:31:42.638896 2271 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 09:31:42.639593 kubelet[2271]: E0712 09:31:42.639562 2271 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.0.39:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.39:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jul 12 09:31:42.641957 kubelet[2271]: I0712 09:31:42.641937 2271 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 09:31:42.641957 kubelet[2271]: I0712 09:31:42.641952 2271 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 09:31:42.642037 kubelet[2271]: I0712 09:31:42.641970 2271 state_mem.go:36] "Initialized new in-memory state store" Jul 12 09:31:42.714093 kubelet[2271]: I0712 09:31:42.714062 2271 policy_none.go:49] "None policy: Start" Jul 12 09:31:42.714093 kubelet[2271]: I0712 09:31:42.714094 2271 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 09:31:42.714093 kubelet[2271]: I0712 09:31:42.714107 2271 state_mem.go:35] "Initializing new in-memory state store" Jul 12 09:31:42.719534 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 12 09:31:42.722146 kubelet[2271]: E0712 09:31:42.722115 2271 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 09:31:42.732180 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 12 09:31:42.734872 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 12 09:31:42.739813 kubelet[2271]: E0712 09:31:42.739781 2271 kubelet.go:2460] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 12 09:31:42.752824 kubelet[2271]: E0712 09:31:42.752798 2271 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 09:31:42.753012 kubelet[2271]: I0712 09:31:42.752988 2271 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 09:31:42.753047 kubelet[2271]: I0712 09:31:42.753006 2271 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 09:31:42.753234 kubelet[2271]: I0712 09:31:42.753212 2271 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 09:31:42.753859 kubelet[2271]: E0712 09:31:42.753824 2271 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 09:31:42.753926 kubelet[2271]: E0712 09:31:42.753863 2271 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 12 09:31:42.824430 kubelet[2271]: E0712 09:31:42.824350 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="400ms" Jul 12 09:31:42.854324 kubelet[2271]: I0712 09:31:42.854303 2271 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:31:42.858290 kubelet[2271]: E0712 09:31:42.858257 2271 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Jul 12 09:31:42.950160 systemd[1]: Created slice kubepods-burstable-podbfb82c5667fbd6bea3b2e651a1799703.slice - libcontainer container kubepods-burstable-podbfb82c5667fbd6bea3b2e651a1799703.slice. Jul 12 09:31:42.964597 kubelet[2271]: E0712 09:31:42.964576 2271 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:31:42.967295 systemd[1]: Created slice kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice - libcontainer container kubepods-burstable-pod84b858ec27c8b2738b1d9ff9927e0dcb.slice. Jul 12 09:31:42.968642 kubelet[2271]: E0712 09:31:42.968622 2271 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:31:42.990296 systemd[1]: Created slice kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice - libcontainer container kubepods-burstable-pod834ee54f1daa06092e339273649eb5ea.slice. 
Jul 12 09:31:42.991667 kubelet[2271]: E0712 09:31:42.991641 2271 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:31:43.023832 kubelet[2271]: I0712 09:31:43.023811 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:43.023889 kubelet[2271]: I0712 09:31:43.023838 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:43.023889 kubelet[2271]: I0712 09:31:43.023854 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfb82c5667fbd6bea3b2e651a1799703-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfb82c5667fbd6bea3b2e651a1799703\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:43.023889 kubelet[2271]: I0712 09:31:43.023869 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfb82c5667fbd6bea3b2e651a1799703-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfb82c5667fbd6bea3b2e651a1799703\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:43.023889 kubelet[2271]: I0712 09:31:43.023884 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfb82c5667fbd6bea3b2e651a1799703-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bfb82c5667fbd6bea3b2e651a1799703\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:43.024003 kubelet[2271]: I0712 09:31:43.023902 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:43.024003 kubelet[2271]: I0712 09:31:43.023929 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:43.024003 kubelet[2271]: I0712 09:31:43.023945 2271 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:43.024003 kubelet[2271]: I0712 09:31:43.023959 2271 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 09:31:43.059948 kubelet[2271]: I0712 09:31:43.059926 2271 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:31:43.060282 kubelet[2271]: E0712 09:31:43.060259 2271 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Jul 12 09:31:43.225524 kubelet[2271]: E0712 09:31:43.225425 2271 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.39:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.39:6443: connect: connection refused" interval="800ms" Jul 12 09:31:43.266473 containerd[1499]: time="2025-07-12T09:31:43.266425751Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bfb82c5667fbd6bea3b2e651a1799703,Namespace:kube-system,Attempt:0,}" Jul 12 09:31:43.270117 containerd[1499]: time="2025-07-12T09:31:43.269925696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,}" Jul 12 09:31:43.286420 containerd[1499]: time="2025-07-12T09:31:43.286389230Z" level=info msg="connecting to shim 7b55cb6fbaaafb7293560f859eaa164f3ad0fe866f27438bd326fb8ed0f04cf6" address="unix:///run/containerd/s/afeab19cb3c48abc9cba0cbb1d8e0e0eaeef8a13eb0ead5ee8a846ba17f00764" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:31:43.292976 containerd[1499]: time="2025-07-12T09:31:43.292882555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,}" Jul 12 09:31:43.294386 containerd[1499]: time="2025-07-12T09:31:43.294344855Z" level=info msg="connecting to shim 7fc8a40f87adf90a8f7f8f1bdb19c978f82ff7914dec916b8dffc140e5cbe21d" address="unix:///run/containerd/s/0da2785a7a5c6881cc214ed26241284396d885dce248dd7bb3c247204a524e6f" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:31:43.313226 containerd[1499]: time="2025-07-12T09:31:43.313040906Z" level=info msg="connecting to shim 20e1d905e719561e17217ba98468052b53b2c0620b3d40a2eac745f8be47ef8f" address="unix:///run/containerd/s/1f9ff51672e8d4c400891a862b33e4cf37f00a4bf14627a4076994bafc5c2180" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:31:43.315066 systemd[1]: Started cri-containerd-7b55cb6fbaaafb7293560f859eaa164f3ad0fe866f27438bd326fb8ed0f04cf6.scope - libcontainer container 7b55cb6fbaaafb7293560f859eaa164f3ad0fe866f27438bd326fb8ed0f04cf6. Jul 12 09:31:43.318067 systemd[1]: Started cri-containerd-7fc8a40f87adf90a8f7f8f1bdb19c978f82ff7914dec916b8dffc140e5cbe21d.scope - libcontainer container 7fc8a40f87adf90a8f7f8f1bdb19c978f82ff7914dec916b8dffc140e5cbe21d. Jul 12 09:31:43.347117 systemd[1]: Started cri-containerd-20e1d905e719561e17217ba98468052b53b2c0620b3d40a2eac745f8be47ef8f.scope - libcontainer container 20e1d905e719561e17217ba98468052b53b2c0620b3d40a2eac745f8be47ef8f. 
Jul 12 09:31:43.387370 containerd[1499]: time="2025-07-12T09:31:43.387311952Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:bfb82c5667fbd6bea3b2e651a1799703,Namespace:kube-system,Attempt:0,} returns sandbox id \"7b55cb6fbaaafb7293560f859eaa164f3ad0fe866f27438bd326fb8ed0f04cf6\"" Jul 12 09:31:43.389675 containerd[1499]: time="2025-07-12T09:31:43.389639186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:84b858ec27c8b2738b1d9ff9927e0dcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"7fc8a40f87adf90a8f7f8f1bdb19c978f82ff7914dec916b8dffc140e5cbe21d\"" Jul 12 09:31:43.391100 containerd[1499]: time="2025-07-12T09:31:43.391062440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:834ee54f1daa06092e339273649eb5ea,Namespace:kube-system,Attempt:0,} returns sandbox id \"20e1d905e719561e17217ba98468052b53b2c0620b3d40a2eac745f8be47ef8f\"" Jul 12 09:31:43.392597 containerd[1499]: time="2025-07-12T09:31:43.392434418Z" level=info msg="CreateContainer within sandbox \"7b55cb6fbaaafb7293560f859eaa164f3ad0fe866f27438bd326fb8ed0f04cf6\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 12 09:31:43.393646 containerd[1499]: time="2025-07-12T09:31:43.393614321Z" level=info msg="CreateContainer within sandbox \"7fc8a40f87adf90a8f7f8f1bdb19c978f82ff7914dec916b8dffc140e5cbe21d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 12 09:31:43.397380 containerd[1499]: time="2025-07-12T09:31:43.397355498Z" level=info msg="Container 097a998dac0991e07ef79b3008452a214f61dab81006deb3a75fa2da74a6a9c3: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:31:43.402764 containerd[1499]: time="2025-07-12T09:31:43.402736182Z" level=info msg="Container 4d300751884c86b88a8d718b855274a7efc4743fbbd481f2995d4c05c98befd4: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:31:43.404017 containerd[1499]: time="2025-07-12T09:31:43.403981668Z" level=info msg="CreateContainer within sandbox \"20e1d905e719561e17217ba98468052b53b2c0620b3d40a2eac745f8be47ef8f\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 12 09:31:43.408824 containerd[1499]: time="2025-07-12T09:31:43.408783891Z" level=info msg="CreateContainer within sandbox \"7b55cb6fbaaafb7293560f859eaa164f3ad0fe866f27438bd326fb8ed0f04cf6\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"097a998dac0991e07ef79b3008452a214f61dab81006deb3a75fa2da74a6a9c3\"" Jul 12 09:31:43.409572 containerd[1499]: time="2025-07-12T09:31:43.409515061Z" level=info msg="StartContainer for \"097a998dac0991e07ef79b3008452a214f61dab81006deb3a75fa2da74a6a9c3\"" Jul 12 09:31:43.410759 containerd[1499]: time="2025-07-12T09:31:43.410727416Z" level=info msg="connecting to shim 097a998dac0991e07ef79b3008452a214f61dab81006deb3a75fa2da74a6a9c3" address="unix:///run/containerd/s/afeab19cb3c48abc9cba0cbb1d8e0e0eaeef8a13eb0ead5ee8a846ba17f00764" protocol=ttrpc version=3 Jul 12 09:31:43.414039 containerd[1499]: time="2025-07-12T09:31:43.413833700Z" level=info msg="CreateContainer within sandbox \"7fc8a40f87adf90a8f7f8f1bdb19c978f82ff7914dec916b8dffc140e5cbe21d\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4d300751884c86b88a8d718b855274a7efc4743fbbd481f2995d4c05c98befd4\"" Jul 12 09:31:43.415292 containerd[1499]: time="2025-07-12T09:31:43.415240487Z" level=info msg="StartContainer for 
\"4d300751884c86b88a8d718b855274a7efc4743fbbd481f2995d4c05c98befd4\"" Jul 12 09:31:43.416635 containerd[1499]: time="2025-07-12T09:31:43.416602834Z" level=info msg="Container 8e7b920b2ef04ce439904f80e125878234e838357ee8547b5ded043076cb943c: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:31:43.416742 containerd[1499]: time="2025-07-12T09:31:43.416712899Z" level=info msg="connecting to shim 4d300751884c86b88a8d718b855274a7efc4743fbbd481f2995d4c05c98befd4" address="unix:///run/containerd/s/0da2785a7a5c6881cc214ed26241284396d885dce248dd7bb3c247204a524e6f" protocol=ttrpc version=3 Jul 12 09:31:43.425159 containerd[1499]: time="2025-07-12T09:31:43.425117897Z" level=info msg="CreateContainer within sandbox \"20e1d905e719561e17217ba98468052b53b2c0620b3d40a2eac745f8be47ef8f\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"8e7b920b2ef04ce439904f80e125878234e838357ee8547b5ded043076cb943c\"" Jul 12 09:31:43.425679 containerd[1499]: time="2025-07-12T09:31:43.425652916Z" level=info msg="StartContainer for \"8e7b920b2ef04ce439904f80e125878234e838357ee8547b5ded043076cb943c\"" Jul 12 09:31:43.427036 containerd[1499]: time="2025-07-12T09:31:43.426988685Z" level=info msg="connecting to shim 8e7b920b2ef04ce439904f80e125878234e838357ee8547b5ded043076cb943c" address="unix:///run/containerd/s/1f9ff51672e8d4c400891a862b33e4cf37f00a4bf14627a4076994bafc5c2180" protocol=ttrpc version=3 Jul 12 09:31:43.432069 systemd[1]: Started cri-containerd-097a998dac0991e07ef79b3008452a214f61dab81006deb3a75fa2da74a6a9c3.scope - libcontainer container 097a998dac0991e07ef79b3008452a214f61dab81006deb3a75fa2da74a6a9c3. Jul 12 09:31:43.434738 systemd[1]: Started cri-containerd-4d300751884c86b88a8d718b855274a7efc4743fbbd481f2995d4c05c98befd4.scope - libcontainer container 4d300751884c86b88a8d718b855274a7efc4743fbbd481f2995d4c05c98befd4. Jul 12 09:31:43.454056 systemd[1]: Started cri-containerd-8e7b920b2ef04ce439904f80e125878234e838357ee8547b5ded043076cb943c.scope - libcontainer container 8e7b920b2ef04ce439904f80e125878234e838357ee8547b5ded043076cb943c. 
Jul 12 09:31:43.461501 kubelet[2271]: I0712 09:31:43.461462 2271 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:31:43.461927 kubelet[2271]: E0712 09:31:43.461791 2271 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.39:6443/api/v1/nodes\": dial tcp 10.0.0.39:6443: connect: connection refused" node="localhost" Jul 12 09:31:43.476678 containerd[1499]: time="2025-07-12T09:31:43.475910212Z" level=info msg="StartContainer for \"097a998dac0991e07ef79b3008452a214f61dab81006deb3a75fa2da74a6a9c3\" returns successfully" Jul 12 09:31:43.484209 containerd[1499]: time="2025-07-12T09:31:43.484180766Z" level=info msg="StartContainer for \"4d300751884c86b88a8d718b855274a7efc4743fbbd481f2995d4c05c98befd4\" returns successfully" Jul 12 09:31:43.529868 containerd[1499]: time="2025-07-12T09:31:43.526031586Z" level=info msg="StartContainer for \"8e7b920b2ef04ce439904f80e125878234e838357ee8547b5ded043076cb943c\" returns successfully" Jul 12 09:31:43.652793 kubelet[2271]: E0712 09:31:43.652760 2271 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:31:43.656451 kubelet[2271]: E0712 09:31:43.655027 2271 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:31:43.658757 kubelet[2271]: E0712 09:31:43.658721 2271 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:31:44.264518 kubelet[2271]: I0712 09:31:44.264246 2271 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:31:44.660435 kubelet[2271]: E0712 09:31:44.660218 2271 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:31:44.660435 kubelet[2271]: E0712 09:31:44.660396 2271 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 12 09:31:45.483451 kubelet[2271]: E0712 09:31:45.483386 2271 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 12 09:31:45.575514 kubelet[2271]: I0712 09:31:45.575468 2271 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 09:31:45.575514 kubelet[2271]: E0712 09:31:45.575512 2271 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 12 09:31:45.607017 kubelet[2271]: I0712 09:31:45.606970 2271 apiserver.go:52] "Watching apiserver" Jul 12 09:31:45.622787 kubelet[2271]: I0712 09:31:45.622751 2271 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 09:31:45.623475 kubelet[2271]: I0712 09:31:45.622865 2271 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:45.635680 kubelet[2271]: E0712 09:31:45.634940 2271 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:45.635680 kubelet[2271]: I0712 09:31:45.634970 2271 kubelet.go:3309] 
"Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:45.637171 kubelet[2271]: E0712 09:31:45.637131 2271 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:45.637171 kubelet[2271]: I0712 09:31:45.637154 2271 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 09:31:45.638804 kubelet[2271]: E0712 09:31:45.638783 2271 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 12 09:31:45.660619 kubelet[2271]: I0712 09:31:45.660595 2271 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:45.663420 kubelet[2271]: E0712 09:31:45.663397 2271 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:47.675747 systemd[1]: Reload requested from client PID 2557 ('systemctl') (unit session-7.scope)... Jul 12 09:31:47.675763 systemd[1]: Reloading... Jul 12 09:31:47.734968 zram_generator::config[2603]: No configuration found. Jul 12 09:31:47.737024 kubelet[2271]: I0712 09:31:47.736676 2271 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:47.880760 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 12 09:31:47.977392 systemd[1]: Reloading finished in 301 ms. Jul 12 09:31:48.002661 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:31:48.014709 systemd[1]: kubelet.service: Deactivated successfully. Jul 12 09:31:48.014954 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 09:31:48.015007 systemd[1]: kubelet.service: Consumed 1.316s CPU time, 130.8M memory peak. Jul 12 09:31:48.016556 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 12 09:31:48.151874 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 12 09:31:48.156715 (kubelet)[2642]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 12 09:31:48.210152 kubelet[2642]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 12 09:31:48.210152 kubelet[2642]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 12 09:31:48.210152 kubelet[2642]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 12 09:31:48.210577 kubelet[2642]: I0712 09:31:48.210203 2642 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 12 09:31:48.216800 kubelet[2642]: I0712 09:31:48.216750 2642 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jul 12 09:31:48.216800 kubelet[2642]: I0712 09:31:48.216776 2642 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 12 09:31:48.217021 kubelet[2642]: I0712 09:31:48.216995 2642 server.go:956] "Client rotation is on, will bootstrap in background" Jul 12 09:31:48.218326 kubelet[2642]: I0712 09:31:48.218299 2642 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jul 12 09:31:48.221529 kubelet[2642]: I0712 09:31:48.221497 2642 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 12 09:31:48.225140 kubelet[2642]: I0712 09:31:48.225111 2642 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 12 09:31:48.228128 kubelet[2642]: I0712 09:31:48.227716 2642 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 12 09:31:48.228128 kubelet[2642]: I0712 09:31:48.227962 2642 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 12 09:31:48.228128 kubelet[2642]: I0712 09:31:48.227987 2642 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 12 09:31:48.228583 kubelet[2642]: I0712 09:31:48.228139 2642 topology_manager.go:138] "Creating topology manager with none policy" Jul 12 09:31:48.228583 kubelet[2642]: I0712 09:31:48.228149 2642 container_manager_linux.go:303] "Creating device plugin manager" Jul 12 09:31:48.228583 kubelet[2642]: I0712 09:31:48.228189 2642 state_mem.go:36] "Initialized new in-memory state store" Jul 12 09:31:48.228583 kubelet[2642]: I0712 
09:31:48.228330 2642 kubelet.go:480] "Attempting to sync node with API server" Jul 12 09:31:48.228583 kubelet[2642]: I0712 09:31:48.228351 2642 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 12 09:31:48.228583 kubelet[2642]: I0712 09:31:48.228377 2642 kubelet.go:386] "Adding apiserver pod source" Jul 12 09:31:48.228583 kubelet[2642]: I0712 09:31:48.228397 2642 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 12 09:31:48.229439 kubelet[2642]: I0712 09:31:48.229278 2642 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 12 09:31:48.230334 kubelet[2642]: I0712 09:31:48.230301 2642 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jul 12 09:31:48.234319 kubelet[2642]: I0712 09:31:48.234287 2642 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 12 09:31:48.235579 kubelet[2642]: I0712 09:31:48.235545 2642 server.go:1289] "Started kubelet" Jul 12 09:31:48.238539 kubelet[2642]: I0712 09:31:48.238517 2642 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 12 09:31:48.239645 kubelet[2642]: I0712 09:31:48.239623 2642 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 12 09:31:48.240992 kubelet[2642]: I0712 09:31:48.238774 2642 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jul 12 09:31:48.242247 kubelet[2642]: I0712 09:31:48.242089 2642 server.go:317] "Adding debug handlers to kubelet server" Jul 12 09:31:48.243878 kubelet[2642]: I0712 09:31:48.238810 2642 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 12 09:31:48.244148 kubelet[2642]: I0712 09:31:48.244131 2642 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 12 09:31:48.246263 kubelet[2642]: I0712 09:31:48.246239 2642 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 12 09:31:48.246441 kubelet[2642]: E0712 09:31:48.246419 2642 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 12 09:31:48.246472 kubelet[2642]: I0712 09:31:48.246464 2642 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 12 09:31:48.246585 kubelet[2642]: I0712 09:31:48.246572 2642 reconciler.go:26] "Reconciler: start to sync state" Jul 12 09:31:48.249833 kubelet[2642]: I0712 09:31:48.249729 2642 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 12 09:31:48.250702 kubelet[2642]: E0712 09:31:48.249890 2642 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 12 09:31:48.253207 kubelet[2642]: I0712 09:31:48.252765 2642 factory.go:223] Registration of the containerd container factory successfully Jul 12 09:31:48.253207 kubelet[2642]: I0712 09:31:48.252790 2642 factory.go:223] Registration of the systemd container factory successfully Jul 12 09:31:48.255884 kubelet[2642]: I0712 09:31:48.255844 2642 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jul 12 09:31:48.257148 kubelet[2642]: I0712 09:31:48.257123 2642 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jul 12 09:31:48.257148 kubelet[2642]: I0712 09:31:48.257147 2642 status_manager.go:230] "Starting to sync pod status with apiserver" Jul 12 09:31:48.257235 kubelet[2642]: I0712 09:31:48.257162 2642 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 12 09:31:48.257235 kubelet[2642]: I0712 09:31:48.257170 2642 kubelet.go:2436] "Starting kubelet main sync loop" Jul 12 09:31:48.257235 kubelet[2642]: E0712 09:31:48.257206 2642 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 12 09:31:48.283114 kubelet[2642]: I0712 09:31:48.283087 2642 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 12 09:31:48.283322 kubelet[2642]: I0712 09:31:48.283305 2642 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 12 09:31:48.283393 kubelet[2642]: I0712 09:31:48.283382 2642 state_mem.go:36] "Initialized new in-memory state store" Jul 12 09:31:48.283574 kubelet[2642]: I0712 09:31:48.283556 2642 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 12 09:31:48.283643 kubelet[2642]: I0712 09:31:48.283622 2642 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 12 09:31:48.283691 kubelet[2642]: I0712 09:31:48.283683 2642 policy_none.go:49] "None policy: Start" Jul 12 09:31:48.283741 kubelet[2642]: I0712 09:31:48.283733 2642 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 12 09:31:48.283787 kubelet[2642]: I0712 09:31:48.283780 2642 state_mem.go:35] "Initializing new in-memory state store" Jul 12 09:31:48.283973 kubelet[2642]: I0712 09:31:48.283956 2642 state_mem.go:75] "Updated machine memory state" Jul 12 09:31:48.287529 kubelet[2642]: E0712 09:31:48.287495 2642 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jul 12 09:31:48.287702 kubelet[2642]: I0712 09:31:48.287664 2642 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 12 09:31:48.287743 kubelet[2642]: I0712 09:31:48.287683 2642 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 12 09:31:48.288123 kubelet[2642]: I0712 09:31:48.288106 2642 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 12 09:31:48.289396 kubelet[2642]: E0712 09:31:48.289360 2642 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 12 09:31:48.358698 kubelet[2642]: I0712 09:31:48.358654 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:48.358939 kubelet[2642]: I0712 09:31:48.358898 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:48.360454 kubelet[2642]: I0712 09:31:48.359720 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 09:31:48.365069 kubelet[2642]: E0712 09:31:48.365007 2642 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:48.389355 kubelet[2642]: I0712 09:31:48.389329 2642 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 12 09:31:48.416103 kubelet[2642]: I0712 09:31:48.416058 2642 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 12 09:31:48.416229 kubelet[2642]: I0712 09:31:48.416146 2642 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 12 09:31:48.548101 kubelet[2642]: I0712 09:31:48.547751 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/bfb82c5667fbd6bea3b2e651a1799703-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"bfb82c5667fbd6bea3b2e651a1799703\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:48.548101 kubelet[2642]: I0712 09:31:48.548041 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:48.548423 kubelet[2642]: I0712 09:31:48.548180 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:48.548423 kubelet[2642]: I0712 09:31:48.548204 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/834ee54f1daa06092e339273649eb5ea-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"834ee54f1daa06092e339273649eb5ea\") " pod="kube-system/kube-scheduler-localhost" Jul 12 09:31:48.548423 kubelet[2642]: I0712 09:31:48.548221 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/bfb82c5667fbd6bea3b2e651a1799703-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfb82c5667fbd6bea3b2e651a1799703\") " pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:48.548423 kubelet[2642]: I0712 09:31:48.548338 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/bfb82c5667fbd6bea3b2e651a1799703-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"bfb82c5667fbd6bea3b2e651a1799703\") " 
pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:48.548423 kubelet[2642]: I0712 09:31:48.548361 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:48.548632 kubelet[2642]: I0712 09:31:48.548377 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:48.548632 kubelet[2642]: I0712 09:31:48.548391 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/84b858ec27c8b2738b1d9ff9927e0dcb-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"84b858ec27c8b2738b1d9ff9927e0dcb\") " pod="kube-system/kube-controller-manager-localhost" Jul 12 09:31:48.679424 sudo[2683]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 12 09:31:48.679680 sudo[2683]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 12 09:31:48.993153 sudo[2683]: pam_unix(sudo:session): session closed for user root Jul 12 09:31:49.229476 kubelet[2642]: I0712 09:31:49.229220 2642 apiserver.go:52] "Watching apiserver" Jul 12 09:31:49.246872 kubelet[2642]: I0712 09:31:49.246780 2642 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 12 09:31:49.269524 kubelet[2642]: I0712 09:31:49.269444 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 12 09:31:49.270145 kubelet[2642]: I0712 09:31:49.269903 2642 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:49.274071 kubelet[2642]: E0712 09:31:49.274046 2642 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 12 09:31:49.277565 kubelet[2642]: E0712 09:31:49.277399 2642 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 12 09:31:49.298779 kubelet[2642]: I0712 09:31:49.298696 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.298681743 podStartE2EDuration="2.298681743s" podCreationTimestamp="2025-07-12 09:31:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:31:49.297365513 +0000 UTC m=+1.136971153" watchObservedRunningTime="2025-07-12 09:31:49.298681743 +0000 UTC m=+1.138287383" Jul 12 09:31:49.299633 kubelet[2642]: I0712 09:31:49.299538 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.299529247 podStartE2EDuration="1.299529247s" podCreationTimestamp="2025-07-12 09:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-07-12 09:31:49.289247101 +0000 UTC m=+1.128852741" watchObservedRunningTime="2025-07-12 09:31:49.299529247 +0000 UTC m=+1.139134847" Jul 12 09:31:49.304194 kubelet[2642]: I0712 09:31:49.304122 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.304113526 podStartE2EDuration="1.304113526s" podCreationTimestamp="2025-07-12 09:31:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:31:49.303960616 +0000 UTC m=+1.143566256" watchObservedRunningTime="2025-07-12 09:31:49.304113526 +0000 UTC m=+1.143719166" Jul 12 09:31:50.897279 sudo[1714]: pam_unix(sudo:session): session closed for user root Jul 12 09:31:50.899957 sshd[1713]: Connection closed by 10.0.0.1 port 42840 Jul 12 09:31:50.900314 sshd-session[1710]: pam_unix(sshd:session): session closed for user core Jul 12 09:31:50.903613 systemd[1]: sshd@6-10.0.0.39:22-10.0.0.1:42840.service: Deactivated successfully. Jul 12 09:31:50.905685 systemd[1]: session-7.scope: Deactivated successfully. Jul 12 09:31:50.905973 systemd[1]: session-7.scope: Consumed 8.407s CPU time, 256.5M memory peak. Jul 12 09:31:50.907529 systemd-logind[1480]: Session 7 logged out. Waiting for processes to exit. Jul 12 09:31:50.908629 systemd-logind[1480]: Removed session 7. Jul 12 09:31:53.245616 kubelet[2642]: I0712 09:31:53.245556 2642 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 12 09:31:53.246856 containerd[1499]: time="2025-07-12T09:31:53.246791227Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 12 09:31:53.247342 kubelet[2642]: I0712 09:31:53.247013 2642 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 12 09:31:54.172394 systemd[1]: Created slice kubepods-besteffort-podae92ed71_74a9_4e95_af00_7fe63b2f96c3.slice - libcontainer container kubepods-besteffort-podae92ed71_74a9_4e95_af00_7fe63b2f96c3.slice. 
Jul 12 09:31:54.186813 kubelet[2642]: I0712 09:31:54.186757 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ae92ed71-74a9-4e95-af00-7fe63b2f96c3-lib-modules\") pod \"kube-proxy-8sg9m\" (UID: \"ae92ed71-74a9-4e95-af00-7fe63b2f96c3\") " pod="kube-system/kube-proxy-8sg9m" Jul 12 09:31:54.186813 kubelet[2642]: I0712 09:31:54.186801 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ae92ed71-74a9-4e95-af00-7fe63b2f96c3-kube-proxy\") pod \"kube-proxy-8sg9m\" (UID: \"ae92ed71-74a9-4e95-af00-7fe63b2f96c3\") " pod="kube-system/kube-proxy-8sg9m" Jul 12 09:31:54.186813 kubelet[2642]: I0712 09:31:54.186818 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ae92ed71-74a9-4e95-af00-7fe63b2f96c3-xtables-lock\") pod \"kube-proxy-8sg9m\" (UID: \"ae92ed71-74a9-4e95-af00-7fe63b2f96c3\") " pod="kube-system/kube-proxy-8sg9m" Jul 12 09:31:54.187065 kubelet[2642]: I0712 09:31:54.186833 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ml9f\" (UniqueName: \"kubernetes.io/projected/ae92ed71-74a9-4e95-af00-7fe63b2f96c3-kube-api-access-7ml9f\") pod \"kube-proxy-8sg9m\" (UID: \"ae92ed71-74a9-4e95-af00-7fe63b2f96c3\") " pod="kube-system/kube-proxy-8sg9m" Jul 12 09:31:54.193552 systemd[1]: Created slice kubepods-burstable-podc2cce705_3a2b_4f07_b418_18dc3f9ae873.slice - libcontainer container kubepods-burstable-podc2cce705_3a2b_4f07_b418_18dc3f9ae873.slice. Jul 12 09:31:54.287806 kubelet[2642]: I0712 09:31:54.287765 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-xtables-lock\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.287806 kubelet[2642]: I0712 09:31:54.287809 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2cce705-3a2b-4f07-b418-18dc3f9ae873-clustermesh-secrets\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288180 kubelet[2642]: I0712 09:31:54.287829 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-host-proc-sys-net\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288180 kubelet[2642]: I0712 09:31:54.287847 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-cgroup\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288180 kubelet[2642]: I0712 09:31:54.287865 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-lib-modules\") pod \"cilium-6frvs\" (UID: 
\"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288180 kubelet[2642]: I0712 09:31:54.287934 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-config-path\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288180 kubelet[2642]: I0712 09:31:54.287971 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-host-proc-sys-kernel\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288180 kubelet[2642]: I0712 09:31:54.287987 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2cce705-3a2b-4f07-b418-18dc3f9ae873-hubble-tls\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288317 kubelet[2642]: I0712 09:31:54.288014 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxzft\" (UniqueName: \"kubernetes.io/projected/c2cce705-3a2b-4f07-b418-18dc3f9ae873-kube-api-access-bxzft\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288317 kubelet[2642]: I0712 09:31:54.288062 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-run\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288317 kubelet[2642]: I0712 09:31:54.288076 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-bpf-maps\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288317 kubelet[2642]: I0712 09:31:54.288123 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-hostproc\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288317 kubelet[2642]: I0712 09:31:54.288138 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cni-path\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.288317 kubelet[2642]: I0712 09:31:54.288153 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-etc-cni-netd\") pod \"cilium-6frvs\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " pod="kube-system/cilium-6frvs" Jul 12 09:31:54.470638 systemd[1]: Created slice kubepods-besteffort-pod88ba0aec_98c8_414e_af52_fd66fcc62f70.slice - 
libcontainer container kubepods-besteffort-pod88ba0aec_98c8_414e_af52_fd66fcc62f70.slice. Jul 12 09:31:54.482888 containerd[1499]: time="2025-07-12T09:31:54.482340357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8sg9m,Uid:ae92ed71-74a9-4e95-af00-7fe63b2f96c3,Namespace:kube-system,Attempt:0,}" Jul 12 09:31:54.489936 kubelet[2642]: I0712 09:31:54.489880 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z4kg\" (UniqueName: \"kubernetes.io/projected/88ba0aec-98c8-414e-af52-fd66fcc62f70-kube-api-access-2z4kg\") pod \"cilium-operator-6c4d7847fc-dmxtt\" (UID: \"88ba0aec-98c8-414e-af52-fd66fcc62f70\") " pod="kube-system/cilium-operator-6c4d7847fc-dmxtt" Jul 12 09:31:54.490137 kubelet[2642]: I0712 09:31:54.490098 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88ba0aec-98c8-414e-af52-fd66fcc62f70-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-dmxtt\" (UID: \"88ba0aec-98c8-414e-af52-fd66fcc62f70\") " pod="kube-system/cilium-operator-6c4d7847fc-dmxtt" Jul 12 09:31:54.497288 containerd[1499]: time="2025-07-12T09:31:54.497255201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6frvs,Uid:c2cce705-3a2b-4f07-b418-18dc3f9ae873,Namespace:kube-system,Attempt:0,}" Jul 12 09:31:54.498021 containerd[1499]: time="2025-07-12T09:31:54.497994568Z" level=info msg="connecting to shim 0fbf1df1bd9d60320ffe822846e4af99590acb41cb9f56ebbcedaa8f4f61dfcc" address="unix:///run/containerd/s/bd24543d108c57895855cfbe24ea61861209d6d976cff8a1df882d71cdd9083d" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:31:54.520409 containerd[1499]: time="2025-07-12T09:31:54.520370812Z" level=info msg="connecting to shim 1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61" address="unix:///run/containerd/s/3e1a713dbd9d90a9b8d3a52f9a1ecbf63b5815be9d0bbdb72912c53347bf1c00" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:31:54.526078 systemd[1]: Started cri-containerd-0fbf1df1bd9d60320ffe822846e4af99590acb41cb9f56ebbcedaa8f4f61dfcc.scope - libcontainer container 0fbf1df1bd9d60320ffe822846e4af99590acb41cb9f56ebbcedaa8f4f61dfcc. Jul 12 09:31:54.545103 systemd[1]: Started cri-containerd-1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61.scope - libcontainer container 1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61. 
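The slice names created above (kubepods-burstable-pod…, kubepods-besteffort-pod…) follow a consistent pattern when the kubelet uses the systemd cgroup driver: the pod's QoS class selects the parent slice, and the dashes in the pod UID become underscores because dashes act as hierarchy separators in systemd slice names. A small sketch of that mapping; the helper name and the handling of the Guaranteed class are assumptions for illustration.

```python
def pod_slice_name(qos_class: str, pod_uid: str) -> str:
    """Derive the systemd slice name for a pod from its QoS class and UID."""
    uid = pod_uid.replace("-", "_")          # dashes are slice-hierarchy separators
    qos = qos_class.lower()
    if qos == "guaranteed":                  # assumption: no QoS sub-slice for Guaranteed pods
        return f"kubepods-pod{uid}.slice"
    return f"kubepods-{qos}-pod{uid}.slice"

print(pod_slice_name("BestEffort", "88ba0aec-98c8-414e-af52-fd66fcc62f70"))
# -> kubepods-besteffort-pod88ba0aec_98c8_414e_af52_fd66fcc62f70.slice  (matches the journal entry above)
```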
Jul 12 09:31:54.554281 containerd[1499]: time="2025-07-12T09:31:54.554243908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8sg9m,Uid:ae92ed71-74a9-4e95-af00-7fe63b2f96c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"0fbf1df1bd9d60320ffe822846e4af99590acb41cb9f56ebbcedaa8f4f61dfcc\"" Jul 12 09:31:54.573880 containerd[1499]: time="2025-07-12T09:31:54.573840490Z" level=info msg="CreateContainer within sandbox \"0fbf1df1bd9d60320ffe822846e4af99590acb41cb9f56ebbcedaa8f4f61dfcc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 12 09:31:54.574244 containerd[1499]: time="2025-07-12T09:31:54.574208774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-6frvs,Uid:c2cce705-3a2b-4f07-b418-18dc3f9ae873,Namespace:kube-system,Attempt:0,} returns sandbox id \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\"" Jul 12 09:31:54.575394 containerd[1499]: time="2025-07-12T09:31:54.575362805Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 12 09:31:54.584226 containerd[1499]: time="2025-07-12T09:31:54.584177072Z" level=info msg="Container 85dc3ba6a12076803afab395153a0a4c5a3a3796a1629fbd61a7af6253bb00e8: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:31:54.590743 containerd[1499]: time="2025-07-12T09:31:54.590689914Z" level=info msg="CreateContainer within sandbox \"0fbf1df1bd9d60320ffe822846e4af99590acb41cb9f56ebbcedaa8f4f61dfcc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"85dc3ba6a12076803afab395153a0a4c5a3a3796a1629fbd61a7af6253bb00e8\"" Jul 12 09:31:54.591758 containerd[1499]: time="2025-07-12T09:31:54.591300896Z" level=info msg="StartContainer for \"85dc3ba6a12076803afab395153a0a4c5a3a3796a1629fbd61a7af6253bb00e8\"" Jul 12 09:31:54.593182 containerd[1499]: time="2025-07-12T09:31:54.593136838Z" level=info msg="connecting to shim 85dc3ba6a12076803afab395153a0a4c5a3a3796a1629fbd61a7af6253bb00e8" address="unix:///run/containerd/s/bd24543d108c57895855cfbe24ea61861209d6d976cff8a1df882d71cdd9083d" protocol=ttrpc version=3 Jul 12 09:31:54.618057 systemd[1]: Started cri-containerd-85dc3ba6a12076803afab395153a0a4c5a3a3796a1629fbd61a7af6253bb00e8.scope - libcontainer container 85dc3ba6a12076803afab395153a0a4c5a3a3796a1629fbd61a7af6253bb00e8. Jul 12 09:31:54.653366 containerd[1499]: time="2025-07-12T09:31:54.653307678Z" level=info msg="StartContainer for \"85dc3ba6a12076803afab395153a0a4c5a3a3796a1629fbd61a7af6253bb00e8\" returns successfully" Jul 12 09:31:54.774687 containerd[1499]: time="2025-07-12T09:31:54.774404038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dmxtt,Uid:88ba0aec-98c8-414e-af52-fd66fcc62f70,Namespace:kube-system,Attempt:0,}" Jul 12 09:31:54.905771 containerd[1499]: time="2025-07-12T09:31:54.905721110Z" level=info msg="connecting to shim 37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19" address="unix:///run/containerd/s/330841f6157a7ff77f634ad80ee5101ac1c791f3baceefb7ad1a1d6bb2bd5566" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:31:54.927105 systemd[1]: Started cri-containerd-37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19.scope - libcontainer container 37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19. 
Jul 12 09:31:54.956450 containerd[1499]: time="2025-07-12T09:31:54.956406886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-dmxtt,Uid:88ba0aec-98c8-414e-af52-fd66fcc62f70,Namespace:kube-system,Attempt:0,} returns sandbox id \"37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19\"" Jul 12 09:31:55.298797 kubelet[2642]: I0712 09:31:55.298723 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-8sg9m" podStartSLOduration=1.298707738 podStartE2EDuration="1.298707738s" podCreationTimestamp="2025-07-12 09:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:31:55.298273031 +0000 UTC m=+7.137878671" watchObservedRunningTime="2025-07-12 09:31:55.298707738 +0000 UTC m=+7.138313378" Jul 12 09:32:00.532412 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1276728116.mount: Deactivated successfully. Jul 12 09:32:01.810879 containerd[1499]: time="2025-07-12T09:32:01.810353652Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:32:01.811609 containerd[1499]: time="2025-07-12T09:32:01.811586120Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 12 09:32:01.812330 containerd[1499]: time="2025-07-12T09:32:01.812299968Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:32:01.813642 containerd[1499]: time="2025-07-12T09:32:01.813592779Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 7.238154204s" Jul 12 09:32:01.813642 containerd[1499]: time="2025-07-12T09:32:01.813626530Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 12 09:32:01.814517 containerd[1499]: time="2025-07-12T09:32:01.814484259Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 12 09:32:01.822090 containerd[1499]: time="2025-07-12T09:32:01.822063096Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 09:32:01.830971 containerd[1499]: time="2025-07-12T09:32:01.830130482Z" level=info msg="Container cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:01.834649 containerd[1499]: time="2025-07-12T09:32:01.834608236Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\"" Jul 12 09:32:01.835066 containerd[1499]: time="2025-07-12T09:32:01.835033281Z" level=info msg="StartContainer for \"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\"" Jul 12 09:32:01.837242 containerd[1499]: time="2025-07-12T09:32:01.837120678Z" level=info msg="connecting to shim cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626" address="unix:///run/containerd/s/3e1a713dbd9d90a9b8d3a52f9a1ecbf63b5815be9d0bbdb72912c53347bf1c00" protocol=ttrpc version=3 Jul 12 09:32:01.895076 systemd[1]: Started cri-containerd-cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626.scope - libcontainer container cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626. Jul 12 09:32:01.921206 containerd[1499]: time="2025-07-12T09:32:01.921173667Z" level=info msg="StartContainer for \"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\" returns successfully" Jul 12 09:32:01.968652 systemd[1]: cri-containerd-cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626.scope: Deactivated successfully. Jul 12 09:32:01.989606 containerd[1499]: time="2025-07-12T09:32:01.989556398Z" level=info msg="received exit event container_id:\"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\" id:\"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\" pid:3072 exited_at:{seconds:1752312721 nanos:979866169}" Jul 12 09:32:01.989720 containerd[1499]: time="2025-07-12T09:32:01.989650772Z" level=info msg="TaskExit event in podsandbox handler container_id:\"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\" id:\"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\" pid:3072 exited_at:{seconds:1752312721 nanos:979866169}" Jul 12 09:32:02.306089 containerd[1499]: time="2025-07-12T09:32:02.306046966Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 09:32:02.314809 containerd[1499]: time="2025-07-12T09:32:02.314657951Z" level=info msg="Container 2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:02.321096 containerd[1499]: time="2025-07-12T09:32:02.321056414Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\"" Jul 12 09:32:02.321581 containerd[1499]: time="2025-07-12T09:32:02.321557768Z" level=info msg="StartContainer for \"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\"" Jul 12 09:32:02.322441 containerd[1499]: time="2025-07-12T09:32:02.322404154Z" level=info msg="connecting to shim 2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b" address="unix:///run/containerd/s/3e1a713dbd9d90a9b8d3a52f9a1ecbf63b5815be9d0bbdb72912c53347bf1c00" protocol=ttrpc version=3 Jul 12 09:32:02.347117 systemd[1]: Started cri-containerd-2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b.scope - libcontainer container 2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b. 
Jul 12 09:32:02.376150 containerd[1499]: time="2025-07-12T09:32:02.376081992Z" level=info msg="StartContainer for \"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\" returns successfully" Jul 12 09:32:02.392636 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 12 09:32:02.392845 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 12 09:32:02.393188 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 12 09:32:02.394507 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 12 09:32:02.395875 systemd[1]: cri-containerd-2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b.scope: Deactivated successfully. Jul 12 09:32:02.404211 containerd[1499]: time="2025-07-12T09:32:02.404178854Z" level=info msg="received exit event container_id:\"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\" id:\"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\" pid:3117 exited_at:{seconds:1752312722 nanos:403988182}" Jul 12 09:32:02.404711 containerd[1499]: time="2025-07-12T09:32:02.404293785Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\" id:\"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\" pid:3117 exited_at:{seconds:1752312722 nanos:403988182}" Jul 12 09:32:02.422701 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 12 09:32:02.829305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626-rootfs.mount: Deactivated successfully. Jul 12 09:32:03.236264 containerd[1499]: time="2025-07-12T09:32:03.236219558Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:32:03.237186 containerd[1499]: time="2025-07-12T09:32:03.237155056Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 12 09:32:03.238121 containerd[1499]: time="2025-07-12T09:32:03.238077638Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 12 09:32:03.239486 containerd[1499]: time="2025-07-12T09:32:03.239378410Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.4248612s" Jul 12 09:32:03.239486 containerd[1499]: time="2025-07-12T09:32:03.239410322Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 12 09:32:03.243324 containerd[1499]: time="2025-07-12T09:32:03.243271048Z" level=info msg="CreateContainer within sandbox \"37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 12 
09:32:03.250538 containerd[1499]: time="2025-07-12T09:32:03.250502575Z" level=info msg="Container 2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:03.252552 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3794997076.mount: Deactivated successfully. Jul 12 09:32:03.256163 containerd[1499]: time="2025-07-12T09:32:03.256132001Z" level=info msg="CreateContainer within sandbox \"37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\"" Jul 12 09:32:03.256884 containerd[1499]: time="2025-07-12T09:32:03.256841073Z" level=info msg="StartContainer for \"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\"" Jul 12 09:32:03.257834 containerd[1499]: time="2025-07-12T09:32:03.257776572Z" level=info msg="connecting to shim 2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243" address="unix:///run/containerd/s/330841f6157a7ff77f634ad80ee5101ac1c791f3baceefb7ad1a1d6bb2bd5566" protocol=ttrpc version=3 Jul 12 09:32:03.280066 systemd[1]: Started cri-containerd-2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243.scope - libcontainer container 2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243. Jul 12 09:32:03.305361 containerd[1499]: time="2025-07-12T09:32:03.305314712Z" level=info msg="StartContainer for \"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" returns successfully" Jul 12 09:32:03.321381 containerd[1499]: time="2025-07-12T09:32:03.321341996Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 09:32:03.332645 containerd[1499]: time="2025-07-12T09:32:03.332524468Z" level=info msg="Container dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:03.342820 containerd[1499]: time="2025-07-12T09:32:03.342705976Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\"" Jul 12 09:32:03.345146 containerd[1499]: time="2025-07-12T09:32:03.343408770Z" level=info msg="StartContainer for \"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\"" Jul 12 09:32:03.346271 containerd[1499]: time="2025-07-12T09:32:03.346235420Z" level=info msg="connecting to shim dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca" address="unix:///run/containerd/s/3e1a713dbd9d90a9b8d3a52f9a1ecbf63b5815be9d0bbdb72912c53347bf1c00" protocol=ttrpc version=3 Jul 12 09:32:03.384381 systemd[1]: Started cri-containerd-dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca.scope - libcontainer container dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca. Jul 12 09:32:03.421380 containerd[1499]: time="2025-07-12T09:32:03.421260690Z" level=info msg="StartContainer for \"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\" returns successfully" Jul 12 09:32:03.434703 systemd[1]: cri-containerd-dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca.scope: Deactivated successfully. 
Jul 12 09:32:03.437012 containerd[1499]: time="2025-07-12T09:32:03.436971849Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\" id:\"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\" pid:3212 exited_at:{seconds:1752312723 nanos:436438775}" Jul 12 09:32:03.437408 containerd[1499]: time="2025-07-12T09:32:03.437203834Z" level=info msg="received exit event container_id:\"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\" id:\"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\" pid:3212 exited_at:{seconds:1752312723 nanos:436438775}" Jul 12 09:32:03.548552 update_engine[1485]: I20250712 09:32:03.547955 1485 update_attempter.cc:509] Updating boot flags... Jul 12 09:32:04.323430 containerd[1499]: time="2025-07-12T09:32:04.323386714Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 09:32:04.342113 kubelet[2642]: I0712 09:32:04.342046 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-dmxtt" podStartSLOduration=2.059528003 podStartE2EDuration="10.341896804s" podCreationTimestamp="2025-07-12 09:31:54 +0000 UTC" firstStartedPulling="2025-07-12 09:31:54.957771868 +0000 UTC m=+6.797377508" lastFinishedPulling="2025-07-12 09:32:03.240140669 +0000 UTC m=+15.079746309" observedRunningTime="2025-07-12 09:32:04.341482016 +0000 UTC m=+16.181087656" watchObservedRunningTime="2025-07-12 09:32:04.341896804 +0000 UTC m=+16.181502444" Jul 12 09:32:04.369817 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount992855589.mount: Deactivated successfully. Jul 12 09:32:04.372717 containerd[1499]: time="2025-07-12T09:32:04.370482657Z" level=info msg="Container 3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:04.378221 containerd[1499]: time="2025-07-12T09:32:04.378181707Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\"" Jul 12 09:32:04.378636 containerd[1499]: time="2025-07-12T09:32:04.378609532Z" level=info msg="StartContainer for \"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\"" Jul 12 09:32:04.379472 containerd[1499]: time="2025-07-12T09:32:04.379435549Z" level=info msg="connecting to shim 3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077" address="unix:///run/containerd/s/3e1a713dbd9d90a9b8d3a52f9a1ecbf63b5815be9d0bbdb72912c53347bf1c00" protocol=ttrpc version=3 Jul 12 09:32:04.405056 systemd[1]: Started cri-containerd-3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077.scope - libcontainer container 3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077. Jul 12 09:32:04.425689 systemd[1]: cri-containerd-3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077.scope: Deactivated successfully. 
Jul 12 09:32:04.427030 containerd[1499]: time="2025-07-12T09:32:04.426845222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\" id:\"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\" pid:3270 exited_at:{seconds:1752312724 nanos:426418756}" Jul 12 09:32:04.427120 containerd[1499]: time="2025-07-12T09:32:04.426903169Z" level=info msg="received exit event container_id:\"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\" id:\"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\" pid:3270 exited_at:{seconds:1752312724 nanos:426418756}" Jul 12 09:32:04.445468 containerd[1499]: time="2025-07-12T09:32:04.445422937Z" level=info msg="StartContainer for \"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\" returns successfully" Jul 12 09:32:04.829188 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077-rootfs.mount: Deactivated successfully. Jul 12 09:32:05.327299 containerd[1499]: time="2025-07-12T09:32:05.327243150Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 09:32:05.348833 containerd[1499]: time="2025-07-12T09:32:05.348167755Z" level=info msg="Container 9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:05.352128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1445653535.mount: Deactivated successfully. Jul 12 09:32:05.354726 containerd[1499]: time="2025-07-12T09:32:05.354668761Z" level=info msg="CreateContainer within sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\"" Jul 12 09:32:05.355253 containerd[1499]: time="2025-07-12T09:32:05.355217847Z" level=info msg="StartContainer for \"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\"" Jul 12 09:32:05.356394 containerd[1499]: time="2025-07-12T09:32:05.356370767Z" level=info msg="connecting to shim 9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323" address="unix:///run/containerd/s/3e1a713dbd9d90a9b8d3a52f9a1ecbf63b5815be9d0bbdb72912c53347bf1c00" protocol=ttrpc version=3 Jul 12 09:32:05.379090 systemd[1]: Started cri-containerd-9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323.scope - libcontainer container 9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323. 
Jul 12 09:32:05.414584 containerd[1499]: time="2025-07-12T09:32:05.414531940Z" level=info msg="StartContainer for \"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" returns successfully" Jul 12 09:32:05.527068 containerd[1499]: time="2025-07-12T09:32:05.526982172Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" id:\"7dc7e7ecade00284af070c0fee6b616201e9df4858a6c83bc03964b8060e5142\" pid:3337 exited_at:{seconds:1752312725 nanos:526697951}" Jul 12 09:32:05.569997 kubelet[2642]: I0712 09:32:05.569955 2642 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 12 09:32:05.643928 systemd[1]: Created slice kubepods-burstable-pod40dcb9be_9889_47c1_a33a_69017b785470.slice - libcontainer container kubepods-burstable-pod40dcb9be_9889_47c1_a33a_69017b785470.slice. Jul 12 09:32:05.649278 systemd[1]: Created slice kubepods-burstable-pod4f1d8cd4_5ba6_4fdb_9f88_699f27c8755c.slice - libcontainer container kubepods-burstable-pod4f1d8cd4_5ba6_4fdb_9f88_699f27c8755c.slice. Jul 12 09:32:05.676533 kubelet[2642]: I0712 09:32:05.676335 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/40dcb9be-9889-47c1-a33a-69017b785470-config-volume\") pod \"coredns-674b8bbfcf-8wcw2\" (UID: \"40dcb9be-9889-47c1-a33a-69017b785470\") " pod="kube-system/coredns-674b8bbfcf-8wcw2" Jul 12 09:32:05.676533 kubelet[2642]: I0712 09:32:05.676384 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4f1d8cd4-5ba6-4fdb-9f88-699f27c8755c-config-volume\") pod \"coredns-674b8bbfcf-zhxbf\" (UID: \"4f1d8cd4-5ba6-4fdb-9f88-699f27c8755c\") " pod="kube-system/coredns-674b8bbfcf-zhxbf" Jul 12 09:32:05.676533 kubelet[2642]: I0712 09:32:05.676418 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mf44m\" (UniqueName: \"kubernetes.io/projected/4f1d8cd4-5ba6-4fdb-9f88-699f27c8755c-kube-api-access-mf44m\") pod \"coredns-674b8bbfcf-zhxbf\" (UID: \"4f1d8cd4-5ba6-4fdb-9f88-699f27c8755c\") " pod="kube-system/coredns-674b8bbfcf-zhxbf" Jul 12 09:32:05.676533 kubelet[2642]: I0712 09:32:05.676473 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5mxl\" (UniqueName: \"kubernetes.io/projected/40dcb9be-9889-47c1-a33a-69017b785470-kube-api-access-t5mxl\") pod \"coredns-674b8bbfcf-8wcw2\" (UID: \"40dcb9be-9889-47c1-a33a-69017b785470\") " pod="kube-system/coredns-674b8bbfcf-8wcw2" Jul 12 09:32:05.949238 containerd[1499]: time="2025-07-12T09:32:05.949132095Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wcw2,Uid:40dcb9be-9889-47c1-a33a-69017b785470,Namespace:kube-system,Attempt:0,}" Jul 12 09:32:05.957877 containerd[1499]: time="2025-07-12T09:32:05.957191697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhxbf,Uid:4f1d8cd4-5ba6-4fdb-9f88-699f27c8755c,Namespace:kube-system,Attempt:0,}" Jul 12 09:32:06.352020 kubelet[2642]: I0712 09:32:06.351356 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-6frvs" podStartSLOduration=5.112119699 podStartE2EDuration="12.35133741s" podCreationTimestamp="2025-07-12 09:31:54 +0000 UTC" firstStartedPulling="2025-07-12 09:31:54.575122107 +0000 UTC 
m=+6.414727747" lastFinishedPulling="2025-07-12 09:32:01.814339818 +0000 UTC m=+13.653945458" observedRunningTime="2025-07-12 09:32:06.350302132 +0000 UTC m=+18.189907772" watchObservedRunningTime="2025-07-12 09:32:06.35133741 +0000 UTC m=+18.190943050" Jul 12 09:32:07.792861 systemd-networkd[1423]: cilium_host: Link UP Jul 12 09:32:07.793018 systemd-networkd[1423]: cilium_net: Link UP Jul 12 09:32:07.793154 systemd-networkd[1423]: cilium_net: Gained carrier Jul 12 09:32:07.793266 systemd-networkd[1423]: cilium_host: Gained carrier Jul 12 09:32:07.813062 systemd-networkd[1423]: cilium_net: Gained IPv6LL Jul 12 09:32:07.878058 systemd-networkd[1423]: cilium_vxlan: Link UP Jul 12 09:32:07.878067 systemd-networkd[1423]: cilium_vxlan: Gained carrier Jul 12 09:32:08.203989 kernel: NET: Registered PF_ALG protocol family Jul 12 09:32:08.776610 systemd-networkd[1423]: lxc_health: Link UP Jul 12 09:32:08.784166 systemd-networkd[1423]: lxc_health: Gained carrier Jul 12 09:32:08.804364 systemd-networkd[1423]: cilium_host: Gained IPv6LL Jul 12 09:32:09.134939 kernel: eth0: renamed from tmp5b460 Jul 12 09:32:09.137424 systemd-networkd[1423]: lxc595126fbc581: Link UP Jul 12 09:32:09.137704 systemd-networkd[1423]: lxc595126fbc581: Gained carrier Jul 12 09:32:09.139523 systemd-networkd[1423]: lxc1adfc9b598b9: Link UP Jul 12 09:32:09.149955 kernel: eth0: renamed from tmp19b5a Jul 12 09:32:09.150363 systemd-networkd[1423]: lxc1adfc9b598b9: Gained carrier Jul 12 09:32:09.699097 systemd-networkd[1423]: cilium_vxlan: Gained IPv6LL Jul 12 09:32:10.788121 systemd-networkd[1423]: lxc_health: Gained IPv6LL Jul 12 09:32:10.915054 systemd-networkd[1423]: lxc1adfc9b598b9: Gained IPv6LL Jul 12 09:32:10.980051 systemd-networkd[1423]: lxc595126fbc581: Gained IPv6LL Jul 12 09:32:12.582065 containerd[1499]: time="2025-07-12T09:32:12.581982721Z" level=info msg="connecting to shim 5b46029b13dc3c8669136e60c2bc2a80a6db2cd1af3a2e24a3673ced76182132" address="unix:///run/containerd/s/1bcb573ebed1884685da51e80b70993cb6637feef6b11287a0f59f0ba4bc6a1b" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:32:12.582840 containerd[1499]: time="2025-07-12T09:32:12.582780895Z" level=info msg="connecting to shim 19b5ab025c2062431d9040e13ca0c4bbf81f8bd8d53b18aa75bc8e387eadf418" address="unix:///run/containerd/s/3108bb94c70f2bd6f9cd7dce1c6bc7cc030b25282c062718249e991152095c32" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:32:12.609076 systemd[1]: Started cri-containerd-19b5ab025c2062431d9040e13ca0c4bbf81f8bd8d53b18aa75bc8e387eadf418.scope - libcontainer container 19b5ab025c2062431d9040e13ca0c4bbf81f8bd8d53b18aa75bc8e387eadf418. Jul 12 09:32:12.610243 systemd[1]: Started cri-containerd-5b46029b13dc3c8669136e60c2bc2a80a6db2cd1af3a2e24a3673ced76182132.scope - libcontainer container 5b46029b13dc3c8669136e60c2bc2a80a6db2cd1af3a2e24a3673ced76182132. 
Jul 12 09:32:12.621130 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:32:12.622177 systemd-resolved[1353]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 12 09:32:12.643221 containerd[1499]: time="2025-07-12T09:32:12.643184933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-zhxbf,Uid:4f1d8cd4-5ba6-4fdb-9f88-699f27c8755c,Namespace:kube-system,Attempt:0,} returns sandbox id \"5b46029b13dc3c8669136e60c2bc2a80a6db2cd1af3a2e24a3673ced76182132\"" Jul 12 09:32:12.647385 containerd[1499]: time="2025-07-12T09:32:12.647348741Z" level=info msg="CreateContainer within sandbox \"5b46029b13dc3c8669136e60c2bc2a80a6db2cd1af3a2e24a3673ced76182132\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 09:32:12.657403 containerd[1499]: time="2025-07-12T09:32:12.657361214Z" level=info msg="Container 2928f6e829947bd0c5080b4e2ca8e3dd4c44d2d7ef6adb4526ccbfa9ab271af6: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:12.662417 containerd[1499]: time="2025-07-12T09:32:12.662382189Z" level=info msg="CreateContainer within sandbox \"5b46029b13dc3c8669136e60c2bc2a80a6db2cd1af3a2e24a3673ced76182132\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2928f6e829947bd0c5080b4e2ca8e3dd4c44d2d7ef6adb4526ccbfa9ab271af6\"" Jul 12 09:32:12.662973 containerd[1499]: time="2025-07-12T09:32:12.662948354Z" level=info msg="StartContainer for \"2928f6e829947bd0c5080b4e2ca8e3dd4c44d2d7ef6adb4526ccbfa9ab271af6\"" Jul 12 09:32:12.664092 containerd[1499]: time="2025-07-12T09:32:12.664067926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-8wcw2,Uid:40dcb9be-9889-47c1-a33a-69017b785470,Namespace:kube-system,Attempt:0,} returns sandbox id \"19b5ab025c2062431d9040e13ca0c4bbf81f8bd8d53b18aa75bc8e387eadf418\"" Jul 12 09:32:12.664834 containerd[1499]: time="2025-07-12T09:32:12.664806748Z" level=info msg="connecting to shim 2928f6e829947bd0c5080b4e2ca8e3dd4c44d2d7ef6adb4526ccbfa9ab271af6" address="unix:///run/containerd/s/1bcb573ebed1884685da51e80b70993cb6637feef6b11287a0f59f0ba4bc6a1b" protocol=ttrpc version=3 Jul 12 09:32:12.669006 containerd[1499]: time="2025-07-12T09:32:12.668870130Z" level=info msg="CreateContainer within sandbox \"19b5ab025c2062431d9040e13ca0c4bbf81f8bd8d53b18aa75bc8e387eadf418\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 12 09:32:12.676089 containerd[1499]: time="2025-07-12T09:32:12.675885000Z" level=info msg="Container 2b126702e8a42c16b69ccb496c776bd7660c38d85a39a075d8cc0fa09af3a9b1: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:12.682611 containerd[1499]: time="2025-07-12T09:32:12.682565675Z" level=info msg="CreateContainer within sandbox \"19b5ab025c2062431d9040e13ca0c4bbf81f8bd8d53b18aa75bc8e387eadf418\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b126702e8a42c16b69ccb496c776bd7660c38d85a39a075d8cc0fa09af3a9b1\"" Jul 12 09:32:12.683391 containerd[1499]: time="2025-07-12T09:32:12.683360570Z" level=info msg="StartContainer for \"2b126702e8a42c16b69ccb496c776bd7660c38d85a39a075d8cc0fa09af3a9b1\"" Jul 12 09:32:12.684718 containerd[1499]: time="2025-07-12T09:32:12.684694073Z" level=info msg="connecting to shim 2b126702e8a42c16b69ccb496c776bd7660c38d85a39a075d8cc0fa09af3a9b1" address="unix:///run/containerd/s/3108bb94c70f2bd6f9cd7dce1c6bc7cc030b25282c062718249e991152095c32" protocol=ttrpc version=3 Jul 12 09:32:12.690107 
systemd[1]: Started cri-containerd-2928f6e829947bd0c5080b4e2ca8e3dd4c44d2d7ef6adb4526ccbfa9ab271af6.scope - libcontainer container 2928f6e829947bd0c5080b4e2ca8e3dd4c44d2d7ef6adb4526ccbfa9ab271af6. Jul 12 09:32:12.713164 systemd[1]: Started cri-containerd-2b126702e8a42c16b69ccb496c776bd7660c38d85a39a075d8cc0fa09af3a9b1.scope - libcontainer container 2b126702e8a42c16b69ccb496c776bd7660c38d85a39a075d8cc0fa09af3a9b1. Jul 12 09:32:12.722889 containerd[1499]: time="2025-07-12T09:32:12.722849178Z" level=info msg="StartContainer for \"2928f6e829947bd0c5080b4e2ca8e3dd4c44d2d7ef6adb4526ccbfa9ab271af6\" returns successfully" Jul 12 09:32:12.745358 containerd[1499]: time="2025-07-12T09:32:12.745084072Z" level=info msg="StartContainer for \"2b126702e8a42c16b69ccb496c776bd7660c38d85a39a075d8cc0fa09af3a9b1\" returns successfully" Jul 12 09:32:13.379775 kubelet[2642]: I0712 09:32:13.379703 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-8wcw2" podStartSLOduration=19.379686852 podStartE2EDuration="19.379686852s" podCreationTimestamp="2025-07-12 09:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:32:13.378292545 +0000 UTC m=+25.217898185" watchObservedRunningTime="2025-07-12 09:32:13.379686852 +0000 UTC m=+25.219292452" Jul 12 09:32:15.997700 systemd[1]: Started sshd@7-10.0.0.39:22-10.0.0.1:34272.service - OpenSSH per-connection server daemon (10.0.0.1:34272). Jul 12 09:32:16.061107 sshd[3989]: Accepted publickey for core from 10.0.0.1 port 34272 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:16.062185 sshd-session[3989]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:16.066257 systemd-logind[1480]: New session 8 of user core. Jul 12 09:32:16.085090 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 12 09:32:16.223799 sshd[3992]: Connection closed by 10.0.0.1 port 34272 Jul 12 09:32:16.224246 sshd-session[3989]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:16.228275 systemd[1]: sshd@7-10.0.0.39:22-10.0.0.1:34272.service: Deactivated successfully. Jul 12 09:32:16.230137 systemd[1]: session-8.scope: Deactivated successfully. Jul 12 09:32:16.231045 systemd-logind[1480]: Session 8 logged out. Waiting for processes to exit. Jul 12 09:32:16.232449 systemd-logind[1480]: Removed session 8. Jul 12 09:32:21.242286 systemd[1]: Started sshd@8-10.0.0.39:22-10.0.0.1:34276.service - OpenSSH per-connection server daemon (10.0.0.1:34276). Jul 12 09:32:21.288067 sshd[4009]: Accepted publickey for core from 10.0.0.1 port 34276 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:21.289236 sshd-session[4009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:21.292761 systemd-logind[1480]: New session 9 of user core. Jul 12 09:32:21.307118 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 12 09:32:21.418059 sshd[4012]: Connection closed by 10.0.0.1 port 34276 Jul 12 09:32:21.418392 sshd-session[4009]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:21.421774 systemd[1]: sshd@8-10.0.0.39:22-10.0.0.1:34276.service: Deactivated successfully. Jul 12 09:32:21.424072 systemd[1]: session-9.scope: Deactivated successfully. Jul 12 09:32:21.425019 systemd-logind[1480]: Session 9 logged out. Waiting for processes to exit. 
Jul 12 09:32:21.426159 systemd-logind[1480]: Removed session 9. Jul 12 09:32:26.440985 systemd[1]: Started sshd@9-10.0.0.39:22-10.0.0.1:55586.service - OpenSSH per-connection server daemon (10.0.0.1:55586). Jul 12 09:32:26.498178 sshd[4029]: Accepted publickey for core from 10.0.0.1 port 55586 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:26.499285 sshd-session[4029]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:26.502934 systemd-logind[1480]: New session 10 of user core. Jul 12 09:32:26.513068 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 12 09:32:26.624972 sshd[4032]: Connection closed by 10.0.0.1 port 55586 Jul 12 09:32:26.626194 sshd-session[4029]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:26.633077 systemd[1]: sshd@9-10.0.0.39:22-10.0.0.1:55586.service: Deactivated successfully. Jul 12 09:32:26.634624 systemd[1]: session-10.scope: Deactivated successfully. Jul 12 09:32:26.635332 systemd-logind[1480]: Session 10 logged out. Waiting for processes to exit. Jul 12 09:32:26.637587 systemd[1]: Started sshd@10-10.0.0.39:22-10.0.0.1:55590.service - OpenSSH per-connection server daemon (10.0.0.1:55590). Jul 12 09:32:26.638088 systemd-logind[1480]: Removed session 10. Jul 12 09:32:26.689476 sshd[4046]: Accepted publickey for core from 10.0.0.1 port 55590 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:26.690508 sshd-session[4046]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:26.694083 systemd-logind[1480]: New session 11 of user core. Jul 12 09:32:26.708127 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 12 09:32:26.863997 sshd[4049]: Connection closed by 10.0.0.1 port 55590 Jul 12 09:32:26.864419 sshd-session[4046]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:26.878139 systemd[1]: sshd@10-10.0.0.39:22-10.0.0.1:55590.service: Deactivated successfully. Jul 12 09:32:26.881443 systemd[1]: session-11.scope: Deactivated successfully. Jul 12 09:32:26.885287 systemd-logind[1480]: Session 11 logged out. Waiting for processes to exit. Jul 12 09:32:26.889255 systemd[1]: Started sshd@11-10.0.0.39:22-10.0.0.1:55592.service - OpenSSH per-connection server daemon (10.0.0.1:55592). Jul 12 09:32:26.890003 systemd-logind[1480]: Removed session 11. Jul 12 09:32:26.944795 sshd[4060]: Accepted publickey for core from 10.0.0.1 port 55592 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:26.946004 sshd-session[4060]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:26.949934 systemd-logind[1480]: New session 12 of user core. Jul 12 09:32:26.962080 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 12 09:32:27.084630 sshd[4063]: Connection closed by 10.0.0.1 port 55592 Jul 12 09:32:27.084965 sshd-session[4060]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:27.088711 systemd[1]: sshd@11-10.0.0.39:22-10.0.0.1:55592.service: Deactivated successfully. Jul 12 09:32:27.090388 systemd[1]: session-12.scope: Deactivated successfully. Jul 12 09:32:27.091077 systemd-logind[1480]: Session 12 logged out. Waiting for processes to exit. Jul 12 09:32:27.092531 systemd-logind[1480]: Removed session 12. Jul 12 09:32:32.100817 systemd[1]: Started sshd@12-10.0.0.39:22-10.0.0.1:55600.service - OpenSSH per-connection server daemon (10.0.0.1:55600). 
Jul 12 09:32:32.149050 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 55600 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:32.150325 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:32.157307 systemd-logind[1480]: New session 13 of user core. Jul 12 09:32:32.165100 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 12 09:32:32.284008 sshd[4080]: Connection closed by 10.0.0.1 port 55600 Jul 12 09:32:32.283837 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:32.288239 systemd[1]: sshd@12-10.0.0.39:22-10.0.0.1:55600.service: Deactivated successfully. Jul 12 09:32:32.289800 systemd[1]: session-13.scope: Deactivated successfully. Jul 12 09:32:32.290793 systemd-logind[1480]: Session 13 logged out. Waiting for processes to exit. Jul 12 09:32:32.291882 systemd-logind[1480]: Removed session 13. Jul 12 09:32:37.305224 systemd[1]: Started sshd@13-10.0.0.39:22-10.0.0.1:52510.service - OpenSSH per-connection server daemon (10.0.0.1:52510). Jul 12 09:32:37.357331 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 52510 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:37.358456 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:37.363865 systemd-logind[1480]: New session 14 of user core. Jul 12 09:32:37.377011 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 12 09:32:37.495099 sshd[4096]: Connection closed by 10.0.0.1 port 52510 Jul 12 09:32:37.495436 sshd-session[4093]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:37.506874 systemd[1]: sshd@13-10.0.0.39:22-10.0.0.1:52510.service: Deactivated successfully. Jul 12 09:32:37.509760 systemd[1]: session-14.scope: Deactivated successfully. Jul 12 09:32:37.511409 systemd-logind[1480]: Session 14 logged out. Waiting for processes to exit. Jul 12 09:32:37.513125 systemd-logind[1480]: Removed session 14. Jul 12 09:32:37.515160 systemd[1]: Started sshd@14-10.0.0.39:22-10.0.0.1:52520.service - OpenSSH per-connection server daemon (10.0.0.1:52520). Jul 12 09:32:37.572105 sshd[4109]: Accepted publickey for core from 10.0.0.1 port 52520 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:37.570383 sshd-session[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:37.578118 systemd-logind[1480]: New session 15 of user core. Jul 12 09:32:37.585064 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 12 09:32:37.772951 sshd[4112]: Connection closed by 10.0.0.1 port 52520 Jul 12 09:32:37.771949 sshd-session[4109]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:37.785799 systemd[1]: sshd@14-10.0.0.39:22-10.0.0.1:52520.service: Deactivated successfully. Jul 12 09:32:37.788268 systemd[1]: session-15.scope: Deactivated successfully. Jul 12 09:32:37.792842 systemd-logind[1480]: Session 15 logged out. Waiting for processes to exit. Jul 12 09:32:37.794181 systemd[1]: Started sshd@15-10.0.0.39:22-10.0.0.1:52532.service - OpenSSH per-connection server daemon (10.0.0.1:52532). Jul 12 09:32:37.795268 systemd-logind[1480]: Removed session 15. 
Jul 12 09:32:37.857961 sshd[4123]: Accepted publickey for core from 10.0.0.1 port 52532 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:37.859201 sshd-session[4123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:37.864093 systemd-logind[1480]: New session 16 of user core. Jul 12 09:32:37.870047 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 12 09:32:38.636480 sshd[4126]: Connection closed by 10.0.0.1 port 52532 Jul 12 09:32:38.636731 sshd-session[4123]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:38.648389 systemd[1]: sshd@15-10.0.0.39:22-10.0.0.1:52532.service: Deactivated successfully. Jul 12 09:32:38.652629 systemd[1]: session-16.scope: Deactivated successfully. Jul 12 09:32:38.654589 systemd-logind[1480]: Session 16 logged out. Waiting for processes to exit. Jul 12 09:32:38.660442 systemd[1]: Started sshd@16-10.0.0.39:22-10.0.0.1:52544.service - OpenSSH per-connection server daemon (10.0.0.1:52544). Jul 12 09:32:38.664215 systemd-logind[1480]: Removed session 16. Jul 12 09:32:38.712407 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 52544 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:38.713595 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:38.718468 systemd-logind[1480]: New session 17 of user core. Jul 12 09:32:38.727079 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 12 09:32:38.939473 sshd[4147]: Connection closed by 10.0.0.1 port 52544 Jul 12 09:32:38.940493 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:38.948910 systemd[1]: sshd@16-10.0.0.39:22-10.0.0.1:52544.service: Deactivated successfully. Jul 12 09:32:38.950701 systemd[1]: session-17.scope: Deactivated successfully. Jul 12 09:32:38.951844 systemd-logind[1480]: Session 17 logged out. Waiting for processes to exit. Jul 12 09:32:38.954152 systemd[1]: Started sshd@17-10.0.0.39:22-10.0.0.1:52556.service - OpenSSH per-connection server daemon (10.0.0.1:52556). Jul 12 09:32:38.957031 systemd-logind[1480]: Removed session 17. Jul 12 09:32:39.019816 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 52556 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:39.021085 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:39.025471 systemd-logind[1480]: New session 18 of user core. Jul 12 09:32:39.037104 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 12 09:32:39.142470 sshd[4162]: Connection closed by 10.0.0.1 port 52556 Jul 12 09:32:39.143001 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:39.146446 systemd[1]: sshd@17-10.0.0.39:22-10.0.0.1:52556.service: Deactivated successfully. Jul 12 09:32:39.149754 systemd[1]: session-18.scope: Deactivated successfully. Jul 12 09:32:39.151612 systemd-logind[1480]: Session 18 logged out. Waiting for processes to exit. Jul 12 09:32:39.153638 systemd-logind[1480]: Removed session 18. Jul 12 09:32:44.158209 systemd[1]: Started sshd@18-10.0.0.39:22-10.0.0.1:34430.service - OpenSSH per-connection server daemon (10.0.0.1:34430). 
Jul 12 09:32:44.217851 sshd[4179]: Accepted publickey for core from 10.0.0.1 port 34430 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:44.219146 sshd-session[4179]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:44.223952 systemd-logind[1480]: New session 19 of user core. Jul 12 09:32:44.233066 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 12 09:32:44.343229 sshd[4182]: Connection closed by 10.0.0.1 port 34430 Jul 12 09:32:44.343587 sshd-session[4179]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:44.347638 systemd[1]: sshd@18-10.0.0.39:22-10.0.0.1:34430.service: Deactivated successfully. Jul 12 09:32:44.347779 systemd-logind[1480]: Session 19 logged out. Waiting for processes to exit. Jul 12 09:32:44.350668 systemd[1]: session-19.scope: Deactivated successfully. Jul 12 09:32:44.352117 systemd-logind[1480]: Removed session 19. Jul 12 09:32:49.359029 systemd[1]: Started sshd@19-10.0.0.39:22-10.0.0.1:34440.service - OpenSSH per-connection server daemon (10.0.0.1:34440). Jul 12 09:32:49.404424 sshd[4197]: Accepted publickey for core from 10.0.0.1 port 34440 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:49.405450 sshd-session[4197]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:49.408898 systemd-logind[1480]: New session 20 of user core. Jul 12 09:32:49.416062 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 12 09:32:49.521338 sshd[4200]: Connection closed by 10.0.0.1 port 34440 Jul 12 09:32:49.521817 sshd-session[4197]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:49.535144 systemd[1]: sshd@19-10.0.0.39:22-10.0.0.1:34440.service: Deactivated successfully. Jul 12 09:32:49.536816 systemd[1]: session-20.scope: Deactivated successfully. Jul 12 09:32:49.537591 systemd-logind[1480]: Session 20 logged out. Waiting for processes to exit. Jul 12 09:32:49.539866 systemd[1]: Started sshd@20-10.0.0.39:22-10.0.0.1:34444.service - OpenSSH per-connection server daemon (10.0.0.1:34444). Jul 12 09:32:49.540552 systemd-logind[1480]: Removed session 20. Jul 12 09:32:49.598342 sshd[4213]: Accepted publickey for core from 10.0.0.1 port 34444 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:49.599499 sshd-session[4213]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:49.603830 systemd-logind[1480]: New session 21 of user core. Jul 12 09:32:49.615056 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jul 12 09:32:51.301521 kubelet[2642]: I0712 09:32:51.301452 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-zhxbf" podStartSLOduration=57.301436261 podStartE2EDuration="57.301436261s" podCreationTimestamp="2025-07-12 09:31:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:32:13.402782024 +0000 UTC m=+25.242387664" watchObservedRunningTime="2025-07-12 09:32:51.301436261 +0000 UTC m=+63.141041901" Jul 12 09:32:51.313551 containerd[1499]: time="2025-07-12T09:32:51.313484023Z" level=info msg="StopContainer for \"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" with timeout 30 (s)" Jul 12 09:32:51.314680 containerd[1499]: time="2025-07-12T09:32:51.314591728Z" level=info msg="Stop container \"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" with signal terminated" Jul 12 09:32:51.325473 systemd[1]: cri-containerd-2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243.scope: Deactivated successfully. Jul 12 09:32:51.328192 containerd[1499]: time="2025-07-12T09:32:51.328159590Z" level=info msg="received exit event container_id:\"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" id:\"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" pid:3180 exited_at:{seconds:1752312771 nanos:327336880}" Jul 12 09:32:51.328419 containerd[1499]: time="2025-07-12T09:32:51.328385627Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" id:\"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" pid:3180 exited_at:{seconds:1752312771 nanos:327336880}" Jul 12 09:32:51.350039 containerd[1499]: time="2025-07-12T09:32:51.349984262Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 12 09:32:51.350111 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243-rootfs.mount: Deactivated successfully. 
Jul 12 09:32:51.359925 containerd[1499]: time="2025-07-12T09:32:51.359873332Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" id:\"d8e91d89154df017fb73e12a44052afa3e9a975be01c7ee07ca50175e49196bb\" pid:4249 exited_at:{seconds:1752312771 nanos:354231046}" Jul 12 09:32:51.361845 containerd[1499]: time="2025-07-12T09:32:51.361793027Z" level=info msg="StopContainer for \"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" with timeout 2 (s)" Jul 12 09:32:51.362097 containerd[1499]: time="2025-07-12T09:32:51.362048903Z" level=info msg="Stop container \"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" with signal terminated" Jul 12 09:32:51.362450 containerd[1499]: time="2025-07-12T09:32:51.362407019Z" level=info msg="StopContainer for \"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" returns successfully" Jul 12 09:32:51.363217 containerd[1499]: time="2025-07-12T09:32:51.363154009Z" level=info msg="StopPodSandbox for \"37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19\"" Jul 12 09:32:51.370047 containerd[1499]: time="2025-07-12T09:32:51.370006799Z" level=info msg="Container to stop \"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 09:32:51.371343 systemd-networkd[1423]: lxc_health: Link DOWN Jul 12 09:32:51.371349 systemd-networkd[1423]: lxc_health: Lost carrier Jul 12 09:32:51.379414 systemd[1]: cri-containerd-37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19.scope: Deactivated successfully. Jul 12 09:32:51.381537 containerd[1499]: time="2025-07-12T09:32:51.381439968Z" level=info msg="TaskExit event in podsandbox handler container_id:\"37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19\" id:\"37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19\" pid:2928 exit_status:137 exited_at:{seconds:1752312771 nanos:379865789}" Jul 12 09:32:51.386231 systemd[1]: cri-containerd-9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323.scope: Deactivated successfully. Jul 12 09:32:51.387002 systemd[1]: cri-containerd-9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323.scope: Consumed 6.519s CPU time, 122.5M memory peak, 156K read from disk, 14.3M written to disk. Jul 12 09:32:51.387928 containerd[1499]: time="2025-07-12T09:32:51.387826004Z" level=info msg="received exit event container_id:\"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" id:\"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" pid:3306 exited_at:{seconds:1752312771 nanos:387592207}" Jul 12 09:32:51.407699 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19-rootfs.mount: Deactivated successfully. Jul 12 09:32:51.410600 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323-rootfs.mount: Deactivated successfully. 
Jul 12 09:32:51.416899 containerd[1499]: time="2025-07-12T09:32:51.416865862Z" level=info msg="shim disconnected" id=37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19 namespace=k8s.io Jul 12 09:32:51.417074 containerd[1499]: time="2025-07-12T09:32:51.416896261Z" level=warning msg="cleaning up after shim disconnected" id=37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19 namespace=k8s.io Jul 12 09:32:51.417074 containerd[1499]: time="2025-07-12T09:32:51.416970780Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 09:32:51.423243 containerd[1499]: time="2025-07-12T09:32:51.423204298Z" level=info msg="StopContainer for \"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" returns successfully" Jul 12 09:32:51.423733 containerd[1499]: time="2025-07-12T09:32:51.423710892Z" level=info msg="StopPodSandbox for \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\"" Jul 12 09:32:51.423791 containerd[1499]: time="2025-07-12T09:32:51.423774611Z" level=info msg="Container to stop \"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 09:32:51.423822 containerd[1499]: time="2025-07-12T09:32:51.423791411Z" level=info msg="Container to stop \"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 09:32:51.423822 containerd[1499]: time="2025-07-12T09:32:51.423801090Z" level=info msg="Container to stop \"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 09:32:51.423822 containerd[1499]: time="2025-07-12T09:32:51.423810130Z" level=info msg="Container to stop \"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 09:32:51.423822 containerd[1499]: time="2025-07-12T09:32:51.423817810Z" level=info msg="Container to stop \"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 12 09:32:51.431043 systemd[1]: cri-containerd-1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61.scope: Deactivated successfully. Jul 12 09:32:51.451616 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61-rootfs.mount: Deactivated successfully. 
Jul 12 09:32:51.453887 containerd[1499]: time="2025-07-12T09:32:51.453841775Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" id:\"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" pid:3306 exited_at:{seconds:1752312771 nanos:387592207}" Jul 12 09:32:51.454027 containerd[1499]: time="2025-07-12T09:32:51.453895134Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" id:\"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" pid:2805 exit_status:137 exited_at:{seconds:1752312771 nanos:431008476}" Jul 12 09:32:51.455192 containerd[1499]: time="2025-07-12T09:32:51.455056079Z" level=info msg="TearDown network for sandbox \"37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19\" successfully" Jul 12 09:32:51.455192 containerd[1499]: time="2025-07-12T09:32:51.455086919Z" level=info msg="StopPodSandbox for \"37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19\" returns successfully" Jul 12 09:32:51.455329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19-shm.mount: Deactivated successfully. Jul 12 09:32:51.465220 containerd[1499]: time="2025-07-12T09:32:51.465158026Z" level=info msg="received exit event sandbox_id:\"37eb84fad8df360310aa973d24317bc381490c695c454a766afc8b13cad72e19\" exit_status:137 exited_at:{seconds:1752312771 nanos:379865789}" Jul 12 09:32:51.496736 containerd[1499]: time="2025-07-12T09:32:51.496699691Z" level=info msg="received exit event sandbox_id:\"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" exit_status:137 exited_at:{seconds:1752312771 nanos:431008476}" Jul 12 09:32:51.498233 containerd[1499]: time="2025-07-12T09:32:51.498185871Z" level=info msg="TearDown network for sandbox \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" successfully" Jul 12 09:32:51.498233 containerd[1499]: time="2025-07-12T09:32:51.498211351Z" level=info msg="StopPodSandbox for \"1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61\" returns successfully" Jul 12 09:32:51.498471 containerd[1499]: time="2025-07-12T09:32:51.498357109Z" level=info msg="shim disconnected" id=1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61 namespace=k8s.io Jul 12 09:32:51.498471 containerd[1499]: time="2025-07-12T09:32:51.498370629Z" level=warning msg="cleaning up after shim disconnected" id=1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61 namespace=k8s.io Jul 12 09:32:51.498471 containerd[1499]: time="2025-07-12T09:32:51.498398068Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 12 09:32:51.557310 kubelet[2642]: I0712 09:32:51.556941 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-lib-modules\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557310 kubelet[2642]: I0712 09:32:51.556989 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-hostproc\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557310 kubelet[2642]: I0712 09:32:51.557013 2642 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88ba0aec-98c8-414e-af52-fd66fcc62f70-cilium-config-path\") pod \"88ba0aec-98c8-414e-af52-fd66fcc62f70\" (UID: \"88ba0aec-98c8-414e-af52-fd66fcc62f70\") " Jul 12 09:32:51.557310 kubelet[2642]: I0712 09:32:51.557032 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-xtables-lock\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557310 kubelet[2642]: I0712 09:32:51.557046 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-cgroup\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557310 kubelet[2642]: I0712 09:32:51.557066 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bxzft\" (UniqueName: \"kubernetes.io/projected/c2cce705-3a2b-4f07-b418-18dc3f9ae873-kube-api-access-bxzft\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557730 kubelet[2642]: I0712 09:32:51.557090 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-run\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557730 kubelet[2642]: I0712 09:32:51.557108 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-host-proc-sys-net\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557730 kubelet[2642]: I0712 09:32:51.557126 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2cce705-3a2b-4f07-b418-18dc3f9ae873-clustermesh-secrets\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557730 kubelet[2642]: I0712 09:32:51.557139 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-etc-cni-netd\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557730 kubelet[2642]: I0712 09:32:51.557153 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-bpf-maps\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557730 kubelet[2642]: I0712 09:32:51.557169 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-host-proc-sys-kernel\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557855 kubelet[2642]: I0712 09:32:51.557182 2642 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cni-path\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557855 kubelet[2642]: I0712 09:32:51.557201 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-config-path\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.557855 kubelet[2642]: I0712 09:32:51.557220 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2z4kg\" (UniqueName: \"kubernetes.io/projected/88ba0aec-98c8-414e-af52-fd66fcc62f70-kube-api-access-2z4kg\") pod \"88ba0aec-98c8-414e-af52-fd66fcc62f70\" (UID: \"88ba0aec-98c8-414e-af52-fd66fcc62f70\") " Jul 12 09:32:51.557855 kubelet[2642]: I0712 09:32:51.557237 2642 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2cce705-3a2b-4f07-b418-18dc3f9ae873-hubble-tls\") pod \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\" (UID: \"c2cce705-3a2b-4f07-b418-18dc3f9ae873\") " Jul 12 09:32:51.564531 kubelet[2642]: I0712 09:32:51.564161 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.564630 kubelet[2642]: I0712 09:32:51.564565 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.564630 kubelet[2642]: I0712 09:32:51.564623 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.564673 kubelet[2642]: I0712 09:32:51.564638 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-hostproc" (OuterVolumeSpecName: "hostproc") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.564673 kubelet[2642]: I0712 09:32:51.564624 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.564716 kubelet[2642]: I0712 09:32:51.564671 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.564716 kubelet[2642]: I0712 09:32:51.564701 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cni-path" (OuterVolumeSpecName: "cni-path") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.564805 kubelet[2642]: I0712 09:32:51.564716 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.564805 kubelet[2642]: I0712 09:32:51.564743 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.564805 kubelet[2642]: I0712 09:32:51.564748 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 12 09:32:51.573276 kubelet[2642]: I0712 09:32:51.572028 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88ba0aec-98c8-414e-af52-fd66fcc62f70-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "88ba0aec-98c8-414e-af52-fd66fcc62f70" (UID: "88ba0aec-98c8-414e-af52-fd66fcc62f70"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 09:32:51.574288 kubelet[2642]: I0712 09:32:51.574250 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c2cce705-3a2b-4f07-b418-18dc3f9ae873-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 12 09:32:51.575020 kubelet[2642]: I0712 09:32:51.574993 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2cce705-3a2b-4f07-b418-18dc3f9ae873-kube-api-access-bxzft" (OuterVolumeSpecName: "kube-api-access-bxzft") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "kube-api-access-bxzft". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 09:32:51.575194 kubelet[2642]: I0712 09:32:51.575178 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2cce705-3a2b-4f07-b418-18dc3f9ae873-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 09:32:51.575634 kubelet[2642]: I0712 09:32:51.575587 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c2cce705-3a2b-4f07-b418-18dc3f9ae873" (UID: "c2cce705-3a2b-4f07-b418-18dc3f9ae873"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 12 09:32:51.576218 kubelet[2642]: I0712 09:32:51.576193 2642 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88ba0aec-98c8-414e-af52-fd66fcc62f70-kube-api-access-2z4kg" (OuterVolumeSpecName: "kube-api-access-2z4kg") pod "88ba0aec-98c8-414e-af52-fd66fcc62f70" (UID: "88ba0aec-98c8-414e-af52-fd66fcc62f70"). InnerVolumeSpecName "kube-api-access-2z4kg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 12 09:32:51.657627 kubelet[2642]: I0712 09:32:51.657585 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657627 kubelet[2642]: I0712 09:32:51.657619 2642 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2z4kg\" (UniqueName: \"kubernetes.io/projected/88ba0aec-98c8-414e-af52-fd66fcc62f70-kube-api-access-2z4kg\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657627 kubelet[2642]: I0712 09:32:51.657630 2642 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c2cce705-3a2b-4f07-b418-18dc3f9ae873-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657627 kubelet[2642]: I0712 09:32:51.657638 2642 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657826 kubelet[2642]: I0712 09:32:51.657651 2642 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657826 kubelet[2642]: I0712 09:32:51.657659 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/88ba0aec-98c8-414e-af52-fd66fcc62f70-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657826 kubelet[2642]: I0712 09:32:51.657667 2642 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657826 kubelet[2642]: I0712 09:32:51.657676 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-cgroup\") on node \"localhost\" DevicePath 
\"\"" Jul 12 09:32:51.657826 kubelet[2642]: I0712 09:32:51.657684 2642 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bxzft\" (UniqueName: \"kubernetes.io/projected/c2cce705-3a2b-4f07-b418-18dc3f9ae873-kube-api-access-bxzft\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657826 kubelet[2642]: I0712 09:32:51.657691 2642 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657826 kubelet[2642]: I0712 09:32:51.657698 2642 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.657826 kubelet[2642]: I0712 09:32:51.657706 2642 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c2cce705-3a2b-4f07-b418-18dc3f9ae873-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.658009 kubelet[2642]: I0712 09:32:51.657713 2642 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.658009 kubelet[2642]: I0712 09:32:51.657720 2642 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.658009 kubelet[2642]: I0712 09:32:51.657728 2642 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:51.658009 kubelet[2642]: I0712 09:32:51.657735 2642 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c2cce705-3a2b-4f07-b418-18dc3f9ae873-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 12 09:32:52.266448 systemd[1]: Removed slice kubepods-burstable-podc2cce705_3a2b_4f07_b418_18dc3f9ae873.slice - libcontainer container kubepods-burstable-podc2cce705_3a2b_4f07_b418_18dc3f9ae873.slice. Jul 12 09:32:52.266561 systemd[1]: kubepods-burstable-podc2cce705_3a2b_4f07_b418_18dc3f9ae873.slice: Consumed 6.652s CPU time, 122.9M memory peak, 160K read from disk, 14.4M written to disk. Jul 12 09:32:52.270104 systemd[1]: Removed slice kubepods-besteffort-pod88ba0aec_98c8_414e_af52_fd66fcc62f70.slice - libcontainer container kubepods-besteffort-pod88ba0aec_98c8_414e_af52_fd66fcc62f70.slice. Jul 12 09:32:52.350171 systemd[1]: var-lib-kubelet-pods-88ba0aec\x2d98c8\x2d414e\x2daf52\x2dfd66fcc62f70-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2z4kg.mount: Deactivated successfully. Jul 12 09:32:52.350267 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1cde4952f2d384c1e93d28ca5123e76aa06164b111a505f1437fcba55b883a61-shm.mount: Deactivated successfully. Jul 12 09:32:52.350323 systemd[1]: var-lib-kubelet-pods-c2cce705\x2d3a2b\x2d4f07\x2db418\x2d18dc3f9ae873-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbxzft.mount: Deactivated successfully. 
Jul 12 09:32:52.350376 systemd[1]: var-lib-kubelet-pods-c2cce705\x2d3a2b\x2d4f07\x2db418\x2d18dc3f9ae873-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 12 09:32:52.350419 systemd[1]: var-lib-kubelet-pods-c2cce705\x2d3a2b\x2d4f07\x2db418\x2d18dc3f9ae873-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 12 09:32:52.465045 kubelet[2642]: I0712 09:32:52.465020 2642 scope.go:117] "RemoveContainer" containerID="2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243" Jul 12 09:32:52.466572 containerd[1499]: time="2025-07-12T09:32:52.466532470Z" level=info msg="RemoveContainer for \"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\"" Jul 12 09:32:52.475117 containerd[1499]: time="2025-07-12T09:32:52.475046841Z" level=info msg="RemoveContainer for \"2a2890ad8a386aea94451e8dc266912262bbceb826ee14525219ade3591f0243\" returns successfully" Jul 12 09:32:52.475374 kubelet[2642]: I0712 09:32:52.475324 2642 scope.go:117] "RemoveContainer" containerID="9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323" Jul 12 09:32:52.478361 containerd[1499]: time="2025-07-12T09:32:52.477806965Z" level=info msg="RemoveContainer for \"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\"" Jul 12 09:32:52.481907 containerd[1499]: time="2025-07-12T09:32:52.481718635Z" level=info msg="RemoveContainer for \"9ea9ad7ebf15c98e8ee2e6f1554ae6c2b9a45b0db14b3be08f67cb96e44c0323\" returns successfully" Jul 12 09:32:52.481988 kubelet[2642]: I0712 09:32:52.481880 2642 scope.go:117] "RemoveContainer" containerID="3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077" Jul 12 09:32:52.493450 containerd[1499]: time="2025-07-12T09:32:52.492751373Z" level=info msg="RemoveContainer for \"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\"" Jul 12 09:32:52.496482 containerd[1499]: time="2025-07-12T09:32:52.496328327Z" level=info msg="RemoveContainer for \"3bb3423e0f47f67a931dd936ed18408b6cda6c784347c580127d8f941c944077\" returns successfully" Jul 12 09:32:52.496539 kubelet[2642]: I0712 09:32:52.496498 2642 scope.go:117] "RemoveContainer" containerID="dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca" Jul 12 09:32:52.504798 containerd[1499]: time="2025-07-12T09:32:52.504762739Z" level=info msg="RemoveContainer for \"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\"" Jul 12 09:32:52.509402 containerd[1499]: time="2025-07-12T09:32:52.509368000Z" level=info msg="RemoveContainer for \"dfc2d90cb007c57ac5729ab975544b9434fa4f346e5536e33a59f36bfb44d6ca\" returns successfully" Jul 12 09:32:52.511074 kubelet[2642]: I0712 09:32:52.511046 2642 scope.go:117] "RemoveContainer" containerID="2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b" Jul 12 09:32:52.514052 containerd[1499]: time="2025-07-12T09:32:52.514004820Z" level=info msg="RemoveContainer for \"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\"" Jul 12 09:32:52.518151 containerd[1499]: time="2025-07-12T09:32:52.518053528Z" level=info msg="RemoveContainer for \"2e5b95277b6383ec02ffb9418434de92f05401de56d4341ebd6f45f40360f73b\" returns successfully" Jul 12 09:32:52.518467 kubelet[2642]: I0712 09:32:52.518264 2642 scope.go:117] "RemoveContainer" containerID="cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626" Jul 12 09:32:52.519829 containerd[1499]: time="2025-07-12T09:32:52.519804426Z" level=info msg="RemoveContainer for 
\"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\"" Jul 12 09:32:52.522830 containerd[1499]: time="2025-07-12T09:32:52.522794347Z" level=info msg="RemoveContainer for \"cbcb2df0e256095b7030b599001c5eca505b559b2a19df14ffd5c9678569b626\" returns successfully" Jul 12 09:32:53.274452 sshd[4216]: Connection closed by 10.0.0.1 port 34444 Jul 12 09:32:53.275777 sshd-session[4213]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:53.288081 systemd[1]: sshd@20-10.0.0.39:22-10.0.0.1:34444.service: Deactivated successfully. Jul 12 09:32:53.289792 systemd[1]: session-21.scope: Deactivated successfully. Jul 12 09:32:53.290084 systemd[1]: session-21.scope: Consumed 1.038s CPU time, 23.8M memory peak. Jul 12 09:32:53.290595 systemd-logind[1480]: Session 21 logged out. Waiting for processes to exit. Jul 12 09:32:53.293164 systemd[1]: Started sshd@21-10.0.0.39:22-10.0.0.1:42696.service - OpenSSH per-connection server daemon (10.0.0.1:42696). Jul 12 09:32:53.293792 systemd-logind[1480]: Removed session 21. Jul 12 09:32:53.299470 kubelet[2642]: E0712 09:32:53.299427 2642 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 09:32:53.353441 sshd[4368]: Accepted publickey for core from 10.0.0.1 port 42696 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:53.354592 sshd-session[4368]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:53.358246 systemd-logind[1480]: New session 22 of user core. Jul 12 09:32:53.366053 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 12 09:32:54.133144 sshd[4371]: Connection closed by 10.0.0.1 port 42696 Jul 12 09:32:54.134402 sshd-session[4368]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:54.145351 systemd[1]: sshd@21-10.0.0.39:22-10.0.0.1:42696.service: Deactivated successfully. Jul 12 09:32:54.150323 systemd[1]: session-22.scope: Deactivated successfully. Jul 12 09:32:54.151387 systemd-logind[1480]: Session 22 logged out. Waiting for processes to exit. Jul 12 09:32:54.160486 systemd[1]: Started sshd@22-10.0.0.39:22-10.0.0.1:42712.service - OpenSSH per-connection server daemon (10.0.0.1:42712). Jul 12 09:32:54.161997 systemd-logind[1480]: Removed session 22. Jul 12 09:32:54.178599 systemd[1]: Created slice kubepods-burstable-pod0e7dd98e_da01_4066_9ad5_ee072528a8ed.slice - libcontainer container kubepods-burstable-pod0e7dd98e_da01_4066_9ad5_ee072528a8ed.slice. Jul 12 09:32:54.227031 sshd[4383]: Accepted publickey for core from 10.0.0.1 port 42712 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:54.227786 sshd-session[4383]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:54.231769 systemd-logind[1480]: New session 23 of user core. Jul 12 09:32:54.245104 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jul 12 09:32:54.260247 kubelet[2642]: I0712 09:32:54.260205 2642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88ba0aec-98c8-414e-af52-fd66fcc62f70" path="/var/lib/kubelet/pods/88ba0aec-98c8-414e-af52-fd66fcc62f70/volumes" Jul 12 09:32:54.261159 kubelet[2642]: I0712 09:32:54.261138 2642 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2cce705-3a2b-4f07-b418-18dc3f9ae873" path="/var/lib/kubelet/pods/c2cce705-3a2b-4f07-b418-18dc3f9ae873/volumes" Jul 12 09:32:54.271235 kubelet[2642]: I0712 09:32:54.271149 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-lib-modules\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271235 kubelet[2642]: I0712 09:32:54.271188 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-xtables-lock\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271235 kubelet[2642]: I0712 09:32:54.271207 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-host-proc-sys-net\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271352 kubelet[2642]: I0712 09:32:54.271271 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0e7dd98e-da01-4066-9ad5-ee072528a8ed-cilium-ipsec-secrets\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271352 kubelet[2642]: I0712 09:32:54.271313 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-hostproc\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271352 kubelet[2642]: I0712 09:32:54.271336 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-cilium-cgroup\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271448 kubelet[2642]: I0712 09:32:54.271353 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0e7dd98e-da01-4066-9ad5-ee072528a8ed-clustermesh-secrets\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271448 kubelet[2642]: I0712 09:32:54.271371 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-cilium-run\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271448 kubelet[2642]: I0712 09:32:54.271385 2642 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0e7dd98e-da01-4066-9ad5-ee072528a8ed-cilium-config-path\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271448 kubelet[2642]: I0712 09:32:54.271400 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0e7dd98e-da01-4066-9ad5-ee072528a8ed-hubble-tls\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271448 kubelet[2642]: I0712 09:32:54.271413 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-bpf-maps\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271448 kubelet[2642]: I0712 09:32:54.271429 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-cni-path\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271594 kubelet[2642]: I0712 09:32:54.271445 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-etc-cni-netd\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271594 kubelet[2642]: I0712 09:32:54.271460 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fswsw\" (UniqueName: \"kubernetes.io/projected/0e7dd98e-da01-4066-9ad5-ee072528a8ed-kube-api-access-fswsw\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.271594 kubelet[2642]: I0712 09:32:54.271480 2642 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0e7dd98e-da01-4066-9ad5-ee072528a8ed-host-proc-sys-kernel\") pod \"cilium-l7t8s\" (UID: \"0e7dd98e-da01-4066-9ad5-ee072528a8ed\") " pod="kube-system/cilium-l7t8s" Jul 12 09:32:54.296305 sshd[4386]: Connection closed by 10.0.0.1 port 42712 Jul 12 09:32:54.296707 sshd-session[4383]: pam_unix(sshd:session): session closed for user core Jul 12 09:32:54.310024 systemd[1]: sshd@22-10.0.0.39:22-10.0.0.1:42712.service: Deactivated successfully. Jul 12 09:32:54.311729 systemd[1]: session-23.scope: Deactivated successfully. Jul 12 09:32:54.312566 systemd-logind[1480]: Session 23 logged out. Waiting for processes to exit. Jul 12 09:32:54.315321 systemd[1]: Started sshd@23-10.0.0.39:22-10.0.0.1:42724.service - OpenSSH per-connection server daemon (10.0.0.1:42724). Jul 12 09:32:54.316147 systemd-logind[1480]: Removed session 23. 
Jul 12 09:32:54.380076 sshd[4393]: Accepted publickey for core from 10.0.0.1 port 42724 ssh2: RSA SHA256:fhp558siaf39QLJw5fsAHbaRafIwNXdVZ+VoGPeGhpE Jul 12 09:32:54.380995 sshd-session[4393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 12 09:32:54.394683 systemd-logind[1480]: New session 24 of user core. Jul 12 09:32:54.406058 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 12 09:32:54.486672 containerd[1499]: time="2025-07-12T09:32:54.486631831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l7t8s,Uid:0e7dd98e-da01-4066-9ad5-ee072528a8ed,Namespace:kube-system,Attempt:0,}" Jul 12 09:32:54.501313 containerd[1499]: time="2025-07-12T09:32:54.500575540Z" level=info msg="connecting to shim ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1" address="unix:///run/containerd/s/76f792b4ba4c900d8cfccc51ba74f557f8f113e0afad896e3ebc32399b1b9128" namespace=k8s.io protocol=ttrpc version=3 Jul 12 09:32:54.531096 systemd[1]: Started cri-containerd-ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1.scope - libcontainer container ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1. Jul 12 09:32:54.554504 containerd[1499]: time="2025-07-12T09:32:54.554463760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l7t8s,Uid:0e7dd98e-da01-4066-9ad5-ee072528a8ed,Namespace:kube-system,Attempt:0,} returns sandbox id \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\"" Jul 12 09:32:54.559312 containerd[1499]: time="2025-07-12T09:32:54.559212942Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 12 09:32:54.564266 containerd[1499]: time="2025-07-12T09:32:54.564236121Z" level=info msg="Container b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:54.568881 containerd[1499]: time="2025-07-12T09:32:54.568837744Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c\"" Jul 12 09:32:54.569379 containerd[1499]: time="2025-07-12T09:32:54.569358258Z" level=info msg="StartContainer for \"b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c\"" Jul 12 09:32:54.570225 containerd[1499]: time="2025-07-12T09:32:54.570171728Z" level=info msg="connecting to shim b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c" address="unix:///run/containerd/s/76f792b4ba4c900d8cfccc51ba74f557f8f113e0afad896e3ebc32399b1b9128" protocol=ttrpc version=3 Jul 12 09:32:54.594079 systemd[1]: Started cri-containerd-b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c.scope - libcontainer container b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c. Jul 12 09:32:54.618678 containerd[1499]: time="2025-07-12T09:32:54.618628335Z" level=info msg="StartContainer for \"b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c\" returns successfully" Jul 12 09:32:54.655640 systemd[1]: cri-containerd-b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c.scope: Deactivated successfully. 
Jul 12 09:32:54.658416 containerd[1499]: time="2025-07-12T09:32:54.658384728Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c\" id:\"b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c\" pid:4463 exited_at:{seconds:1752312774 nanos:657988292}" Jul 12 09:32:54.658491 containerd[1499]: time="2025-07-12T09:32:54.658435287Z" level=info msg="received exit event container_id:\"b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c\" id:\"b4eed7f9bd5bb78798f3fd281f7a569aeb2db177b395224863db4941eb04e51c\" pid:4463 exited_at:{seconds:1752312774 nanos:657988292}" Jul 12 09:32:55.472490 containerd[1499]: time="2025-07-12T09:32:55.472043577Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 12 09:32:55.479935 containerd[1499]: time="2025-07-12T09:32:55.479635207Z" level=info msg="Container 41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:55.488544 containerd[1499]: time="2025-07-12T09:32:55.488483781Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5\"" Jul 12 09:32:55.489364 containerd[1499]: time="2025-07-12T09:32:55.489341931Z" level=info msg="StartContainer for \"41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5\"" Jul 12 09:32:55.490212 containerd[1499]: time="2025-07-12T09:32:55.490176161Z" level=info msg="connecting to shim 41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5" address="unix:///run/containerd/s/76f792b4ba4c900d8cfccc51ba74f557f8f113e0afad896e3ebc32399b1b9128" protocol=ttrpc version=3 Jul 12 09:32:55.509086 systemd[1]: Started cri-containerd-41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5.scope - libcontainer container 41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5. Jul 12 09:32:55.536938 containerd[1499]: time="2025-07-12T09:32:55.536865962Z" level=info msg="StartContainer for \"41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5\" returns successfully" Jul 12 09:32:55.553338 systemd[1]: cri-containerd-41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5.scope: Deactivated successfully. Jul 12 09:32:55.553891 containerd[1499]: time="2025-07-12T09:32:55.553830719Z" level=info msg="received exit event container_id:\"41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5\" id:\"41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5\" pid:4511 exited_at:{seconds:1752312775 nanos:553604362}" Jul 12 09:32:55.554726 containerd[1499]: time="2025-07-12T09:32:55.554697029Z" level=info msg="TaskExit event in podsandbox handler container_id:\"41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5\" id:\"41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5\" pid:4511 exited_at:{seconds:1752312775 nanos:553604362}" Jul 12 09:32:55.573181 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41881709f73cb82a39a0af112ad7d130dde039a4e283d928c48bf7e230f481c5-rootfs.mount: Deactivated successfully. 
Jul 12 09:32:56.482464 containerd[1499]: time="2025-07-12T09:32:56.482409067Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 12 09:32:56.492574 containerd[1499]: time="2025-07-12T09:32:56.492192112Z" level=info msg="Container bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:56.504346 containerd[1499]: time="2025-07-12T09:32:56.504283811Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60\"" Jul 12 09:32:56.505008 containerd[1499]: time="2025-07-12T09:32:56.504962083Z" level=info msg="StartContainer for \"bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60\"" Jul 12 09:32:56.506964 containerd[1499]: time="2025-07-12T09:32:56.506937100Z" level=info msg="connecting to shim bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60" address="unix:///run/containerd/s/76f792b4ba4c900d8cfccc51ba74f557f8f113e0afad896e3ebc32399b1b9128" protocol=ttrpc version=3 Jul 12 09:32:56.526072 systemd[1]: Started cri-containerd-bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60.scope - libcontainer container bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60. Jul 12 09:32:56.563948 containerd[1499]: time="2025-07-12T09:32:56.563897155Z" level=info msg="StartContainer for \"bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60\" returns successfully" Jul 12 09:32:56.564875 systemd[1]: cri-containerd-bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60.scope: Deactivated successfully. Jul 12 09:32:56.567279 containerd[1499]: time="2025-07-12T09:32:56.567251315Z" level=info msg="received exit event container_id:\"bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60\" id:\"bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60\" pid:4556 exited_at:{seconds:1752312776 nanos:567069678}" Jul 12 09:32:56.568230 containerd[1499]: time="2025-07-12T09:32:56.567320555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60\" id:\"bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60\" pid:4556 exited_at:{seconds:1752312776 nanos:567069678}" Jul 12 09:32:56.585657 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc009176dd6668d0f893e3614fea87d1caa9ab20531f813eac558678f5351e60-rootfs.mount: Deactivated successfully. 
Jul 12 09:32:57.482503 containerd[1499]: time="2025-07-12T09:32:57.482456393Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 12 09:32:57.488458 containerd[1499]: time="2025-07-12T09:32:57.488411165Z" level=info msg="Container 88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:57.495831 containerd[1499]: time="2025-07-12T09:32:57.495781841Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20\"" Jul 12 09:32:57.497156 containerd[1499]: time="2025-07-12T09:32:57.497126745Z" level=info msg="StartContainer for \"88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20\"" Jul 12 09:32:57.497971 containerd[1499]: time="2025-07-12T09:32:57.497898697Z" level=info msg="connecting to shim 88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20" address="unix:///run/containerd/s/76f792b4ba4c900d8cfccc51ba74f557f8f113e0afad896e3ebc32399b1b9128" protocol=ttrpc version=3 Jul 12 09:32:57.517079 systemd[1]: Started cri-containerd-88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20.scope - libcontainer container 88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20. Jul 12 09:32:57.539845 systemd[1]: cri-containerd-88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20.scope: Deactivated successfully. Jul 12 09:32:57.541187 containerd[1499]: time="2025-07-12T09:32:57.541149403Z" level=info msg="TaskExit event in podsandbox handler container_id:\"88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20\" id:\"88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20\" pid:4594 exited_at:{seconds:1752312777 nanos:540908686}" Jul 12 09:32:57.541298 containerd[1499]: time="2025-07-12T09:32:57.541158243Z" level=info msg="received exit event container_id:\"88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20\" id:\"88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20\" pid:4594 exited_at:{seconds:1752312777 nanos:540908686}" Jul 12 09:32:57.549271 containerd[1499]: time="2025-07-12T09:32:57.549235471Z" level=info msg="StartContainer for \"88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20\" returns successfully" Jul 12 09:32:57.561422 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-88ee14cc55565eea4f3cbed4fa9d3b5ed185fa3300c582ba3acab5076df83c20-rootfs.mount: Deactivated successfully. 
Jul 12 09:32:58.300466 kubelet[2642]: E0712 09:32:58.300199 2642 kubelet.go:3117] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 12 09:32:58.487759 containerd[1499]: time="2025-07-12T09:32:58.487655367Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 12 09:32:58.497949 containerd[1499]: time="2025-07-12T09:32:58.497595896Z" level=info msg="Container e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc: CDI devices from CRI Config.CDIDevices: []" Jul 12 09:32:58.500055 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1582496453.mount: Deactivated successfully. Jul 12 09:32:58.505572 containerd[1499]: time="2025-07-12T09:32:58.505531607Z" level=info msg="CreateContainer within sandbox \"ea3931a94b99d8035372d9b2ae359192ab8c13a0feccd220a6a1d3281815abf1\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc\"" Jul 12 09:32:58.507447 containerd[1499]: time="2025-07-12T09:32:58.507411826Z" level=info msg="StartContainer for \"e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc\"" Jul 12 09:32:58.508395 containerd[1499]: time="2025-07-12T09:32:58.508330496Z" level=info msg="connecting to shim e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc" address="unix:///run/containerd/s/76f792b4ba4c900d8cfccc51ba74f557f8f113e0afad896e3ebc32399b1b9128" protocol=ttrpc version=3 Jul 12 09:32:58.534077 systemd[1]: Started cri-containerd-e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc.scope - libcontainer container e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc. 
Jul 12 09:32:58.568398 containerd[1499]: time="2025-07-12T09:32:58.568294747Z" level=info msg="StartContainer for \"e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc\" returns successfully" Jul 12 09:32:58.624213 containerd[1499]: time="2025-07-12T09:32:58.624166564Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc\" id:\"4d8a24dd24dd12410efa0642ae5ea92e7547d08f48bf81350fe2e54cf6a3e539\" pid:4662 exited_at:{seconds:1752312778 nanos:623861208}" Jul 12 09:32:58.836960 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 12 09:32:59.596641 kubelet[2642]: I0712 09:32:59.596269 2642 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-l7t8s" podStartSLOduration=5.596248833 podStartE2EDuration="5.596248833s" podCreationTimestamp="2025-07-12 09:32:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-12 09:32:59.594724089 +0000 UTC m=+71.434329729" watchObservedRunningTime="2025-07-12 09:32:59.596248833 +0000 UTC m=+71.435854433" Jul 12 09:33:00.103943 kubelet[2642]: I0712 09:33:00.103536 2642 setters.go:618] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-12T09:33:00Z","lastTransitionTime":"2025-07-12T09:33:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 12 09:33:00.776050 containerd[1499]: time="2025-07-12T09:33:00.775861084Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc\" id:\"304202df4b28fdff7ddc62e4b4bc6f5e6d9b96014ca1f25f5d82cabc98189f30\" pid:4841 exit_status:1 exited_at:{seconds:1752312780 nanos:775405489}" Jul 12 09:33:01.761071 systemd-networkd[1423]: lxc_health: Link UP Jul 12 09:33:01.761291 systemd-networkd[1423]: lxc_health: Gained carrier Jul 12 09:33:02.821024 systemd-networkd[1423]: lxc_health: Gained IPv6LL Jul 12 09:33:02.904549 containerd[1499]: time="2025-07-12T09:33:02.904496381Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc\" id:\"8d8d0606116239581857492f24953e8bcbb87c9a5911d85104e29d9f189d3120\" pid:5197 exited_at:{seconds:1752312782 nanos:903859147}" Jul 12 09:33:05.015436 containerd[1499]: time="2025-07-12T09:33:05.015395143Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc\" id:\"88c22eb108ee07f755f9114876bf2a95815fb1708d494e8663f5b9be73bc08b1\" pid:5227 exited_at:{seconds:1752312785 nanos:14935507}" Jul 12 09:33:07.127943 containerd[1499]: time="2025-07-12T09:33:07.127886358Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e0eb8d3cd35b2fa9350078de8f5e9bc6a3a42ca94429038990c93acebfea72bc\" id:\"38d532491b2cd35354b06b4b225ed43e0a46f62ace8253a40ec722c93d2b1e29\" pid:5257 exited_at:{seconds:1752312787 nanos:127501201}" Jul 12 09:33:07.132002 sshd[4400]: Connection closed by 10.0.0.1 port 42724 Jul 12 09:33:07.132733 sshd-session[4393]: pam_unix(sshd:session): session closed for user core Jul 12 09:33:07.137070 systemd[1]: sshd@23-10.0.0.39:22-10.0.0.1:42724.service: Deactivated successfully. 
Jul 12 09:33:07.139104 systemd[1]: session-24.scope: Deactivated successfully. Jul 12 09:33:07.139935 systemd-logind[1480]: Session 24 logged out. Waiting for processes to exit. Jul 12 09:33:07.140952 systemd-logind[1480]: Removed session 24.