Jul 15 04:46:37.800038 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jul 15 04:46:37.800061 kernel: Linux version 6.12.36-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Tue Jul 15 03:28:41 -00 2025 Jul 15 04:46:37.800070 kernel: KASLR enabled Jul 15 04:46:37.800076 kernel: efi: EFI v2.7 by EDK II Jul 15 04:46:37.800082 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218 Jul 15 04:46:37.800088 kernel: random: crng init done Jul 15 04:46:37.800094 kernel: secureboot: Secure boot disabled Jul 15 04:46:37.800100 kernel: ACPI: Early table checksum verification disabled Jul 15 04:46:37.800106 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS ) Jul 15 04:46:37.800114 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013) Jul 15 04:46:37.800120 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:37.800126 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:37.800132 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:37.800138 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:37.800145 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:37.800153 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:37.800160 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:37.800166 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:37.800172 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jul 15 04:46:37.800178 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Jul 15 04:46:37.800184 kernel: ACPI: Use ACPI SPCR as default console: Yes Jul 15 04:46:37.800191 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Jul 15 04:46:37.800197 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff] Jul 15 04:46:37.800203 kernel: Zone ranges: Jul 15 04:46:37.800210 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Jul 15 04:46:37.800218 kernel: DMA32 empty Jul 15 04:46:37.800224 kernel: Normal empty Jul 15 04:46:37.800230 kernel: Device empty Jul 15 04:46:37.800236 kernel: Movable zone start for each node Jul 15 04:46:37.800242 kernel: Early memory node ranges Jul 15 04:46:37.800248 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff] Jul 15 04:46:37.800255 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff] Jul 15 04:46:37.800261 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff] Jul 15 04:46:37.800268 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff] Jul 15 04:46:37.800274 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff] Jul 15 04:46:37.800280 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff] Jul 15 04:46:37.800286 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff] Jul 15 04:46:37.800296 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff] Jul 15 04:46:37.800302 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff] Jul 15 04:46:37.800309 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff] Jul 15 04:46:37.800318 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff] Jul 15 04:46:37.800324 
kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff] Jul 15 04:46:37.800331 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff] Jul 15 04:46:37.800339 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Jul 15 04:46:37.800346 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Jul 15 04:46:37.800352 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1 Jul 15 04:46:37.800359 kernel: psci: probing for conduit method from ACPI. Jul 15 04:46:37.800365 kernel: psci: PSCIv1.1 detected in firmware. Jul 15 04:46:37.800372 kernel: psci: Using standard PSCI v0.2 function IDs Jul 15 04:46:37.800378 kernel: psci: Trusted OS migration not required Jul 15 04:46:37.800385 kernel: psci: SMC Calling Convention v1.1 Jul 15 04:46:37.800391 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jul 15 04:46:37.800398 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jul 15 04:46:37.800406 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jul 15 04:46:37.800413 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jul 15 04:46:37.800420 kernel: Detected PIPT I-cache on CPU0 Jul 15 04:46:37.800426 kernel: CPU features: detected: GIC system register CPU interface Jul 15 04:46:37.800433 kernel: CPU features: detected: Spectre-v4 Jul 15 04:46:37.800439 kernel: CPU features: detected: Spectre-BHB Jul 15 04:46:37.800446 kernel: CPU features: kernel page table isolation forced ON by KASLR Jul 15 04:46:37.800452 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jul 15 04:46:37.800459 kernel: CPU features: detected: ARM erratum 1418040 Jul 15 04:46:37.800466 kernel: CPU features: detected: SSBS not fully self-synchronizing Jul 15 04:46:37.800472 kernel: alternatives: applying boot alternatives Jul 15 04:46:37.800480 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd Jul 15 04:46:37.800488 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 15 04:46:37.800495 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 15 04:46:37.800501 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 15 04:46:37.800508 kernel: Fallback order for Node 0: 0 Jul 15 04:46:37.800514 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072 Jul 15 04:46:37.800521 kernel: Policy zone: DMA Jul 15 04:46:37.800527 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 15 04:46:37.800534 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB Jul 15 04:46:37.800541 kernel: software IO TLB: area num 4. Jul 15 04:46:37.800547 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB Jul 15 04:46:37.800554 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB) Jul 15 04:46:37.800562 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 15 04:46:37.800569 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 15 04:46:37.800576 kernel: rcu: RCU event tracing is enabled. Jul 15 04:46:37.800583 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. 
Jul 15 04:46:37.800589 kernel: Trampoline variant of Tasks RCU enabled. Jul 15 04:46:37.800596 kernel: Tracing variant of Tasks RCU enabled. Jul 15 04:46:37.800603 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 15 04:46:37.800609 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 15 04:46:37.800616 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 04:46:37.800623 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 15 04:46:37.800629 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jul 15 04:46:37.800637 kernel: GICv3: 256 SPIs implemented Jul 15 04:46:37.800644 kernel: GICv3: 0 Extended SPIs implemented Jul 15 04:46:37.800654 kernel: Root IRQ handler: gic_handle_irq Jul 15 04:46:37.800660 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jul 15 04:46:37.800667 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jul 15 04:46:37.800673 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jul 15 04:46:37.800680 kernel: ITS [mem 0x08080000-0x0809ffff] Jul 15 04:46:37.800687 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1) Jul 15 04:46:37.800694 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1) Jul 15 04:46:37.800700 kernel: GICv3: using LPI property table @0x0000000040130000 Jul 15 04:46:37.800707 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000 Jul 15 04:46:37.800714 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jul 15 04:46:37.800734 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 04:46:37.800742 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jul 15 04:46:37.800748 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jul 15 04:46:37.800755 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jul 15 04:46:37.800762 kernel: arm-pv: using stolen time PV Jul 15 04:46:37.800769 kernel: Console: colour dummy device 80x25 Jul 15 04:46:37.800776 kernel: ACPI: Core revision 20240827 Jul 15 04:46:37.800783 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jul 15 04:46:37.800790 kernel: pid_max: default: 32768 minimum: 301 Jul 15 04:46:37.800797 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jul 15 04:46:37.800805 kernel: landlock: Up and running. Jul 15 04:46:37.800812 kernel: SELinux: Initializing. Jul 15 04:46:37.800819 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 04:46:37.800826 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 15 04:46:37.800833 kernel: rcu: Hierarchical SRCU implementation. Jul 15 04:46:37.800840 kernel: rcu: Max phase no-delay instances is 400. Jul 15 04:46:37.800854 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jul 15 04:46:37.800861 kernel: Remapping and enabling EFI services. Jul 15 04:46:37.800868 kernel: smp: Bringing up secondary CPUs ... 
Jul 15 04:46:37.800881 kernel: Detected PIPT I-cache on CPU1 Jul 15 04:46:37.800889 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jul 15 04:46:37.800896 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000 Jul 15 04:46:37.800904 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 04:46:37.800912 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jul 15 04:46:37.800919 kernel: Detected PIPT I-cache on CPU2 Jul 15 04:46:37.800926 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jul 15 04:46:37.800934 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000 Jul 15 04:46:37.800943 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 04:46:37.800950 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jul 15 04:46:37.800957 kernel: Detected PIPT I-cache on CPU3 Jul 15 04:46:37.800964 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jul 15 04:46:37.800971 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000 Jul 15 04:46:37.800979 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jul 15 04:46:37.800986 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jul 15 04:46:37.800993 kernel: smp: Brought up 1 node, 4 CPUs Jul 15 04:46:37.801001 kernel: SMP: Total of 4 processors activated. Jul 15 04:46:37.801009 kernel: CPU: All CPU(s) started at EL1 Jul 15 04:46:37.801016 kernel: CPU features: detected: 32-bit EL0 Support Jul 15 04:46:37.801023 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jul 15 04:46:37.801031 kernel: CPU features: detected: Common not Private translations Jul 15 04:46:37.801038 kernel: CPU features: detected: CRC32 instructions Jul 15 04:46:37.801045 kernel: CPU features: detected: Enhanced Virtualization Traps Jul 15 04:46:37.801052 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jul 15 04:46:37.801059 kernel: CPU features: detected: LSE atomic instructions Jul 15 04:46:37.801066 kernel: CPU features: detected: Privileged Access Never Jul 15 04:46:37.801075 kernel: CPU features: detected: RAS Extension Support Jul 15 04:46:37.801082 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jul 15 04:46:37.801089 kernel: alternatives: applying system-wide alternatives Jul 15 04:46:37.801097 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jul 15 04:46:37.801104 kernel: Memory: 2424032K/2572288K available (11136K kernel code, 2436K rwdata, 9056K rodata, 39424K init, 1038K bss, 125920K reserved, 16384K cma-reserved) Jul 15 04:46:37.801112 kernel: devtmpfs: initialized Jul 15 04:46:37.801119 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 15 04:46:37.801126 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 15 04:46:37.801134 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jul 15 04:46:37.801142 kernel: 0 pages in range for non-PLT usage Jul 15 04:46:37.801149 kernel: 508448 pages in range for PLT usage Jul 15 04:46:37.801156 kernel: pinctrl core: initialized pinctrl subsystem Jul 15 04:46:37.801164 kernel: SMBIOS 3.0.0 present. 
Jul 15 04:46:37.801171 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022 Jul 15 04:46:37.801178 kernel: DMI: Memory slots populated: 1/1 Jul 15 04:46:37.801185 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 15 04:46:37.801192 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Jul 15 04:46:37.801200 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jul 15 04:46:37.801208 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jul 15 04:46:37.801215 kernel: audit: initializing netlink subsys (disabled) Jul 15 04:46:37.801223 kernel: audit: type=2000 audit(0.019:1): state=initialized audit_enabled=0 res=1 Jul 15 04:46:37.801230 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 15 04:46:37.801237 kernel: cpuidle: using governor menu Jul 15 04:46:37.801244 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jul 15 04:46:37.801252 kernel: ASID allocator initialised with 32768 entries Jul 15 04:46:37.801259 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 15 04:46:37.801266 kernel: Serial: AMBA PL011 UART driver Jul 15 04:46:37.801275 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 15 04:46:37.801282 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jul 15 04:46:37.801290 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jul 15 04:46:37.801297 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jul 15 04:46:37.801304 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 15 04:46:37.801312 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jul 15 04:46:37.801319 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jul 15 04:46:37.801326 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jul 15 04:46:37.801333 kernel: ACPI: Added _OSI(Module Device) Jul 15 04:46:37.801342 kernel: ACPI: Added _OSI(Processor Device) Jul 15 04:46:37.801349 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 15 04:46:37.801356 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 15 04:46:37.801363 kernel: ACPI: Interpreter enabled Jul 15 04:46:37.801371 kernel: ACPI: Using GIC for interrupt routing Jul 15 04:46:37.801378 kernel: ACPI: MCFG table detected, 1 entries Jul 15 04:46:37.801385 kernel: ACPI: CPU0 has been hot-added Jul 15 04:46:37.801392 kernel: ACPI: CPU1 has been hot-added Jul 15 04:46:37.801399 kernel: ACPI: CPU2 has been hot-added Jul 15 04:46:37.801406 kernel: ACPI: CPU3 has been hot-added Jul 15 04:46:37.801415 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jul 15 04:46:37.801425 kernel: printk: legacy console [ttyAMA0] enabled Jul 15 04:46:37.801434 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 15 04:46:37.801584 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 15 04:46:37.801655 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jul 15 04:46:37.801717 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jul 15 04:46:37.801794 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jul 15 04:46:37.801868 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jul 15 04:46:37.801878 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jul 15 04:46:37.801886 
kernel: PCI host bridge to bus 0000:00 Jul 15 04:46:37.801955 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jul 15 04:46:37.802013 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jul 15 04:46:37.802068 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jul 15 04:46:37.802123 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 15 04:46:37.802208 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jul 15 04:46:37.802283 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint Jul 15 04:46:37.802350 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f] Jul 15 04:46:37.802430 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff] Jul 15 04:46:37.802500 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jul 15 04:46:37.802567 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jul 15 04:46:37.802632 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned Jul 15 04:46:37.802713 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned Jul 15 04:46:37.802797 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jul 15 04:46:37.802867 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jul 15 04:46:37.802927 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jul 15 04:46:37.802937 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jul 15 04:46:37.802944 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jul 15 04:46:37.802952 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jul 15 04:46:37.802962 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jul 15 04:46:37.802969 kernel: iommu: Default domain type: Translated Jul 15 04:46:37.802976 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jul 15 04:46:37.802983 kernel: efivars: Registered efivars operations Jul 15 04:46:37.802990 kernel: vgaarb: loaded Jul 15 04:46:37.802998 kernel: clocksource: Switched to clocksource arch_sys_counter Jul 15 04:46:37.803005 kernel: VFS: Disk quotas dquot_6.6.0 Jul 15 04:46:37.803012 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 15 04:46:37.803019 kernel: pnp: PnP ACPI init Jul 15 04:46:37.803097 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jul 15 04:46:37.803108 kernel: pnp: PnP ACPI: found 1 devices Jul 15 04:46:37.803115 kernel: NET: Registered PF_INET protocol family Jul 15 04:46:37.803122 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 15 04:46:37.803129 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 15 04:46:37.803137 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 15 04:46:37.803144 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 15 04:46:37.803151 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 15 04:46:37.803160 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 15 04:46:37.803167 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 04:46:37.803174 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 15 04:46:37.803182 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 15 04:46:37.803188 kernel: PCI: CLS 0 bytes, default 64 Jul 15 04:46:37.803195 
kernel: kvm [1]: HYP mode not available Jul 15 04:46:37.803203 kernel: Initialise system trusted keyrings Jul 15 04:46:37.803210 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 15 04:46:37.803217 kernel: Key type asymmetric registered Jul 15 04:46:37.803225 kernel: Asymmetric key parser 'x509' registered Jul 15 04:46:37.803232 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 15 04:46:37.803240 kernel: io scheduler mq-deadline registered Jul 15 04:46:37.803247 kernel: io scheduler kyber registered Jul 15 04:46:37.803254 kernel: io scheduler bfq registered Jul 15 04:46:37.803261 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jul 15 04:46:37.803268 kernel: ACPI: button: Power Button [PWRB] Jul 15 04:46:37.803276 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jul 15 04:46:37.803339 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Jul 15 04:46:37.803350 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 15 04:46:37.803357 kernel: thunder_xcv, ver 1.0 Jul 15 04:46:37.803364 kernel: thunder_bgx, ver 1.0 Jul 15 04:46:37.803376 kernel: nicpf, ver 1.0 Jul 15 04:46:37.803383 kernel: nicvf, ver 1.0 Jul 15 04:46:37.803455 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jul 15 04:46:37.803514 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-15T04:46:37 UTC (1752554797) Jul 15 04:46:37.803524 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 15 04:46:37.803531 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jul 15 04:46:37.803541 kernel: watchdog: NMI not fully supported Jul 15 04:46:37.803548 kernel: watchdog: Hard watchdog permanently disabled Jul 15 04:46:37.803555 kernel: NET: Registered PF_INET6 protocol family Jul 15 04:46:37.803562 kernel: Segment Routing with IPv6 Jul 15 04:46:37.803569 kernel: In-situ OAM (IOAM) with IPv6 Jul 15 04:46:37.803576 kernel: NET: Registered PF_PACKET protocol family Jul 15 04:46:37.803583 kernel: Key type dns_resolver registered Jul 15 04:46:37.803591 kernel: registered taskstats version 1 Jul 15 04:46:37.803598 kernel: Loading compiled-in X.509 certificates Jul 15 04:46:37.803606 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.36-flatcar: b5c59c413839929aea5bd4b52ae6eaff0e245cd2' Jul 15 04:46:37.803613 kernel: Demotion targets for Node 0: null Jul 15 04:46:37.803620 kernel: Key type .fscrypt registered Jul 15 04:46:37.803627 kernel: Key type fscrypt-provisioning registered Jul 15 04:46:37.803634 kernel: ima: No TPM chip found, activating TPM-bypass! Jul 15 04:46:37.803645 kernel: ima: Allocated hash algorithm: sha1 Jul 15 04:46:37.803653 kernel: ima: No architecture policies found Jul 15 04:46:37.803661 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jul 15 04:46:37.803669 kernel: clk: Disabling unused clocks Jul 15 04:46:37.803677 kernel: PM: genpd: Disabling unused power domains Jul 15 04:46:37.803683 kernel: Warning: unable to open an initial console. Jul 15 04:46:37.803691 kernel: Freeing unused kernel memory: 39424K Jul 15 04:46:37.803698 kernel: Run /init as init process Jul 15 04:46:37.803705 kernel: with arguments: Jul 15 04:46:37.803712 kernel: /init Jul 15 04:46:37.803719 kernel: with environment: Jul 15 04:46:37.804830 kernel: HOME=/ Jul 15 04:46:37.804840 kernel: TERM=linux Jul 15 04:46:37.804863 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 15 04:46:37.804872 systemd[1]: Successfully made /usr/ read-only. 
Jul 15 04:46:37.804884 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:46:37.804893 systemd[1]: Detected virtualization kvm. Jul 15 04:46:37.804901 systemd[1]: Detected architecture arm64. Jul 15 04:46:37.804909 systemd[1]: Running in initrd. Jul 15 04:46:37.804916 systemd[1]: No hostname configured, using default hostname. Jul 15 04:46:37.804927 systemd[1]: Hostname set to . Jul 15 04:46:37.804934 systemd[1]: Initializing machine ID from VM UUID. Jul 15 04:46:37.804942 systemd[1]: Queued start job for default target initrd.target. Jul 15 04:46:37.804950 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:46:37.804958 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:46:37.804967 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jul 15 04:46:37.804975 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:46:37.804983 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jul 15 04:46:37.804993 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jul 15 04:46:37.805002 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jul 15 04:46:37.805010 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jul 15 04:46:37.805018 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:46:37.805026 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:46:37.805034 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:46:37.805042 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:46:37.805052 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:46:37.805060 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:46:37.805068 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 04:46:37.805076 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 04:46:37.805084 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 15 04:46:37.805092 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jul 15 04:46:37.805100 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:46:37.805108 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:46:37.805118 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:46:37.805126 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:46:37.805134 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jul 15 04:46:37.805142 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:46:37.805150 systemd[1]: Finished network-cleanup.service - Network Cleanup. 
Jul 15 04:46:37.805158 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jul 15 04:46:37.805166 systemd[1]: Starting systemd-fsck-usr.service... Jul 15 04:46:37.805174 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:46:37.805182 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 15 04:46:37.805192 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:46:37.805200 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jul 15 04:46:37.805208 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:46:37.805216 systemd[1]: Finished systemd-fsck-usr.service. Jul 15 04:46:37.805240 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 04:46:37.805283 systemd-journald[244]: Collecting audit messages is disabled. Jul 15 04:46:37.805304 systemd-journald[244]: Journal started Jul 15 04:46:37.805324 systemd-journald[244]: Runtime Journal (/run/log/journal/0167c0310b0a40ba8a419141cf978c00) is 6M, max 48.5M, 42.4M free. Jul 15 04:46:37.807398 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:46:37.798584 systemd-modules-load[245]: Inserted module 'overlay' Jul 15 04:46:37.808822 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:46:37.811816 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:37.816748 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jul 15 04:46:37.818013 systemd-modules-load[245]: Inserted module 'br_netfilter' Jul 15 04:46:37.818772 kernel: Bridge firewalling registered Jul 15 04:46:37.822933 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:46:37.824120 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 04:46:37.828667 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 15 04:46:37.830345 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:46:37.830486 systemd-tmpfiles[259]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jul 15 04:46:37.832110 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:46:37.835589 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:46:37.842985 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:46:37.845222 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:46:37.847843 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:46:37.849538 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:46:37.851237 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jul 15 04:46:37.869423 dracut-cmdline[290]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=71133d47dc7355ed63f3db64861b54679726ebf08c2975c3bf327e76b39a3acd Jul 15 04:46:37.889538 systemd-resolved[289]: Positive Trust Anchors: Jul 15 04:46:37.889556 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:46:37.889587 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:46:37.894805 systemd-resolved[289]: Defaulting to hostname 'linux'. Jul 15 04:46:37.895858 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:46:37.897478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:46:37.948758 kernel: SCSI subsystem initialized Jul 15 04:46:37.952736 kernel: Loading iSCSI transport class v2.0-870. Jul 15 04:46:37.960764 kernel: iscsi: registered transport (tcp) Jul 15 04:46:37.972749 kernel: iscsi: registered transport (qla4xxx) Jul 15 04:46:37.972777 kernel: QLogic iSCSI HBA Driver Jul 15 04:46:37.988532 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:46:38.013495 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:46:38.016361 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:46:38.060772 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 15 04:46:38.062737 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 15 04:46:38.126759 kernel: raid6: neonx8 gen() 14099 MB/s Jul 15 04:46:38.143750 kernel: raid6: neonx4 gen() 15818 MB/s Jul 15 04:46:38.160749 kernel: raid6: neonx2 gen() 13245 MB/s Jul 15 04:46:38.177745 kernel: raid6: neonx1 gen() 10466 MB/s Jul 15 04:46:38.194746 kernel: raid6: int64x8 gen() 6892 MB/s Jul 15 04:46:38.211756 kernel: raid6: int64x4 gen() 7354 MB/s Jul 15 04:46:38.228750 kernel: raid6: int64x2 gen() 6106 MB/s Jul 15 04:46:38.245742 kernel: raid6: int64x1 gen() 5058 MB/s Jul 15 04:46:38.245763 kernel: raid6: using algorithm neonx4 gen() 15818 MB/s Jul 15 04:46:38.262752 kernel: raid6: .... xor() 12342 MB/s, rmw enabled Jul 15 04:46:38.262767 kernel: raid6: using neon recovery algorithm Jul 15 04:46:38.267739 kernel: xor: measuring software checksum speed Jul 15 04:46:38.267754 kernel: 8regs : 21630 MB/sec Jul 15 04:46:38.268739 kernel: 32regs : 15285 MB/sec Jul 15 04:46:38.268750 kernel: arm64_neon : 28051 MB/sec Jul 15 04:46:38.268759 kernel: xor: using function: arm64_neon (28051 MB/sec) Jul 15 04:46:38.320743 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 15 04:46:38.326618 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. 
Jul 15 04:46:38.328841 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:46:38.362430 systemd-udevd[499]: Using default interface naming scheme 'v255'. Jul 15 04:46:38.366586 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:46:38.368344 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 15 04:46:38.401537 dracut-pre-trigger[507]: rd.md=0: removing MD RAID activation Jul 15 04:46:38.424905 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 04:46:38.426805 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:46:38.477360 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:46:38.480959 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 15 04:46:38.527987 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 15 04:46:38.528171 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 15 04:46:38.531361 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:46:38.532259 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:38.534536 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:46:38.539894 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 15 04:46:38.539919 kernel: GPT:9289727 != 19775487 Jul 15 04:46:38.539929 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 15 04:46:38.539938 kernel: GPT:9289727 != 19775487 Jul 15 04:46:38.539954 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 15 04:46:38.539963 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 04:46:38.536046 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 15 04:46:38.567726 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 15 04:46:38.568829 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:38.576468 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 15 04:46:38.589111 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 15 04:46:38.596148 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 04:46:38.601904 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 15 04:46:38.602822 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 15 04:46:38.604531 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 04:46:38.606581 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:46:38.608084 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:46:38.610171 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 15 04:46:38.611951 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 15 04:46:38.633585 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 15 04:46:38.635506 disk-uuid[592]: Primary Header is updated. Jul 15 04:46:38.635506 disk-uuid[592]: Secondary Entries is updated. 
Jul 15 04:46:38.635506 disk-uuid[592]: Secondary Header is updated. Jul 15 04:46:38.638750 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 04:46:39.650509 disk-uuid[599]: The operation has completed successfully. Jul 15 04:46:39.651510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 15 04:46:39.679001 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 15 04:46:39.679790 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 15 04:46:39.701867 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 15 04:46:39.730738 sh[612]: Success Jul 15 04:46:39.748916 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 15 04:46:39.748982 kernel: device-mapper: uevent: version 1.0.3 Jul 15 04:46:39.750753 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 15 04:46:39.760769 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 15 04:46:39.785555 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 15 04:46:39.788431 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 15 04:46:39.805078 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 15 04:46:39.812747 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 15 04:46:39.817650 kernel: BTRFS: device fsid a7b7592d-2d1d-4236-b04f-dc58147b4692 devid 1 transid 37 /dev/mapper/usr (253:0) scanned by mount (624) Jul 15 04:46:39.819222 kernel: BTRFS info (device dm-0): first mount of filesystem a7b7592d-2d1d-4236-b04f-dc58147b4692 Jul 15 04:46:39.819238 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:46:39.819769 kernel: BTRFS info (device dm-0): using free-space-tree Jul 15 04:46:39.823696 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 15 04:46:39.824951 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 15 04:46:39.825917 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 15 04:46:39.826745 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 15 04:46:39.829309 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 15 04:46:39.849900 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (653) Jul 15 04:46:39.852139 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:39.852173 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:46:39.852184 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 04:46:39.857743 kernel: BTRFS info (device vda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:39.858805 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 15 04:46:39.860868 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 15 04:46:39.937259 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 04:46:39.942024 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Jul 15 04:46:39.976824 systemd-networkd[797]: lo: Link UP Jul 15 04:46:39.976835 systemd-networkd[797]: lo: Gained carrier Jul 15 04:46:39.977601 systemd-networkd[797]: Enumeration completed Jul 15 04:46:39.977694 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:46:39.979005 systemd[1]: Reached target network.target - Network. Jul 15 04:46:39.979761 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:46:39.979765 systemd-networkd[797]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:46:39.980243 systemd-networkd[797]: eth0: Link UP Jul 15 04:46:39.980246 systemd-networkd[797]: eth0: Gained carrier Jul 15 04:46:39.980253 systemd-networkd[797]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:46:40.001781 systemd-networkd[797]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 04:46:40.017634 ignition[698]: Ignition 2.21.0 Jul 15 04:46:40.017647 ignition[698]: Stage: fetch-offline Jul 15 04:46:40.017681 ignition[698]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:40.017689 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:40.017892 ignition[698]: parsed url from cmdline: "" Jul 15 04:46:40.017896 ignition[698]: no config URL provided Jul 15 04:46:40.017901 ignition[698]: reading system config file "/usr/lib/ignition/user.ign" Jul 15 04:46:40.017908 ignition[698]: no config at "/usr/lib/ignition/user.ign" Jul 15 04:46:40.017934 ignition[698]: op(1): [started] loading QEMU firmware config module Jul 15 04:46:40.017939 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 15 04:46:40.029673 ignition[698]: op(1): [finished] loading QEMU firmware config module Jul 15 04:46:40.065795 ignition[698]: parsing config with SHA512: 212e966b63df4511ce674514c24961b7d377a9e77e1efadd977d6dd44f194a6504d7dc52a12c439c93a2e11b34827349c64aef05f61d9e13d443a902f8d637b4 Jul 15 04:46:40.069874 unknown[698]: fetched base config from "system" Jul 15 04:46:40.069885 unknown[698]: fetched user config from "qemu" Jul 15 04:46:40.070278 ignition[698]: fetch-offline: fetch-offline passed Jul 15 04:46:40.070333 ignition[698]: Ignition finished successfully Jul 15 04:46:40.072443 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 04:46:40.073670 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 15 04:46:40.074453 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jul 15 04:46:40.106639 ignition[813]: Ignition 2.21.0 Jul 15 04:46:40.106653 ignition[813]: Stage: kargs Jul 15 04:46:40.106880 ignition[813]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:40.106890 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:40.108589 ignition[813]: kargs: kargs passed Jul 15 04:46:40.108659 ignition[813]: Ignition finished successfully Jul 15 04:46:40.111790 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 15 04:46:40.113509 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jul 15 04:46:40.144049 ignition[821]: Ignition 2.21.0 Jul 15 04:46:40.144065 ignition[821]: Stage: disks Jul 15 04:46:40.144209 ignition[821]: no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:40.144218 ignition[821]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:40.146369 ignition[821]: disks: disks passed Jul 15 04:46:40.146427 ignition[821]: Ignition finished successfully Jul 15 04:46:40.148072 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 15 04:46:40.149002 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 15 04:46:40.150151 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 15 04:46:40.151640 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:46:40.153359 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:46:40.154599 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:46:40.156718 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 15 04:46:40.189635 systemd-fsck[831]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 15 04:46:40.196108 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 15 04:46:40.198101 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 15 04:46:40.268755 kernel: EXT4-fs (vda9): mounted filesystem 4818953b-9d82-47bd-ab58-d0aa5641a19a r/w with ordered data mode. Quota mode: none. Jul 15 04:46:40.268902 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 15 04:46:40.269956 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 15 04:46:40.274524 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 04:46:40.276502 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 15 04:46:40.277349 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 15 04:46:40.277392 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 15 04:46:40.277417 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 04:46:40.293415 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 15 04:46:40.295366 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 15 04:46:40.298786 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (840) Jul 15 04:46:40.301098 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:40.301143 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:46:40.301154 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 04:46:40.304297 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 04:46:40.336401 initrd-setup-root[864]: cut: /sysroot/etc/passwd: No such file or directory Jul 15 04:46:40.339437 initrd-setup-root[871]: cut: /sysroot/etc/group: No such file or directory Jul 15 04:46:40.343200 initrd-setup-root[878]: cut: /sysroot/etc/shadow: No such file or directory Jul 15 04:46:40.346425 initrd-setup-root[885]: cut: /sysroot/etc/gshadow: No such file or directory Jul 15 04:46:40.425250 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 15 04:46:40.427214 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Jul 15 04:46:40.428600 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 15 04:46:40.453774 kernel: BTRFS info (device vda6): last unmount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:40.466851 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 15 04:46:40.480391 ignition[954]: INFO : Ignition 2.21.0 Jul 15 04:46:40.480391 ignition[954]: INFO : Stage: mount Jul 15 04:46:40.480391 ignition[954]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:40.480391 ignition[954]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:40.483501 ignition[954]: INFO : mount: mount passed Jul 15 04:46:40.483501 ignition[954]: INFO : Ignition finished successfully Jul 15 04:46:40.483005 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 15 04:46:40.485194 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 15 04:46:40.814317 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 15 04:46:40.815865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 15 04:46:40.834267 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (966) Jul 15 04:46:40.834306 kernel: BTRFS info (device vda6): first mount of filesystem 1ba6da34-80a1-4a8c-bd4d-0f30640013e8 Jul 15 04:46:40.834318 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 15 04:46:40.834986 kernel: BTRFS info (device vda6): using free-space-tree Jul 15 04:46:40.838020 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 15 04:46:40.865447 ignition[983]: INFO : Ignition 2.21.0 Jul 15 04:46:40.865447 ignition[983]: INFO : Stage: files Jul 15 04:46:40.866749 ignition[983]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:40.866749 ignition[983]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:40.868222 ignition[983]: DEBUG : files: compiled without relabeling support, skipping Jul 15 04:46:40.869068 ignition[983]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 15 04:46:40.869068 ignition[983]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 15 04:46:40.870951 ignition[983]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 15 04:46:40.870951 ignition[983]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 15 04:46:40.872816 ignition[983]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 15 04:46:40.872816 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 15 04:46:40.872816 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 15 04:46:40.871074 unknown[983]: wrote ssh authorized keys file for user: core Jul 15 04:46:40.949509 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 15 04:46:41.162320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 15 04:46:41.162320 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 04:46:41.165180 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 15 04:46:41.384170 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 15 04:46:41.448858 systemd-networkd[797]: eth0: Gained IPv6LL Jul 15 04:46:41.660801 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 15 04:46:41.662066 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 15 04:46:41.662066 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 15 04:46:41.662066 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 15 04:46:41.662066 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 15 04:46:41.662066 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 04:46:41.662066 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 15 04:46:41.662066 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 04:46:41.662066 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 15 04:46:41.671974 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 04:46:41.671974 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 15 04:46:41.671974 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 04:46:41.671974 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 04:46:41.671974 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 04:46:41.671974 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 15 04:46:42.080853 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 15 04:46:42.475146 ignition[983]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 15 04:46:42.475146 ignition[983]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 15 04:46:42.477817 ignition[983]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 04:46:42.489523 ignition[983]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 15 04:46:42.489523 ignition[983]: INFO : 
files: op(c): [finished] processing unit "prepare-helm.service" Jul 15 04:46:42.489523 ignition[983]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 15 04:46:42.492767 ignition[983]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 04:46:42.492767 ignition[983]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 15 04:46:42.492767 ignition[983]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 15 04:46:42.492767 ignition[983]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 15 04:46:42.513371 ignition[983]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 04:46:42.517013 ignition[983]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 15 04:46:42.519323 ignition[983]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 15 04:46:42.519323 ignition[983]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 15 04:46:42.519323 ignition[983]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 15 04:46:42.519323 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 15 04:46:42.519323 ignition[983]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 15 04:46:42.519323 ignition[983]: INFO : files: files passed Jul 15 04:46:42.519323 ignition[983]: INFO : Ignition finished successfully Jul 15 04:46:42.520491 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 15 04:46:42.523490 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 15 04:46:42.526282 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 15 04:46:42.542183 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 15 04:46:42.542278 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 15 04:46:42.544704 initrd-setup-root-after-ignition[1012]: grep: /sysroot/oem/oem-release: No such file or directory Jul 15 04:46:42.546342 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:46:42.546342 initrd-setup-root-after-ignition[1014]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:46:42.548848 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 15 04:46:42.548195 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 04:46:42.549937 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 15 04:46:42.552281 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 15 04:46:42.590529 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 15 04:46:42.590644 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 15 04:46:42.591784 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. 
Jul 15 04:46:42.592553 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 15 04:46:42.594032 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 15 04:46:42.596098 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 15 04:46:42.631570 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 04:46:42.633986 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 15 04:46:42.653294 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 15 04:46:42.654194 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:46:42.655718 systemd[1]: Stopped target timers.target - Timer Units. Jul 15 04:46:42.657190 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 15 04:46:42.657295 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 15 04:46:42.659279 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 15 04:46:42.660776 systemd[1]: Stopped target basic.target - Basic System. Jul 15 04:46:42.662117 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 15 04:46:42.663386 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 15 04:46:42.664899 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 15 04:46:42.666573 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 15 04:46:42.668123 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 15 04:46:42.669607 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 15 04:46:42.671517 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 15 04:46:42.672999 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 15 04:46:42.674270 systemd[1]: Stopped target swap.target - Swaps. Jul 15 04:46:42.675345 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 15 04:46:42.675450 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 15 04:46:42.677264 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:46:42.678793 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:46:42.680561 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 15 04:46:42.683945 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:46:42.684916 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 15 04:46:42.685022 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 15 04:46:42.687124 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 15 04:46:42.687237 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 15 04:46:42.688687 systemd[1]: Stopped target paths.target - Path Units. Jul 15 04:46:42.690988 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 15 04:46:42.694794 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:46:42.695707 systemd[1]: Stopped target slices.target - Slice Units. Jul 15 04:46:42.697394 systemd[1]: Stopped target sockets.target - Socket Units. Jul 15 04:46:42.698545 systemd[1]: iscsid.socket: Deactivated successfully. 
Jul 15 04:46:42.698636 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 15 04:46:42.699814 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 15 04:46:42.699894 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 15 04:46:42.701056 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 15 04:46:42.701168 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 15 04:46:42.702458 systemd[1]: ignition-files.service: Deactivated successfully. Jul 15 04:46:42.702555 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 15 04:46:42.704403 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 15 04:46:42.706408 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 15 04:46:42.707144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 15 04:46:42.707267 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:46:42.708595 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 15 04:46:42.708679 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 15 04:46:42.713273 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 15 04:46:42.718981 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 15 04:46:42.727077 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 15 04:46:42.738434 ignition[1038]: INFO : Ignition 2.21.0 Jul 15 04:46:42.738434 ignition[1038]: INFO : Stage: umount Jul 15 04:46:42.738434 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 15 04:46:42.738434 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 15 04:46:42.742232 ignition[1038]: INFO : umount: umount passed Jul 15 04:46:42.742232 ignition[1038]: INFO : Ignition finished successfully Jul 15 04:46:42.742992 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 15 04:46:42.743937 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 15 04:46:42.745225 systemd[1]: Stopped target network.target - Network. Jul 15 04:46:42.746805 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 15 04:46:42.746870 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 15 04:46:42.747666 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 15 04:46:42.747702 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 15 04:46:42.748521 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 15 04:46:42.748557 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 15 04:46:42.749777 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 15 04:46:42.749810 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 15 04:46:42.751241 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 15 04:46:42.752584 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 15 04:46:42.761305 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 15 04:46:42.761434 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 15 04:46:42.764435 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 15 04:46:42.764621 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 15 04:46:42.764702 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Jul 15 04:46:42.768247 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 15 04:46:42.768826 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 15 04:46:42.770835 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 15 04:46:42.770875 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:46:42.772865 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 15 04:46:42.774071 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 15 04:46:42.774125 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 15 04:46:42.775578 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 04:46:42.775618 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:46:42.777857 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 15 04:46:42.777897 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 15 04:46:42.779355 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 15 04:46:42.779399 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:46:42.781626 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:46:42.787577 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 04:46:42.787659 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:46:42.800410 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 15 04:46:42.801852 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 15 04:46:42.802962 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 15 04:46:42.803003 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 15 04:46:42.807390 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 15 04:46:42.807526 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:46:42.808684 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 15 04:46:42.808783 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 15 04:46:42.810178 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 15 04:46:42.810242 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 15 04:46:42.813952 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 15 04:46:42.813983 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:46:42.815192 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 15 04:46:42.815235 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 15 04:46:42.816666 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 15 04:46:42.816709 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 15 04:46:42.817554 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 15 04:46:42.817600 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 15 04:46:42.819420 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 15 04:46:42.820337 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jul 15 04:46:42.820392 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:46:42.823291 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 15 04:46:42.823335 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:46:42.828226 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 15 04:46:42.828270 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:46:42.830444 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 15 04:46:42.830488 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:46:42.831947 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 15 04:46:42.831987 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:42.835914 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 15 04:46:42.835965 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 15 04:46:42.835993 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 15 04:46:42.836024 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 15 04:46:42.846981 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 15 04:46:42.847869 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 15 04:46:42.849407 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 15 04:46:42.851473 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 15 04:46:42.865435 systemd[1]: Switching root. Jul 15 04:46:42.898846 systemd-journald[244]: Journal stopped Jul 15 04:46:43.655458 systemd-journald[244]: Received SIGTERM from PID 1 (systemd). Jul 15 04:46:43.655509 kernel: SELinux: policy capability network_peer_controls=1 Jul 15 04:46:43.655520 kernel: SELinux: policy capability open_perms=1 Jul 15 04:46:43.655529 kernel: SELinux: policy capability extended_socket_class=1 Jul 15 04:46:43.655539 kernel: SELinux: policy capability always_check_network=0 Jul 15 04:46:43.655551 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 15 04:46:43.655563 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 15 04:46:43.655572 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 15 04:46:43.655581 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 15 04:46:43.655591 kernel: SELinux: policy capability userspace_initial_context=0 Jul 15 04:46:43.655606 kernel: audit: type=1403 audit(1752554803.096:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 15 04:46:43.655620 systemd[1]: Successfully loaded SELinux policy in 64.182ms. Jul 15 04:46:43.655639 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.146ms. Jul 15 04:46:43.655650 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 15 04:46:43.655661 systemd[1]: Detected virtualization kvm. 
Jul 15 04:46:43.655670 systemd[1]: Detected architecture arm64. Jul 15 04:46:43.655680 systemd[1]: Detected first boot. Jul 15 04:46:43.655690 systemd[1]: Initializing machine ID from VM UUID. Jul 15 04:46:43.655700 kernel: NET: Registered PF_VSOCK protocol family Jul 15 04:46:43.655710 zram_generator::config[1083]: No configuration found. Jul 15 04:46:43.655752 systemd[1]: Populated /etc with preset unit settings. Jul 15 04:46:43.655766 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 15 04:46:43.655776 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 15 04:46:43.655786 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 15 04:46:43.655796 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 15 04:46:43.655807 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 15 04:46:43.655817 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 15 04:46:43.655836 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 15 04:46:43.655847 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 15 04:46:43.655857 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 15 04:46:43.655867 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 15 04:46:43.655878 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 15 04:46:43.655888 systemd[1]: Created slice user.slice - User and Session Slice. Jul 15 04:46:43.655903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 15 04:46:43.655913 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 15 04:46:43.655924 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 15 04:46:43.655935 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 15 04:46:43.655946 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 15 04:46:43.655960 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 15 04:46:43.655972 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 15 04:46:43.655986 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 15 04:46:43.655996 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 15 04:46:43.656006 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 15 04:46:43.656016 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 15 04:46:43.656027 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 15 04:46:43.656038 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 15 04:46:43.656048 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 15 04:46:43.656058 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 15 04:46:43.656068 systemd[1]: Reached target slices.target - Slice Units. Jul 15 04:46:43.656078 systemd[1]: Reached target swap.target - Swaps. Jul 15 04:46:43.656088 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
Jul 15 04:46:43.656097 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 15 04:46:43.656108 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 15 04:46:43.656119 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 15 04:46:43.656130 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 15 04:46:43.656140 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 15 04:46:43.656150 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 15 04:46:43.656160 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 15 04:46:43.656170 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 15 04:46:43.656180 systemd[1]: Mounting media.mount - External Media Directory... Jul 15 04:46:43.656190 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 15 04:46:43.656200 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 15 04:46:43.656212 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 15 04:46:43.656223 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 15 04:46:43.656233 systemd[1]: Reached target machines.target - Containers. Jul 15 04:46:43.656244 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 15 04:46:43.656254 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:46:43.656265 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 15 04:46:43.656275 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 15 04:46:43.656285 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:46:43.656297 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:46:43.656307 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:46:43.656317 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 15 04:46:43.656328 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:46:43.656339 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 15 04:46:43.656349 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 15 04:46:43.656359 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 15 04:46:43.656370 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 15 04:46:43.656380 systemd[1]: Stopped systemd-fsck-usr.service. Jul 15 04:46:43.656392 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:46:43.656403 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 15 04:46:43.656413 kernel: fuse: init (API version 7.41) Jul 15 04:46:43.656423 kernel: loop: module loaded Jul 15 04:46:43.656433 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 15 04:46:43.656443 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 15 04:46:43.656454 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 15 04:46:43.656464 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 15 04:46:43.656476 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 15 04:46:43.656486 kernel: ACPI: bus type drm_connector registered Jul 15 04:46:43.656496 systemd[1]: verity-setup.service: Deactivated successfully. Jul 15 04:46:43.656506 systemd[1]: Stopped verity-setup.service. Jul 15 04:46:43.656529 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 15 04:46:43.656541 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 15 04:46:43.656552 systemd[1]: Mounted media.mount - External Media Directory. Jul 15 04:46:43.656564 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 15 04:46:43.656595 systemd-journald[1152]: Collecting audit messages is disabled. Jul 15 04:46:43.656616 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 15 04:46:43.656628 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 15 04:46:43.656639 systemd-journald[1152]: Journal started Jul 15 04:46:43.656660 systemd-journald[1152]: Runtime Journal (/run/log/journal/0167c0310b0a40ba8a419141cf978c00) is 6M, max 48.5M, 42.4M free. Jul 15 04:46:43.459892 systemd[1]: Queued start job for default target multi-user.target. Jul 15 04:46:43.482683 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 15 04:46:43.483047 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 15 04:46:43.659318 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 15 04:46:43.660928 systemd[1]: Started systemd-journald.service - Journal Service. Jul 15 04:46:43.661758 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 15 04:46:43.662885 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 15 04:46:43.663032 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 15 04:46:43.664075 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:46:43.664215 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:46:43.665252 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:46:43.665402 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:46:43.666407 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:46:43.666551 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:46:43.667655 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 15 04:46:43.667979 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 15 04:46:43.670097 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:46:43.670297 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:46:43.671425 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 15 04:46:43.672538 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 15 04:46:43.673742 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jul 15 04:46:43.674981 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 15 04:46:43.686136 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 15 04:46:43.688079 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 15 04:46:43.689857 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 15 04:46:43.690675 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 15 04:46:43.690702 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 15 04:46:43.692259 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 15 04:46:43.701886 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 15 04:46:43.702757 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:46:43.703783 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 15 04:46:43.705407 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 15 04:46:43.706717 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:46:43.707624 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 15 04:46:43.708514 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 04:46:43.710057 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:46:43.711854 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 15 04:46:43.714436 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 15 04:46:43.718481 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 15 04:46:43.722855 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 15 04:46:43.725280 systemd-journald[1152]: Time spent on flushing to /var/log/journal/0167c0310b0a40ba8a419141cf978c00 is 12.307ms for 894 entries. Jul 15 04:46:43.725280 systemd-journald[1152]: System Journal (/var/log/journal/0167c0310b0a40ba8a419141cf978c00) is 8M, max 195.6M, 187.6M free. Jul 15 04:46:43.745081 systemd-journald[1152]: Received client request to flush runtime journal. Jul 15 04:46:43.745128 kernel: loop0: detected capacity change from 0 to 134232 Jul 15 04:46:43.724812 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 15 04:46:43.734100 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 15 04:46:43.736182 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:46:43.738110 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 15 04:46:43.741883 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 15 04:46:43.746834 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 15 04:46:43.748964 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. Jul 15 04:46:43.748979 systemd-tmpfiles[1201]: ACLs are not supported, ignoring. 
Jul 15 04:46:43.753278 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 15 04:46:43.755845 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 15 04:46:43.765752 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 15 04:46:43.770244 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 15 04:46:43.778743 kernel: loop1: detected capacity change from 0 to 203944 Jul 15 04:46:43.793304 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 15 04:46:43.796813 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 15 04:46:43.811766 kernel: loop2: detected capacity change from 0 to 105936 Jul 15 04:46:43.814030 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jul 15 04:46:43.814046 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jul 15 04:46:43.816458 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 15 04:46:43.848807 kernel: loop3: detected capacity change from 0 to 134232 Jul 15 04:46:43.856759 kernel: loop4: detected capacity change from 0 to 203944 Jul 15 04:46:43.861744 kernel: loop5: detected capacity change from 0 to 105936 Jul 15 04:46:43.865510 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 15 04:46:43.865889 (sd-merge)[1225]: Merged extensions into '/usr'. Jul 15 04:46:43.869952 systemd[1]: Reload requested from client PID 1200 ('systemd-sysext') (unit systemd-sysext.service)... Jul 15 04:46:43.870075 systemd[1]: Reloading... Jul 15 04:46:43.929764 zram_generator::config[1247]: No configuration found. Jul 15 04:46:43.990406 ldconfig[1195]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 15 04:46:44.006003 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:46:44.068153 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 15 04:46:44.068672 systemd[1]: Reloading finished in 198 ms. Jul 15 04:46:44.106540 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 15 04:46:44.109761 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 15 04:46:44.127929 systemd[1]: Starting ensure-sysext.service... Jul 15 04:46:44.129756 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 15 04:46:44.142464 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 15 04:46:44.142685 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 15 04:46:44.143026 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 15 04:46:44.143220 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 15 04:46:44.143877 systemd-tmpfiles[1286]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 15 04:46:44.144089 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. Jul 15 04:46:44.144135 systemd-tmpfiles[1286]: ACLs are not supported, ignoring. 
Jul 15 04:46:44.146336 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:46:44.146348 systemd-tmpfiles[1286]: Skipping /boot Jul 15 04:46:44.150035 systemd[1]: Reload requested from client PID 1285 ('systemctl') (unit ensure-sysext.service)... Jul 15 04:46:44.150050 systemd[1]: Reloading... Jul 15 04:46:44.152214 systemd-tmpfiles[1286]: Detected autofs mount point /boot during canonicalization of boot. Jul 15 04:46:44.152220 systemd-tmpfiles[1286]: Skipping /boot Jul 15 04:46:44.192860 zram_generator::config[1313]: No configuration found. Jul 15 04:46:44.260265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:46:44.322314 systemd[1]: Reloading finished in 171 ms. Jul 15 04:46:44.340522 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 15 04:46:44.361119 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 15 04:46:44.368358 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:46:44.370702 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 15 04:46:44.389180 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 15 04:46:44.392453 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 15 04:46:44.395965 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 15 04:46:44.399039 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 15 04:46:44.402271 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:46:44.404690 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:46:44.409308 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:46:44.412272 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:46:44.414707 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:46:44.414857 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:46:44.424161 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 15 04:46:44.426334 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:46:44.426534 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:46:44.429660 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:46:44.429909 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:46:44.431272 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:46:44.432758 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:46:44.437908 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:46:44.438691 systemd-udevd[1354]: Using default interface naming scheme 'v255'. 
Jul 15 04:46:44.442934 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:46:44.452556 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:46:44.455984 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:46:44.456885 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:46:44.457008 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:46:44.458579 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 15 04:46:44.464844 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 15 04:46:44.466566 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 15 04:46:44.469841 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 15 04:46:44.471318 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 15 04:46:44.473324 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:46:44.473635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:46:44.475362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:46:44.476769 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:46:44.477795 augenrules[1409]: No rules Jul 15 04:46:44.478134 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:46:44.478269 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:46:44.480399 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:46:44.480634 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:46:44.483119 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 15 04:46:44.505993 systemd[1]: Finished ensure-sysext.service. Jul 15 04:46:44.514170 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:46:44.515067 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 15 04:46:44.516099 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 15 04:46:44.517674 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 15 04:46:44.519390 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 15 04:46:44.527463 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 15 04:46:44.528646 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 15 04:46:44.528689 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 15 04:46:44.530929 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 15 04:46:44.534900 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jul 15 04:46:44.535751 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 15 04:46:44.536237 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 15 04:46:44.536385 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 15 04:46:44.542711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 15 04:46:44.543272 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 15 04:46:44.545063 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 15 04:46:44.545219 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 15 04:46:44.546559 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 15 04:46:44.558938 augenrules[1426]: /sbin/augenrules: No change Jul 15 04:46:44.559071 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 15 04:46:44.559574 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 15 04:46:44.561161 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 15 04:46:44.568113 augenrules[1454]: No rules Jul 15 04:46:44.569439 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:46:44.569965 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:46:44.572057 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 15 04:46:44.582864 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jul 15 04:46:44.643078 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 15 04:46:44.646892 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 15 04:46:44.664550 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 15 04:46:44.665602 systemd[1]: Reached target time-set.target - System Time Set. Jul 15 04:46:44.673456 systemd-resolved[1352]: Positive Trust Anchors: Jul 15 04:46:44.673474 systemd-resolved[1352]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 15 04:46:44.673504 systemd-resolved[1352]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 15 04:46:44.684205 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 15 04:46:44.684524 systemd-resolved[1352]: Defaulting to hostname 'linux'. Jul 15 04:46:44.686531 systemd-networkd[1431]: lo: Link UP Jul 15 04:46:44.686537 systemd-networkd[1431]: lo: Gained carrier Jul 15 04:46:44.690169 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 15 04:46:44.691128 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jul 15 04:46:44.691566 systemd-networkd[1431]: Enumeration completed Jul 15 04:46:44.692003 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:46:44.692011 systemd-networkd[1431]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 15 04:46:44.692023 systemd[1]: Reached target sysinit.target - System Initialization. Jul 15 04:46:44.692506 systemd-networkd[1431]: eth0: Link UP Jul 15 04:46:44.692610 systemd-networkd[1431]: eth0: Gained carrier Jul 15 04:46:44.692625 systemd-networkd[1431]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 15 04:46:44.692922 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 15 04:46:44.693806 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 15 04:46:44.694918 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 15 04:46:44.695816 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 15 04:46:44.696692 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 15 04:46:44.697775 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 15 04:46:44.697807 systemd[1]: Reached target paths.target - Path Units. Jul 15 04:46:44.698442 systemd[1]: Reached target timers.target - Timer Units. Jul 15 04:46:44.699760 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 15 04:46:44.701807 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 15 04:46:44.704318 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 15 04:46:44.706004 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 15 04:46:44.706907 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 15 04:46:44.713805 systemd-networkd[1431]: eth0: DHCPv4 address 10.0.0.81/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 15 04:46:44.714284 systemd-timesyncd[1432]: Network configuration changed, trying to establish connection. Jul 15 04:46:44.715160 systemd-timesyncd[1432]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 15 04:46:44.715209 systemd-timesyncd[1432]: Initial clock synchronization to Tue 2025-07-15 04:46:44.590773 UTC. Jul 15 04:46:44.715952 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 15 04:46:44.717223 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 15 04:46:44.718963 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 15 04:46:44.720307 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 15 04:46:44.721666 systemd[1]: Reached target network.target - Network. Jul 15 04:46:44.722423 systemd[1]: Reached target sockets.target - Socket Units. Jul 15 04:46:44.723261 systemd[1]: Reached target basic.target - Basic System. Jul 15 04:46:44.724206 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 15 04:46:44.724237 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. 
Jul 15 04:46:44.725577 systemd[1]: Starting containerd.service - containerd container runtime... Jul 15 04:46:44.727571 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 15 04:46:44.729333 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 15 04:46:44.731425 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 15 04:46:44.736242 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 15 04:46:44.737096 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 15 04:46:44.738966 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 15 04:46:44.742395 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 15 04:46:44.744937 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 15 04:46:44.747974 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 15 04:46:44.752109 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 15 04:46:44.755936 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 15 04:46:44.757914 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 15 04:46:44.760217 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 15 04:46:44.760611 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 15 04:46:44.764014 jq[1498]: false Jul 15 04:46:44.771056 systemd[1]: Starting update-engine.service - Update Engine... Jul 15 04:46:44.772954 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 15 04:46:44.776749 extend-filesystems[1499]: Found /dev/vda6 Jul 15 04:46:44.778758 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jul 15 04:46:44.780023 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 15 04:46:44.780200 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 15 04:46:44.780444 systemd[1]: motdgen.service: Deactivated successfully. Jul 15 04:46:44.780600 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 15 04:46:44.784062 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 15 04:46:44.784248 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 15 04:46:44.786908 extend-filesystems[1499]: Found /dev/vda9 Jul 15 04:46:44.789975 extend-filesystems[1499]: Checking size of /dev/vda9 Jul 15 04:46:44.794393 jq[1516]: true Jul 15 04:46:44.795980 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 15 04:46:44.808989 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jul 15 04:46:44.814887 extend-filesystems[1499]: Resized partition /dev/vda9 Jul 15 04:46:44.821570 jq[1534]: true Jul 15 04:46:44.826005 extend-filesystems[1540]: resize2fs 1.47.2 (1-Jan-2025) Jul 15 04:46:44.828413 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 15 04:46:44.835534 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 15 04:46:44.850246 tar[1521]: linux-arm64/helm Jul 15 04:46:44.851357 update_engine[1512]: I20250715 04:46:44.850404 1512 main.cc:92] Flatcar Update Engine starting Jul 15 04:46:44.864958 dbus-daemon[1495]: [system] SELinux support is enabled Jul 15 04:46:44.865137 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 15 04:46:44.869030 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 15 04:46:44.869064 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 15 04:46:44.871223 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 15 04:46:44.874180 update_engine[1512]: I20250715 04:46:44.872407 1512 update_check_scheduler.cc:74] Next update check in 2m8s Jul 15 04:46:44.871249 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 15 04:46:44.873466 systemd[1]: Started update-engine.service - Update Engine. Jul 15 04:46:44.878819 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 15 04:46:44.903691 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 15 04:46:44.921669 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (Power Button) Jul 15 04:46:44.922770 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 15 04:46:44.926366 systemd-logind[1507]: New seat seat0. Jul 15 04:46:44.927586 extend-filesystems[1540]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 15 04:46:44.927586 extend-filesystems[1540]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 15 04:46:44.927586 extend-filesystems[1540]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 15 04:46:44.938045 extend-filesystems[1499]: Resized filesystem in /dev/vda9 Jul 15 04:46:44.939866 bash[1563]: Updated "/home/core/.ssh/authorized_keys" Jul 15 04:46:44.929714 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 15 04:46:44.930568 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 15 04:46:44.932747 systemd[1]: Started systemd-logind.service - User Login Management. Jul 15 04:46:44.939145 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 15 04:46:44.942200 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 15 04:46:44.980237 locksmithd[1552]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 15 04:46:45.085941 containerd[1523]: time="2025-07-15T04:46:45Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 15 04:46:45.086616 containerd[1523]: time="2025-07-15T04:46:45.086576240Z" level=info msg="starting containerd" revision=fb4c30d4ede3531652d86197bf3fc9515e5276d9 version=v2.0.5 Jul 15 04:46:45.096921 containerd[1523]: time="2025-07-15T04:46:45.095503293Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.049µs" Jul 15 04:46:45.096921 containerd[1523]: time="2025-07-15T04:46:45.096917261Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 15 04:46:45.097010 containerd[1523]: time="2025-07-15T04:46:45.096959530Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 15 04:46:45.097135 containerd[1523]: time="2025-07-15T04:46:45.097114833Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 15 04:46:45.097160 containerd[1523]: time="2025-07-15T04:46:45.097136424Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 15 04:46:45.097194 containerd[1523]: time="2025-07-15T04:46:45.097162024Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097229 containerd[1523]: time="2025-07-15T04:46:45.097210365Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097229 containerd[1523]: time="2025-07-15T04:46:45.097224574Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097470 containerd[1523]: time="2025-07-15T04:46:45.097446079Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097470 containerd[1523]: time="2025-07-15T04:46:45.097465845Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097518 containerd[1523]: time="2025-07-15T04:46:45.097477235Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097518 containerd[1523]: time="2025-07-15T04:46:45.097485173Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097560 containerd[1523]: time="2025-07-15T04:46:45.097551256Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097771 containerd[1523]: time="2025-07-15T04:46:45.097747916Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097806 containerd[1523]: time="2025-07-15T04:46:45.097781810Z" level=info msg="skip loading plugin" error="lstat 
/var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 15 04:46:45.097806 containerd[1523]: time="2025-07-15T04:46:45.097792328Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 15 04:46:45.097849 containerd[1523]: time="2025-07-15T04:46:45.097810188Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 15 04:46:45.098004 containerd[1523]: time="2025-07-15T04:46:45.097985217Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 15 04:46:45.098072 containerd[1523]: time="2025-07-15T04:46:45.098048244Z" level=info msg="metadata content store policy set" policy=shared Jul 15 04:46:45.101059 containerd[1523]: time="2025-07-15T04:46:45.101018425Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 15 04:46:45.101116 containerd[1523]: time="2025-07-15T04:46:45.101080777Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 15 04:46:45.101116 containerd[1523]: time="2025-07-15T04:46:45.101096176Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 15 04:46:45.101116 containerd[1523]: time="2025-07-15T04:46:45.101107329Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 15 04:46:45.101163 containerd[1523]: time="2025-07-15T04:46:45.101118005Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 15 04:46:45.101163 containerd[1523]: time="2025-07-15T04:46:45.101128880Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 15 04:46:45.101163 containerd[1523]: time="2025-07-15T04:46:45.101142017Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 15 04:46:45.101163 containerd[1523]: time="2025-07-15T04:46:45.101153447Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 15 04:46:45.101245 containerd[1523]: time="2025-07-15T04:46:45.101165315Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 15 04:46:45.101245 containerd[1523]: time="2025-07-15T04:46:45.101175753Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 15 04:46:45.101245 containerd[1523]: time="2025-07-15T04:46:45.101184881Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 15 04:46:45.101245 containerd[1523]: time="2025-07-15T04:46:45.101197185Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 15 04:46:45.101309 containerd[1523]: time="2025-07-15T04:46:45.101294662Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 15 04:46:45.101327 containerd[1523]: time="2025-07-15T04:46:45.101315419Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 15 04:46:45.101343 containerd[1523]: time="2025-07-15T04:46:45.101332644Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 15 
04:46:45.101360 containerd[1523]: time="2025-07-15T04:46:45.101347131Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jul 15 04:46:45.101360 containerd[1523]: time="2025-07-15T04:46:45.101357529Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 15 04:46:45.101391 containerd[1523]: time="2025-07-15T04:46:45.101367650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 15 04:46:45.101391 containerd[1523]: time="2025-07-15T04:46:45.101378287Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 15 04:46:45.101391 containerd[1523]: time="2025-07-15T04:46:45.101387614Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 15 04:46:45.101443 containerd[1523]: time="2025-07-15T04:46:45.101398528Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 15 04:46:45.101443 containerd[1523]: time="2025-07-15T04:46:45.101408967Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 15 04:46:45.101443 containerd[1523]: time="2025-07-15T04:46:45.101418492Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 15 04:46:45.101960 containerd[1523]: time="2025-07-15T04:46:45.101603205Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 15 04:46:45.101960 containerd[1523]: time="2025-07-15T04:46:45.101623447Z" level=info msg="Start snapshots syncer" Jul 15 04:46:45.101960 containerd[1523]: time="2025-07-15T04:46:45.101649920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 15 04:46:45.102039 containerd[1523]: time="2025-07-15T04:46:45.101847691Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 15 04:46:45.102039 containerd[1523]: time="2025-07-15T04:46:45.101891587Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 15 04:46:45.102578 containerd[1523]: time="2025-07-15T04:46:45.102530147Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 15 04:46:45.102683 containerd[1523]: time="2025-07-15T04:46:45.102663264Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 15 04:46:45.102713 containerd[1523]: time="2025-07-15T04:46:45.102693468Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 15 04:46:45.102713 containerd[1523]: time="2025-07-15T04:46:45.102706605Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 15 04:46:45.102759 containerd[1523]: time="2025-07-15T04:46:45.102734070Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 15 04:46:45.102759 containerd[1523]: time="2025-07-15T04:46:45.102750382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 15 04:46:45.102803 containerd[1523]: time="2025-07-15T04:46:45.102761138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 15 04:46:45.102803 containerd[1523]: time="2025-07-15T04:46:45.102772410Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 15 04:46:45.102803 containerd[1523]: time="2025-07-15T04:46:45.102797890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 15 04:46:45.102851 containerd[1523]: 
time="2025-07-15T04:46:45.102810511Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 15 04:46:45.102851 containerd[1523]: time="2025-07-15T04:46:45.102821823Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 15 04:46:45.102883 containerd[1523]: time="2025-07-15T04:46:45.102852225Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:46:45.102883 containerd[1523]: time="2025-07-15T04:46:45.102866671Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 15 04:46:45.102883 containerd[1523]: time="2025-07-15T04:46:45.102875244Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:46:45.102933 containerd[1523]: time="2025-07-15T04:46:45.102884889Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 15 04:46:45.102933 containerd[1523]: time="2025-07-15T04:46:45.102893343Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 15 04:46:45.102933 containerd[1523]: time="2025-07-15T04:46:45.102903106Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 15 04:46:45.102933 containerd[1523]: time="2025-07-15T04:46:45.102914775Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 15 04:46:45.103719 containerd[1523]: time="2025-07-15T04:46:45.103041740Z" level=info msg="runtime interface created" Jul 15 04:46:45.103719 containerd[1523]: time="2025-07-15T04:46:45.103053965Z" level=info msg="created NRI interface" Jul 15 04:46:45.103719 containerd[1523]: time="2025-07-15T04:46:45.103063689Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 15 04:46:45.103719 containerd[1523]: time="2025-07-15T04:46:45.103076429Z" level=info msg="Connect containerd service" Jul 15 04:46:45.103719 containerd[1523]: time="2025-07-15T04:46:45.103102346Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 15 04:46:45.103719 containerd[1523]: time="2025-07-15T04:46:45.103693516Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:46:45.146748 tar[1521]: linux-arm64/LICENSE Jul 15 04:46:45.146748 tar[1521]: linux-arm64/README.md Jul 15 04:46:45.161701 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jul 15 04:46:45.221358 containerd[1523]: time="2025-07-15T04:46:45.221289675Z" level=info msg="Start subscribing containerd event" Jul 15 04:46:45.221358 containerd[1523]: time="2025-07-15T04:46:45.221353892Z" level=info msg="Start recovering state" Jul 15 04:46:45.221473 containerd[1523]: time="2025-07-15T04:46:45.221448789Z" level=info msg="Start event monitor" Jul 15 04:46:45.221473 containerd[1523]: time="2025-07-15T04:46:45.221461252Z" level=info msg="Start cni network conf syncer for default" Jul 15 04:46:45.221473 containerd[1523]: time="2025-07-15T04:46:45.221470420Z" level=info msg="Start streaming server" Jul 15 04:46:45.221546 containerd[1523]: time="2025-07-15T04:46:45.221478040Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 15 04:46:45.221546 containerd[1523]: time="2025-07-15T04:46:45.221484549Z" level=info msg="runtime interface starting up..." Jul 15 04:46:45.221546 containerd[1523]: time="2025-07-15T04:46:45.221489828Z" level=info msg="starting plugins..." Jul 15 04:46:45.221546 containerd[1523]: time="2025-07-15T04:46:45.221502727Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 15 04:46:45.221611 containerd[1523]: time="2025-07-15T04:46:45.221586035Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 15 04:46:45.221657 containerd[1523]: time="2025-07-15T04:46:45.221631717Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 15 04:46:45.221709 containerd[1523]: time="2025-07-15T04:46:45.221687679Z" level=info msg="containerd successfully booted in 0.136473s" Jul 15 04:46:45.221793 systemd[1]: Started containerd.service - containerd container runtime. Jul 15 04:46:45.734352 sshd_keygen[1522]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 15 04:46:45.752969 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 15 04:46:45.755376 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 15 04:46:45.787331 systemd[1]: issuegen.service: Deactivated successfully. Jul 15 04:46:45.787577 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 15 04:46:45.789936 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 15 04:46:45.812386 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 15 04:46:45.814814 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 15 04:46:45.816566 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jul 15 04:46:45.817610 systemd[1]: Reached target getty.target - Login Prompts. Jul 15 04:46:46.760885 systemd-networkd[1431]: eth0: Gained IPv6LL Jul 15 04:46:46.763366 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 15 04:46:46.764774 systemd[1]: Reached target network-online.target - Network is Online. Jul 15 04:46:46.766808 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 15 04:46:46.768810 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:46:46.770558 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 15 04:46:46.796889 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 15 04:46:46.798150 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 15 04:46:46.798330 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 15 04:46:46.800018 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
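With containerd now serving on /run/containerd/containerd.sock (the serving... lines above), the runtime can be sanity-checked from a shell, assuming the ctr and crictl clients are available on this image:

    ctr version                 # ctr talks to /run/containerd/containerd.sock by default
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info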
Jul 15 04:46:47.317943 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:46:47.319171 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 15 04:46:47.320198 systemd[1]: Startup finished in 2.039s (kernel) + 5.460s (initrd) + 4.289s (userspace) = 11.790s. Jul 15 04:46:47.321364 (kubelet)[1638]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:46:47.727546 kubelet[1638]: E0715 04:46:47.727435 1638 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:46:47.729830 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:46:47.729964 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:46:47.730273 systemd[1]: kubelet.service: Consumed 812ms CPU time, 257.8M memory peak. Jul 15 04:46:50.104297 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 15 04:46:50.105576 systemd[1]: Started sshd@0-10.0.0.81:22-10.0.0.1:49648.service - OpenSSH per-connection server daemon (10.0.0.1:49648). Jul 15 04:46:50.198136 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 49648 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:46:50.199059 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:50.206862 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 15 04:46:50.207688 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 15 04:46:50.213898 systemd-logind[1507]: New session 1 of user core. Jul 15 04:46:50.230316 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 15 04:46:50.232492 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 15 04:46:50.248560 (systemd)[1657]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 15 04:46:50.251313 systemd-logind[1507]: New session c1 of user core. Jul 15 04:46:50.362538 systemd[1657]: Queued start job for default target default.target. Jul 15 04:46:50.384744 systemd[1657]: Created slice app.slice - User Application Slice. Jul 15 04:46:50.384771 systemd[1657]: Reached target paths.target - Paths. Jul 15 04:46:50.384810 systemd[1657]: Reached target timers.target - Timers. Jul 15 04:46:50.385933 systemd[1657]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 15 04:46:50.394914 systemd[1657]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 15 04:46:50.394974 systemd[1657]: Reached target sockets.target - Sockets. Jul 15 04:46:50.395018 systemd[1657]: Reached target basic.target - Basic System. Jul 15 04:46:50.395046 systemd[1657]: Reached target default.target - Main User Target. Jul 15 04:46:50.395070 systemd[1657]: Startup finished in 137ms. Jul 15 04:46:50.395519 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 15 04:46:50.397079 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 15 04:46:50.464936 systemd[1]: Started sshd@1-10.0.0.81:22-10.0.0.1:49662.service - OpenSSH per-connection server daemon (10.0.0.1:49662). 
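The kubelet failure above is the normal state before cluster bootstrap: /var/lib/kubelet/config.yaml is ordinarily written by kubeadm init (or whatever provisioning step runs next), so the first start simply exits until that file exists. As a rough sketch of what eventually lands there, a minimal KubeletConfiguration consistent with what this node later logs (systemd cgroup driver, static pods under /etc/kubernetes/manifests) would be:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    authentication:
      anonymous:
        enabled: false

This is illustrative only; the real file generated during bootstrap carries many more fields (TLS, eviction, DNS, and so on).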
Jul 15 04:46:50.518498 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 49662 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:46:50.519837 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:50.523731 systemd-logind[1507]: New session 2 of user core. Jul 15 04:46:50.535900 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 15 04:46:50.589367 sshd[1672]: Connection closed by 10.0.0.1 port 49662 Jul 15 04:46:50.591110 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:50.605663 systemd[1]: sshd@1-10.0.0.81:22-10.0.0.1:49662.service: Deactivated successfully. Jul 15 04:46:50.608022 systemd[1]: session-2.scope: Deactivated successfully. Jul 15 04:46:50.610054 systemd-logind[1507]: Session 2 logged out. Waiting for processes to exit. Jul 15 04:46:50.612436 systemd[1]: Started sshd@2-10.0.0.81:22-10.0.0.1:49668.service - OpenSSH per-connection server daemon (10.0.0.1:49668). Jul 15 04:46:50.613084 systemd-logind[1507]: Removed session 2. Jul 15 04:46:50.678478 sshd[1678]: Accepted publickey for core from 10.0.0.1 port 49668 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:46:50.680029 sshd-session[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:50.685130 systemd-logind[1507]: New session 3 of user core. Jul 15 04:46:50.701881 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 15 04:46:50.749421 sshd[1681]: Connection closed by 10.0.0.1 port 49668 Jul 15 04:46:50.749817 sshd-session[1678]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:50.760516 systemd[1]: sshd@2-10.0.0.81:22-10.0.0.1:49668.service: Deactivated successfully. Jul 15 04:46:50.763004 systemd[1]: session-3.scope: Deactivated successfully. Jul 15 04:46:50.763623 systemd-logind[1507]: Session 3 logged out. Waiting for processes to exit. Jul 15 04:46:50.765665 systemd[1]: Started sshd@3-10.0.0.81:22-10.0.0.1:49676.service - OpenSSH per-connection server daemon (10.0.0.1:49676). Jul 15 04:46:50.766136 systemd-logind[1507]: Removed session 3. Jul 15 04:46:50.819988 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 49676 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:46:50.821091 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:50.825677 systemd-logind[1507]: New session 4 of user core. Jul 15 04:46:50.835901 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 15 04:46:50.886037 sshd[1690]: Connection closed by 10.0.0.1 port 49676 Jul 15 04:46:50.886451 sshd-session[1687]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:50.899551 systemd[1]: sshd@3-10.0.0.81:22-10.0.0.1:49676.service: Deactivated successfully. Jul 15 04:46:50.902088 systemd[1]: session-4.scope: Deactivated successfully. Jul 15 04:46:50.902752 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit. Jul 15 04:46:50.904923 systemd[1]: Started sshd@4-10.0.0.81:22-10.0.0.1:49690.service - OpenSSH per-connection server daemon (10.0.0.1:49690). Jul 15 04:46:50.906396 systemd-logind[1507]: Removed session 4. 
Jul 15 04:46:50.949965 sshd[1696]: Accepted publickey for core from 10.0.0.1 port 49690 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:46:50.951145 sshd-session[1696]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:50.955523 systemd-logind[1507]: New session 5 of user core. Jul 15 04:46:50.976892 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 15 04:46:51.043900 sudo[1700]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 15 04:46:51.044618 sudo[1700]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:46:51.072484 sudo[1700]: pam_unix(sudo:session): session closed for user root Jul 15 04:46:51.075777 sshd[1699]: Connection closed by 10.0.0.1 port 49690 Jul 15 04:46:51.076042 sshd-session[1696]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:51.086591 systemd[1]: sshd@4-10.0.0.81:22-10.0.0.1:49690.service: Deactivated successfully. Jul 15 04:46:51.088921 systemd[1]: session-5.scope: Deactivated successfully. Jul 15 04:46:51.090296 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit. Jul 15 04:46:51.091434 systemd[1]: Started sshd@5-10.0.0.81:22-10.0.0.1:49694.service - OpenSSH per-connection server daemon (10.0.0.1:49694). Jul 15 04:46:51.092260 systemd-logind[1507]: Removed session 5. Jul 15 04:46:51.164332 sshd[1706]: Accepted publickey for core from 10.0.0.1 port 49694 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:46:51.166353 sshd-session[1706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:51.170399 systemd-logind[1507]: New session 6 of user core. Jul 15 04:46:51.182859 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 15 04:46:51.232535 sudo[1711]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 15 04:46:51.232823 sudo[1711]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:46:51.271743 sudo[1711]: pam_unix(sudo:session): session closed for user root Jul 15 04:46:51.276590 sudo[1710]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 15 04:46:51.276876 sudo[1710]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:46:51.284744 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 15 04:46:51.323214 augenrules[1733]: No rules Jul 15 04:46:51.324415 systemd[1]: audit-rules.service: Deactivated successfully. Jul 15 04:46:51.324601 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 15 04:46:51.325666 sudo[1710]: pam_unix(sudo:session): session closed for user root Jul 15 04:46:51.328755 sshd[1709]: Connection closed by 10.0.0.1 port 49694 Jul 15 04:46:51.329522 sshd-session[1706]: pam_unix(sshd:session): session closed for user core Jul 15 04:46:51.336444 systemd[1]: sshd@5-10.0.0.81:22-10.0.0.1:49694.service: Deactivated successfully. Jul 15 04:46:51.338869 systemd[1]: session-6.scope: Deactivated successfully. Jul 15 04:46:51.339483 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit. Jul 15 04:46:51.342963 systemd[1]: Started sshd@6-10.0.0.81:22-10.0.0.1:49696.service - OpenSSH per-connection server daemon (10.0.0.1:49696). Jul 15 04:46:51.343896 systemd-logind[1507]: Removed session 6. 
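The sequence above (setenforce 1, removing the two rules files, then restarting audit-rules) ends with augenrules reporting "No rules", which is the expected result: augenrules(8) assembles every *.rules file under /etc/audit/rules.d/ into /etc/audit/audit.rules, and both files that existed have just been deleted. The same state can be confirmed by hand, assuming the standard audit userspace tools are installed:

    augenrules --check    # is the compiled audit.rules up to date with rules.d?
    augenrules --load     # rebuild and load the (now empty) rule set
    auditctl -l           # list the rules currently loaded in the kernel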
Jul 15 04:46:51.397662 sshd[1742]: Accepted publickey for core from 10.0.0.1 port 49696 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:46:51.398779 sshd-session[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:46:51.403245 systemd-logind[1507]: New session 7 of user core. Jul 15 04:46:51.418873 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 15 04:46:51.469156 sudo[1746]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 15 04:46:51.469525 sudo[1746]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 15 04:46:51.810701 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 15 04:46:51.827099 (dockerd)[1766]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 15 04:46:52.095802 dockerd[1766]: time="2025-07-15T04:46:52.095167705Z" level=info msg="Starting up" Jul 15 04:46:52.096821 dockerd[1766]: time="2025-07-15T04:46:52.096794220Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 15 04:46:52.106248 dockerd[1766]: time="2025-07-15T04:46:52.106216948Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jul 15 04:46:52.203847 dockerd[1766]: time="2025-07-15T04:46:52.203801969Z" level=info msg="Loading containers: start." Jul 15 04:46:52.211742 kernel: Initializing XFRM netlink socket Jul 15 04:46:52.443041 systemd-networkd[1431]: docker0: Link UP Jul 15 04:46:52.448439 dockerd[1766]: time="2025-07-15T04:46:52.448392582Z" level=info msg="Loading containers: done." Jul 15 04:46:52.459858 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck548875275-merged.mount: Deactivated successfully. Jul 15 04:46:52.461537 dockerd[1766]: time="2025-07-15T04:46:52.461231251Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 15 04:46:52.461537 dockerd[1766]: time="2025-07-15T04:46:52.461310020Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jul 15 04:46:52.461537 dockerd[1766]: time="2025-07-15T04:46:52.461389186Z" level=info msg="Initializing buildkit" Jul 15 04:46:52.482354 dockerd[1766]: time="2025-07-15T04:46:52.482317926Z" level=info msg="Completed buildkit initialization" Jul 15 04:46:52.489672 dockerd[1766]: time="2025-07-15T04:46:52.489634123Z" level=info msg="Daemon has completed initialization" Jul 15 04:46:52.490034 dockerd[1766]: time="2025-07-15T04:46:52.489923206Z" level=info msg="API listen on /run/docker.sock" Jul 15 04:46:52.490029 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 15 04:46:53.047963 containerd[1523]: time="2025-07-15T04:46:53.047906124Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 15 04:46:53.652647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1249897285.mount: Deactivated successfully. 
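The PullImage request above, and the ones that follow, arrive over the CRI API on containerd's socket. The sudo'd install.sh is not shown in this log, but pulls like these are typically produced by pre-fetching the control-plane images, for example (illustrative commands, not necessarily what the script ran):

    kubeadm config images pull --kubernetes-version v1.31.10
    # or image by image over CRI:
    crictl pull registry.k8s.io/kube-apiserver:v1.31.10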
Jul 15 04:46:54.618669 containerd[1523]: time="2025-07-15T04:46:54.618616355Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:54.619587 containerd[1523]: time="2025-07-15T04:46:54.619543148Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 15 04:46:54.620382 containerd[1523]: time="2025-07-15T04:46:54.620350225Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:54.622669 containerd[1523]: time="2025-07-15T04:46:54.622609283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:54.623740 containerd[1523]: time="2025-07-15T04:46:54.623686938Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.575741514s" Jul 15 04:46:54.624011 containerd[1523]: time="2025-07-15T04:46:54.623828958Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 15 04:46:54.626907 containerd[1523]: time="2025-07-15T04:46:54.626823962Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 15 04:46:55.815050 containerd[1523]: time="2025-07-15T04:46:55.815008815Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:55.815468 containerd[1523]: time="2025-07-15T04:46:55.815433397Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 15 04:46:55.816445 containerd[1523]: time="2025-07-15T04:46:55.816397931Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:55.820879 containerd[1523]: time="2025-07-15T04:46:55.820823821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:55.821904 containerd[1523]: time="2025-07-15T04:46:55.821870299Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 1.19495577s" Jul 15 04:46:55.821947 containerd[1523]: time="2025-07-15T04:46:55.821909937Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 15 
04:46:55.822447 containerd[1523]: time="2025-07-15T04:46:55.822405948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 15 04:46:56.845915 containerd[1523]: time="2025-07-15T04:46:56.845851442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:56.847176 containerd[1523]: time="2025-07-15T04:46:56.847117308Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 15 04:46:56.847848 containerd[1523]: time="2025-07-15T04:46:56.847696845Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:56.849897 containerd[1523]: time="2025-07-15T04:46:56.849844409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:56.850838 containerd[1523]: time="2025-07-15T04:46:56.850814927Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.028374519s" Jul 15 04:46:56.850959 containerd[1523]: time="2025-07-15T04:46:56.850920602Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 15 04:46:56.851624 containerd[1523]: time="2025-07-15T04:46:56.851540545Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 15 04:46:57.774091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3821432273.mount: Deactivated successfully. Jul 15 04:46:57.775069 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 15 04:46:57.776441 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:46:57.888442 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:46:57.891573 (kubelet)[2065]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 15 04:46:57.934900 kubelet[2065]: E0715 04:46:57.934842 2065 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 15 04:46:57.938957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 15 04:46:57.939083 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 15 04:46:57.939364 systemd[1]: kubelet.service: Consumed 143ms CPU time, 105.4M memory peak. 
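Both kubelet starts so far have failed for the same missing-config reason, and systemd schedules the retry roughly ten seconds later ("restart counter is at 1"). The unit file itself is not shown in this log, but the referenced-but-unset KUBELET_EXTRA_ARGS / KUBELET_KUBEADM_ARGS variables and the 10 s restart cadence are consistent with a drop-in along these lines (an assumed sketch in the style of the usual kubeadm packaging, not read from this host; file paths are illustrative):

    [Service]
    Restart=always
    RestartSec=10
    # kubeadm-style drop-ins provide the variables the unit references:
    EnvironmentFile=-/var/lib/kubelet/kubeadm-flags.env
    EnvironmentFile=-/etc/default/kubelet
    ExecStart=
    ExecStart=/usr/bin/kubelet $KUBELET_KUBEADM_ARGS $KUBELET_EXTRA_ARGS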
Jul 15 04:46:58.292865 containerd[1523]: time="2025-07-15T04:46:58.292816633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:58.294051 containerd[1523]: time="2025-07-15T04:46:58.294009375Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 15 04:46:58.294954 containerd[1523]: time="2025-07-15T04:46:58.294898300Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:58.296934 containerd[1523]: time="2025-07-15T04:46:58.296876675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:58.297501 containerd[1523]: time="2025-07-15T04:46:58.297472029Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.445901834s" Jul 15 04:46:58.297577 containerd[1523]: time="2025-07-15T04:46:58.297502526Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 15 04:46:58.298066 containerd[1523]: time="2025-07-15T04:46:58.298002482Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 15 04:46:58.953937 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3706380708.mount: Deactivated successfully. 
Jul 15 04:46:59.585303 containerd[1523]: time="2025-07-15T04:46:59.585257023Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:59.586212 containerd[1523]: time="2025-07-15T04:46:59.585625380Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 15 04:46:59.586987 containerd[1523]: time="2025-07-15T04:46:59.586928266Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:59.589746 containerd[1523]: time="2025-07-15T04:46:59.589684283Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:46:59.590673 containerd[1523]: time="2025-07-15T04:46:59.590626507Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.292594802s" Jul 15 04:46:59.590673 containerd[1523]: time="2025-07-15T04:46:59.590657370Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 15 04:46:59.591244 containerd[1523]: time="2025-07-15T04:46:59.591073137Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 15 04:47:00.001031 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1771687216.mount: Deactivated successfully. 
Jul 15 04:47:00.004805 containerd[1523]: time="2025-07-15T04:47:00.004759261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:47:00.005860 containerd[1523]: time="2025-07-15T04:47:00.005829932Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 15 04:47:00.006565 containerd[1523]: time="2025-07-15T04:47:00.006517019Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:47:00.010745 containerd[1523]: time="2025-07-15T04:47:00.010450935Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 15 04:47:00.012695 containerd[1523]: time="2025-07-15T04:47:00.012662230Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 421.557191ms" Jul 15 04:47:00.012825 containerd[1523]: time="2025-07-15T04:47:00.012807360Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 15 04:47:00.013389 containerd[1523]: time="2025-07-15T04:47:00.013365827Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 15 04:47:00.505883 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3088327338.mount: Deactivated successfully. 
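Everything pulled so far sits in containerd's "k8s.io" namespace (the one registered with NRI earlier), so the local image cache can be inspected directly while the larger etcd layer downloads, again assuming the ctr / crictl clients are on the host:

    ctr --namespace k8s.io images ls
    crictl images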
Jul 15 04:47:02.294910 containerd[1523]: time="2025-07-15T04:47:02.294855434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:02.410440 containerd[1523]: time="2025-07-15T04:47:02.410387190Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 15 04:47:02.412565 containerd[1523]: time="2025-07-15T04:47:02.412504085Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:02.415881 containerd[1523]: time="2025-07-15T04:47:02.415824490Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:02.416911 containerd[1523]: time="2025-07-15T04:47:02.416875358Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 2.403478101s" Jul 15 04:47:02.416911 containerd[1523]: time="2025-07-15T04:47:02.416908910Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 15 04:47:06.933463 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:47:06.933600 systemd[1]: kubelet.service: Consumed 143ms CPU time, 105.4M memory peak. Jul 15 04:47:06.935570 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:47:06.957228 systemd[1]: Reload requested from client PID 2215 ('systemctl') (unit session-7.scope)... Jul 15 04:47:06.957242 systemd[1]: Reloading... Jul 15 04:47:07.037762 zram_generator::config[2256]: No configuration found. Jul 15 04:47:07.139628 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:47:07.225815 systemd[1]: Reloading finished in 268 ms. Jul 15 04:47:07.278246 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 15 04:47:07.278332 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 15 04:47:07.278584 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:47:07.278636 systemd[1]: kubelet.service: Consumed 91ms CPU time, 95.1M memory peak. Jul 15 04:47:07.280186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:47:07.393008 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:47:07.397498 (kubelet)[2303]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:47:07.440433 kubelet[2303]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:47:07.440433 kubelet[2303]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Jul 15 04:47:07.440433 kubelet[2303]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:47:07.440847 kubelet[2303]: I0715 04:47:07.440514 2303 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:47:07.853395 kubelet[2303]: I0715 04:47:07.853336 2303 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 04:47:07.853395 kubelet[2303]: I0715 04:47:07.853382 2303 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:47:07.853731 kubelet[2303]: I0715 04:47:07.853699 2303 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 04:47:07.895289 kubelet[2303]: E0715 04:47:07.895225 2303 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.81:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:47:07.897053 kubelet[2303]: I0715 04:47:07.896895 2303 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:47:07.906313 kubelet[2303]: I0715 04:47:07.906278 2303 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:47:07.914335 kubelet[2303]: I0715 04:47:07.914304 2303 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 15 04:47:07.914629 kubelet[2303]: I0715 04:47:07.914613 2303 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 04:47:07.914855 kubelet[2303]: I0715 04:47:07.914820 2303 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:47:07.915106 kubelet[2303]: I0715 04:47:07.914922 2303 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:47:07.915369 kubelet[2303]: I0715 04:47:07.915354 2303 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 04:47:07.915428 kubelet[2303]: I0715 04:47:07.915420 2303 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 04:47:07.915835 kubelet[2303]: I0715 04:47:07.915818 2303 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:47:07.918097 kubelet[2303]: I0715 04:47:07.918070 2303 kubelet.go:408] "Attempting to sync node with API server" Jul 15 04:47:07.918212 kubelet[2303]: I0715 04:47:07.918201 2303 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:47:07.918288 kubelet[2303]: I0715 04:47:07.918279 2303 kubelet.go:314] "Adding apiserver pod source" Jul 15 04:47:07.918407 kubelet[2303]: I0715 04:47:07.918398 2303 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:47:07.923140 kubelet[2303]: W0715 04:47:07.923069 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 15 04:47:07.923229 kubelet[2303]: E0715 04:47:07.923150 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:47:07.923753 kubelet[2303]: W0715 04:47:07.923237 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 15 04:47:07.923753 kubelet[2303]: E0715 04:47:07.923277 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:47:07.925541 kubelet[2303]: I0715 04:47:07.925509 2303 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:47:07.926800 kubelet[2303]: I0715 04:47:07.926781 2303 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 04:47:07.926975 kubelet[2303]: W0715 04:47:07.926964 2303 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 15 04:47:07.928609 kubelet[2303]: I0715 04:47:07.928496 2303 server.go:1274] "Started kubelet" Jul 15 04:47:07.930441 kubelet[2303]: I0715 04:47:07.930267 2303 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:47:07.930633 kubelet[2303]: I0715 04:47:07.930608 2303 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:47:07.932258 kubelet[2303]: I0715 04:47:07.931755 2303 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:47:07.932760 kubelet[2303]: I0715 04:47:07.932736 2303 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:47:07.933051 kubelet[2303]: I0715 04:47:07.933026 2303 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:47:07.933401 kubelet[2303]: E0715 04:47:07.931956 2303 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.81:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.81:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18525354a011f013 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 04:47:07.928432659 +0000 UTC m=+0.527995781,LastTimestamp:2025-07-15 04:47:07.928432659 +0000 UTC m=+0.527995781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 04:47:07.933565 kubelet[2303]: I0715 04:47:07.933404 2303 server.go:449] "Adding debug handlers to kubelet server" Jul 15 04:47:07.934503 kubelet[2303]: I0715 04:47:07.934470 2303 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 04:47:07.934586 kubelet[2303]: I0715 04:47:07.934570 2303 
desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 04:47:07.934626 kubelet[2303]: I0715 04:47:07.934619 2303 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:47:07.934941 kubelet[2303]: E0715 04:47:07.934914 2303 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 15 04:47:07.935104 kubelet[2303]: I0715 04:47:07.935079 2303 factory.go:221] Registration of the systemd container factory successfully Jul 15 04:47:07.935200 kubelet[2303]: W0715 04:47:07.935143 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 15 04:47:07.935230 kubelet[2303]: E0715 04:47:07.935214 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:47:07.935256 kubelet[2303]: I0715 04:47:07.935172 2303 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:47:07.935746 kubelet[2303]: E0715 04:47:07.935701 2303 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:47:07.935820 kubelet[2303]: E0715 04:47:07.935697 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="200ms" Jul 15 04:47:07.937234 kubelet[2303]: I0715 04:47:07.937173 2303 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:47:07.947849 kubelet[2303]: I0715 04:47:07.947824 2303 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 04:47:07.947849 kubelet[2303]: I0715 04:47:07.947842 2303 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 04:47:07.947849 kubelet[2303]: I0715 04:47:07.947861 2303 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:47:07.951023 kubelet[2303]: I0715 04:47:07.950963 2303 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 04:47:07.952072 kubelet[2303]: I0715 04:47:07.952040 2303 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 04:47:07.952072 kubelet[2303]: I0715 04:47:07.952065 2303 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 04:47:07.952183 kubelet[2303]: I0715 04:47:07.952086 2303 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 04:47:07.952183 kubelet[2303]: E0715 04:47:07.952131 2303 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:47:08.036016 kubelet[2303]: E0715 04:47:08.035940 2303 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:47:08.053209 kubelet[2303]: E0715 04:47:08.053151 2303 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 15 04:47:08.061105 kubelet[2303]: W0715 04:47:08.061033 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 15 04:47:08.061160 kubelet[2303]: E0715 04:47:08.061114 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.81:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:47:08.061832 kubelet[2303]: I0715 04:47:08.061801 2303 policy_none.go:49] "None policy: Start" Jul 15 04:47:08.062605 kubelet[2303]: I0715 04:47:08.062586 2303 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 04:47:08.062666 kubelet[2303]: I0715 04:47:08.062629 2303 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:47:08.069526 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 15 04:47:08.083645 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 15 04:47:08.087009 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 15 04:47:08.094747 kubelet[2303]: I0715 04:47:08.094543 2303 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:47:08.094852 kubelet[2303]: I0715 04:47:08.094766 2303 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:47:08.094852 kubelet[2303]: I0715 04:47:08.094777 2303 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:47:08.095372 kubelet[2303]: I0715 04:47:08.095322 2303 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:47:08.096135 kubelet[2303]: E0715 04:47:08.096096 2303 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 15 04:47:08.136586 kubelet[2303]: E0715 04:47:08.136541 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="400ms" Jul 15 04:47:08.196784 kubelet[2303]: I0715 04:47:08.196750 2303 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:47:08.197254 kubelet[2303]: E0715 04:47:08.197215 2303 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 15 04:47:08.265714 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. Jul 15 04:47:08.287495 systemd[1]: Created slice kubepods-burstable-pod053976a3b4f8563497a8a85e0c894dd8.slice - libcontainer container kubepods-burstable-pod053976a3b4f8563497a8a85e0c894dd8.slice. Jul 15 04:47:08.310289 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. 
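Each kubepods-burstable-pod<hash>.slice created here corresponds to one of the control-plane static pods whose manifests the kubelet reads from the static pod path logged earlier (/etc/kubernetes/manifests); the hash in the slice name is the pod UID, as the RunPodSandbox lines below confirm (b35b… is kube-scheduler-localhost, 053976… kube-apiserver-localhost, 3f0470… kube-controller-manager-localhost). The mapping can be checked from a shell, assuming crictl is present:

    systemd-cgls --unit kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice
    crictl pods --name kube-scheduler-localhost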
Jul 15 04:47:08.399707 kubelet[2303]: I0715 04:47:08.399523 2303 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:47:08.400078 kubelet[2303]: E0715 04:47:08.400043 2303 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 15 04:47:08.435844 kubelet[2303]: I0715 04:47:08.435792 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:47:08.435844 kubelet[2303]: I0715 04:47:08.435842 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:08.436002 kubelet[2303]: I0715 04:47:08.435864 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:08.436002 kubelet[2303]: I0715 04:47:08.435881 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:08.436002 kubelet[2303]: I0715 04:47:08.435899 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 04:47:08.436002 kubelet[2303]: I0715 04:47:08.435913 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:47:08.436002 kubelet[2303]: I0715 04:47:08.435927 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:47:08.436098 kubelet[2303]: I0715 04:47:08.435953 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:08.436098 kubelet[2303]: I0715 04:47:08.435970 2303 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:08.538051 kubelet[2303]: E0715 04:47:08.537996 2303 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.81:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.81:6443: connect: connection refused" interval="800ms" Jul 15 04:47:08.586850 containerd[1523]: time="2025-07-15T04:47:08.586799834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:08.609570 containerd[1523]: time="2025-07-15T04:47:08.609525982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:053976a3b4f8563497a8a85e0c894dd8,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:08.613353 containerd[1523]: time="2025-07-15T04:47:08.613241348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:08.694870 containerd[1523]: time="2025-07-15T04:47:08.694392907Z" level=info msg="connecting to shim 2c8bf631c2c84e2e0a4b3dad9b697f1a530c6a773eb80aff99922baf2459ed91" address="unix:///run/containerd/s/8db5010c7f3953760ad0a8146b8aabf1abd8479b1337ae32b4f3b5b99cd7af52" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:08.701260 containerd[1523]: time="2025-07-15T04:47:08.701218426Z" level=info msg="connecting to shim 17ea36ba92ef108c075cd57d007654b6f3abbd5c6f5f0e82c1be335ce48f8ae7" address="unix:///run/containerd/s/1cd0a35ee09bc5780a40417d5ba0e74ba7a1e0b2e5f1d74d081d24fb85685828" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:08.706122 containerd[1523]: time="2025-07-15T04:47:08.706081926Z" level=info msg="connecting to shim 3a85fc99e54f71d5bfc8a983129deaa292e310555be13384810da53220925310" address="unix:///run/containerd/s/fc5495693e4f16e7335415d2a353630fed9ef9790cc48065c0d6af21027b64c2" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:08.724940 systemd[1]: Started cri-containerd-2c8bf631c2c84e2e0a4b3dad9b697f1a530c6a773eb80aff99922baf2459ed91.scope - libcontainer container 2c8bf631c2c84e2e0a4b3dad9b697f1a530c6a773eb80aff99922baf2459ed91. Jul 15 04:47:08.728178 systemd[1]: Started cri-containerd-17ea36ba92ef108c075cd57d007654b6f3abbd5c6f5f0e82c1be335ce48f8ae7.scope - libcontainer container 17ea36ba92ef108c075cd57d007654b6f3abbd5c6f5f0e82c1be335ce48f8ae7. Jul 15 04:47:08.733254 systemd[1]: Started cri-containerd-3a85fc99e54f71d5bfc8a983129deaa292e310555be13384810da53220925310.scope - libcontainer container 3a85fc99e54f71d5bfc8a983129deaa292e310555be13384810da53220925310. 
Jul 15 04:47:08.743169 kubelet[2303]: W0715 04:47:08.743068 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 15 04:47:08.743291 kubelet[2303]: E0715 04:47:08.743208 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.81:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:47:08.772902 containerd[1523]: time="2025-07-15T04:47:08.772312208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"2c8bf631c2c84e2e0a4b3dad9b697f1a530c6a773eb80aff99922baf2459ed91\"" Jul 15 04:47:08.778033 containerd[1523]: time="2025-07-15T04:47:08.777985200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"3a85fc99e54f71d5bfc8a983129deaa292e310555be13384810da53220925310\"" Jul 15 04:47:08.778201 containerd[1523]: time="2025-07-15T04:47:08.778169994Z" level=info msg="CreateContainer within sandbox \"2c8bf631c2c84e2e0a4b3dad9b697f1a530c6a773eb80aff99922baf2459ed91\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 15 04:47:08.779028 containerd[1523]: time="2025-07-15T04:47:08.778981043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:053976a3b4f8563497a8a85e0c894dd8,Namespace:kube-system,Attempt:0,} returns sandbox id \"17ea36ba92ef108c075cd57d007654b6f3abbd5c6f5f0e82c1be335ce48f8ae7\"" Jul 15 04:47:08.781725 containerd[1523]: time="2025-07-15T04:47:08.781678205Z" level=info msg="CreateContainer within sandbox \"3a85fc99e54f71d5bfc8a983129deaa292e310555be13384810da53220925310\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 15 04:47:08.786121 containerd[1523]: time="2025-07-15T04:47:08.786074290Z" level=info msg="Container f1959c5d686ae267375dbfb7db8b47f27335dbc86028f7e3aa6b141809ebdc58: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:08.792134 containerd[1523]: time="2025-07-15T04:47:08.792080494Z" level=info msg="Container 9b1022c1c9ff351dbca941933934bf5fd666c878a8df2a6d5e59e3666feece3c: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:08.793201 containerd[1523]: time="2025-07-15T04:47:08.793153921Z" level=info msg="CreateContainer within sandbox \"17ea36ba92ef108c075cd57d007654b6f3abbd5c6f5f0e82c1be335ce48f8ae7\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 15 04:47:08.802469 kubelet[2303]: I0715 04:47:08.802434 2303 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:47:08.803239 containerd[1523]: time="2025-07-15T04:47:08.803089354Z" level=info msg="CreateContainer within sandbox \"3a85fc99e54f71d5bfc8a983129deaa292e310555be13384810da53220925310\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9b1022c1c9ff351dbca941933934bf5fd666c878a8df2a6d5e59e3666feece3c\"" Jul 15 04:47:08.803750 containerd[1523]: time="2025-07-15T04:47:08.803114150Z" level=info msg="CreateContainer within sandbox 
\"2c8bf631c2c84e2e0a4b3dad9b697f1a530c6a773eb80aff99922baf2459ed91\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"f1959c5d686ae267375dbfb7db8b47f27335dbc86028f7e3aa6b141809ebdc58\"" Jul 15 04:47:08.803798 kubelet[2303]: E0715 04:47:08.803338 2303 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.81:6443/api/v1/nodes\": dial tcp 10.0.0.81:6443: connect: connection refused" node="localhost" Jul 15 04:47:08.803920 containerd[1523]: time="2025-07-15T04:47:08.803894493Z" level=info msg="StartContainer for \"9b1022c1c9ff351dbca941933934bf5fd666c878a8df2a6d5e59e3666feece3c\"" Jul 15 04:47:08.804022 containerd[1523]: time="2025-07-15T04:47:08.803906472Z" level=info msg="StartContainer for \"f1959c5d686ae267375dbfb7db8b47f27335dbc86028f7e3aa6b141809ebdc58\"" Jul 15 04:47:08.804088 containerd[1523]: time="2025-07-15T04:47:08.804065671Z" level=info msg="Container d75cfa62cbd689b60c8afa47e7db7731c68ca5c39f0a38695aa924f9b12bb0fb: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:08.805002 containerd[1523]: time="2025-07-15T04:47:08.804972192Z" level=info msg="connecting to shim f1959c5d686ae267375dbfb7db8b47f27335dbc86028f7e3aa6b141809ebdc58" address="unix:///run/containerd/s/8db5010c7f3953760ad0a8146b8aabf1abd8479b1337ae32b4f3b5b99cd7af52" protocol=ttrpc version=3 Jul 15 04:47:08.805075 containerd[1523]: time="2025-07-15T04:47:08.805018870Z" level=info msg="connecting to shim 9b1022c1c9ff351dbca941933934bf5fd666c878a8df2a6d5e59e3666feece3c" address="unix:///run/containerd/s/fc5495693e4f16e7335415d2a353630fed9ef9790cc48065c0d6af21027b64c2" protocol=ttrpc version=3 Jul 15 04:47:08.810599 containerd[1523]: time="2025-07-15T04:47:08.810537854Z" level=info msg="CreateContainer within sandbox \"17ea36ba92ef108c075cd57d007654b6f3abbd5c6f5f0e82c1be335ce48f8ae7\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"d75cfa62cbd689b60c8afa47e7db7731c68ca5c39f0a38695aa924f9b12bb0fb\"" Jul 15 04:47:08.811224 containerd[1523]: time="2025-07-15T04:47:08.811192539Z" level=info msg="StartContainer for \"d75cfa62cbd689b60c8afa47e7db7731c68ca5c39f0a38695aa924f9b12bb0fb\"" Jul 15 04:47:08.812821 containerd[1523]: time="2025-07-15T04:47:08.812787685Z" level=info msg="connecting to shim d75cfa62cbd689b60c8afa47e7db7731c68ca5c39f0a38695aa924f9b12bb0fb" address="unix:///run/containerd/s/1cd0a35ee09bc5780a40417d5ba0e74ba7a1e0b2e5f1d74d081d24fb85685828" protocol=ttrpc version=3 Jul 15 04:47:08.825898 systemd[1]: Started cri-containerd-9b1022c1c9ff351dbca941933934bf5fd666c878a8df2a6d5e59e3666feece3c.scope - libcontainer container 9b1022c1c9ff351dbca941933934bf5fd666c878a8df2a6d5e59e3666feece3c. Jul 15 04:47:08.826892 systemd[1]: Started cri-containerd-f1959c5d686ae267375dbfb7db8b47f27335dbc86028f7e3aa6b141809ebdc58.scope - libcontainer container f1959c5d686ae267375dbfb7db8b47f27335dbc86028f7e3aa6b141809ebdc58. Jul 15 04:47:08.829979 systemd[1]: Started cri-containerd-d75cfa62cbd689b60c8afa47e7db7731c68ca5c39f0a38695aa924f9b12bb0fb.scope - libcontainer container d75cfa62cbd689b60c8afa47e7db7731c68ca5c39f0a38695aa924f9b12bb0fb. 
Jul 15 04:47:08.877163 containerd[1523]: time="2025-07-15T04:47:08.877026280Z" level=info msg="StartContainer for \"f1959c5d686ae267375dbfb7db8b47f27335dbc86028f7e3aa6b141809ebdc58\" returns successfully" Jul 15 04:47:08.885006 containerd[1523]: time="2025-07-15T04:47:08.884875353Z" level=info msg="StartContainer for \"d75cfa62cbd689b60c8afa47e7db7731c68ca5c39f0a38695aa924f9b12bb0fb\" returns successfully" Jul 15 04:47:08.887337 containerd[1523]: time="2025-07-15T04:47:08.887303470Z" level=info msg="StartContainer for \"9b1022c1c9ff351dbca941933934bf5fd666c878a8df2a6d5e59e3666feece3c\" returns successfully" Jul 15 04:47:08.991012 kubelet[2303]: W0715 04:47:08.990523 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 15 04:47:08.991757 kubelet[2303]: E0715 04:47:08.991162 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.81:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:47:09.049870 kubelet[2303]: W0715 04:47:09.049806 2303 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.81:6443: connect: connection refused Jul 15 04:47:09.050084 kubelet[2303]: E0715 04:47:09.049879 2303 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.81:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.81:6443: connect: connection refused" logger="UnhandledError" Jul 15 04:47:09.605216 kubelet[2303]: I0715 04:47:09.605171 2303 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:47:10.743124 kubelet[2303]: E0715 04:47:10.743062 2303 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 15 04:47:10.881991 kubelet[2303]: E0715 04:47:10.881842 2303 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{localhost.18525354a011f013 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-15 04:47:07.928432659 +0000 UTC m=+0.527995781,LastTimestamp:2025-07-15 04:47:07.928432659 +0000 UTC m=+0.527995781,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 15 04:47:10.940291 kubelet[2303]: I0715 04:47:10.940182 2303 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 04:47:10.940291 kubelet[2303]: E0715 04:47:10.940226 2303 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 15 04:47:10.957134 kubelet[2303]: E0715 04:47:10.957092 2303 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:47:11.057860 kubelet[2303]: E0715 04:47:11.057407 2303 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:47:11.921772 kubelet[2303]: I0715 04:47:11.921705 2303 apiserver.go:52] "Watching apiserver" Jul 15 04:47:11.935695 kubelet[2303]: I0715 04:47:11.935631 2303 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 04:47:12.849981 systemd[1]: Reload requested from client PID 2579 ('systemctl') (unit session-7.scope)... Jul 15 04:47:12.849995 systemd[1]: Reloading... Jul 15 04:47:12.919777 zram_generator::config[2628]: No configuration found. Jul 15 04:47:12.981268 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 15 04:47:13.079584 systemd[1]: Reloading finished in 229 ms. Jul 15 04:47:13.105382 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:47:13.118749 systemd[1]: kubelet.service: Deactivated successfully. Jul 15 04:47:13.118990 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:47:13.119048 systemd[1]: kubelet.service: Consumed 931ms CPU time, 128M memory peak. Jul 15 04:47:13.120833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 15 04:47:13.241648 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 15 04:47:13.256140 (kubelet)[2664]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 15 04:47:13.296240 kubelet[2664]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:47:13.296240 kubelet[2664]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 15 04:47:13.296240 kubelet[2664]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 15 04:47:13.296554 kubelet[2664]: I0715 04:47:13.296288 2664 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 15 04:47:13.303750 kubelet[2664]: I0715 04:47:13.302916 2664 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 15 04:47:13.303750 kubelet[2664]: I0715 04:47:13.302948 2664 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 15 04:47:13.303750 kubelet[2664]: I0715 04:47:13.303169 2664 server.go:934] "Client rotation is on, will bootstrap in background" Jul 15 04:47:13.304763 kubelet[2664]: I0715 04:47:13.304737 2664 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
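Node registration is a retry loop: the first kubelet (pid 2303) attempts at 04:47:08.196, 08.399, 08.802 and 09.605, all against the not-yet-ready API server, and the node finally shows as registered at 04:47:10.940, roughly 2.7s after the first try and just before the reload above. A sketch that extracts that timeline from the hypothetical node.log export (the journal prefix has no year, so strptime's 1900 default applies, which is fine for same-day ordering):

import re
from datetime import datetime

TS = re.compile(r'^(\w{3} \d+ \d{2}:\d{2}:\d{2}\.\d+)')

def ts(line):
    m = TS.match(line)
    return datetime.strptime(m.group(1), "%b %d %H:%M:%S.%f") if m else None

events = []
with open("node.log") as f:
    for line in f:
        if "Attempting to register node" in line:
            events.append(("attempt", ts(line)))
        elif "Successfully registered node" in line:
            events.append(("registered", ts(line)))
for kind, when in events:
    print(kind, when.time())               # prints every attempt and each successful registration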
Jul 15 04:47:13.306954 kubelet[2664]: I0715 04:47:13.306906 2664 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 15 04:47:13.311271 kubelet[2664]: I0715 04:47:13.311249 2664 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 15 04:47:13.313646 kubelet[2664]: I0715 04:47:13.313623 2664 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 15 04:47:13.313798 kubelet[2664]: I0715 04:47:13.313782 2664 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 15 04:47:13.313933 kubelet[2664]: I0715 04:47:13.313901 2664 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 15 04:47:13.314125 kubelet[2664]: I0715 04:47:13.313933 2664 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 15 04:47:13.314194 kubelet[2664]: I0715 04:47:13.314133 2664 topology_manager.go:138] "Creating topology manager with none policy" Jul 15 04:47:13.314194 kubelet[2664]: I0715 04:47:13.314142 2664 container_manager_linux.go:300] "Creating device plugin manager" Jul 15 04:47:13.314194 kubelet[2664]: I0715 04:47:13.314178 2664 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:47:13.314304 kubelet[2664]: I0715 04:47:13.314283 2664 kubelet.go:408] "Attempting to sync node with API server" Jul 15 04:47:13.314304 kubelet[2664]: I0715 04:47:13.314298 2664 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 15 04:47:13.314304 kubelet[2664]: I0715 04:47:13.314318 2664 kubelet.go:314] "Adding apiserver pod source" Jul 15 04:47:13.314304 kubelet[2664]: I0715 04:47:13.314331 2664 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 15 04:47:13.315098 kubelet[2664]: I0715 04:47:13.315061 2664 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v2.0.5" apiVersion="v1" Jul 15 04:47:13.315954 kubelet[2664]: I0715 04:47:13.315923 2664 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 15 04:47:13.316943 kubelet[2664]: I0715 04:47:13.316917 2664 server.go:1274] "Started kubelet" Jul 15 04:47:13.318029 kubelet[2664]: I0715 04:47:13.317994 2664 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 15 04:47:13.318774 kubelet[2664]: I0715 04:47:13.317788 2664 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 15 04:47:13.318860 kubelet[2664]: I0715 04:47:13.318787 2664 server.go:449] "Adding debug handlers to kubelet server" Jul 15 04:47:13.318964 kubelet[2664]: I0715 04:47:13.318939 2664 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 15 04:47:13.319188 kubelet[2664]: I0715 04:47:13.319162 2664 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 15 04:47:13.320210 kubelet[2664]: I0715 04:47:13.320190 2664 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 15 04:47:13.321472 kubelet[2664]: I0715 04:47:13.321450 2664 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 15 04:47:13.321652 kubelet[2664]: I0715 04:47:13.321637 2664 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 15 04:47:13.321869 kubelet[2664]: I0715 04:47:13.321847 2664 reconciler.go:26] "Reconciler: start to sync state" Jul 15 04:47:13.322008 kubelet[2664]: E0715 04:47:13.321663 2664 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 15 04:47:13.325899 kubelet[2664]: I0715 04:47:13.325873 2664 factory.go:221] Registration of the systemd container factory successfully Jul 15 04:47:13.326083 kubelet[2664]: I0715 04:47:13.326059 2664 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 15 04:47:13.330190 kubelet[2664]: I0715 04:47:13.330140 2664 factory.go:221] Registration of the containerd container factory successfully Jul 15 04:47:13.343224 kubelet[2664]: I0715 04:47:13.343172 2664 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 15 04:47:13.344977 kubelet[2664]: I0715 04:47:13.344642 2664 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 15 04:47:13.345086 kubelet[2664]: I0715 04:47:13.345071 2664 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 15 04:47:13.345148 kubelet[2664]: I0715 04:47:13.345138 2664 kubelet.go:2321] "Starting kubelet main sync loop" Jul 15 04:47:13.345265 kubelet[2664]: E0715 04:47:13.345240 2664 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 15 04:47:13.367420 kubelet[2664]: I0715 04:47:13.367335 2664 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 15 04:47:13.367420 kubelet[2664]: I0715 04:47:13.367351 2664 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 15 04:47:13.367420 kubelet[2664]: I0715 04:47:13.367371 2664 state_mem.go:36] "Initialized new in-memory state store" Jul 15 04:47:13.367539 kubelet[2664]: I0715 04:47:13.367499 2664 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 15 04:47:13.367539 kubelet[2664]: I0715 04:47:13.367509 2664 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 15 04:47:13.367539 kubelet[2664]: I0715 04:47:13.367526 2664 policy_none.go:49] "None policy: Start" Jul 15 04:47:13.368391 kubelet[2664]: I0715 04:47:13.368370 2664 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 15 04:47:13.368499 kubelet[2664]: I0715 04:47:13.368398 2664 state_mem.go:35] "Initializing new in-memory state store" Jul 15 04:47:13.368553 kubelet[2664]: I0715 04:47:13.368538 2664 state_mem.go:75] "Updated machine memory state" Jul 15 04:47:13.372455 kubelet[2664]: I0715 04:47:13.372421 2664 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 15 04:47:13.372952 kubelet[2664]: I0715 04:47:13.372933 2664 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 15 04:47:13.373257 kubelet[2664]: I0715 04:47:13.373222 2664 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 15 04:47:13.373817 kubelet[2664]: I0715 04:47:13.373637 2664 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 15 04:47:13.475094 kubelet[2664]: I0715 04:47:13.475057 2664 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 15 04:47:13.487909 kubelet[2664]: I0715 04:47:13.487867 2664 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 15 04:47:13.488261 kubelet[2664]: I0715 04:47:13.488090 2664 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 15 04:47:13.623714 kubelet[2664]: I0715 04:47:13.623520 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:13.623714 kubelet[2664]: I0715 04:47:13.623559 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:13.623714 kubelet[2664]: I0715 04:47:13.623582 2664 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 15 04:47:13.623714 kubelet[2664]: I0715 04:47:13.623602 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:47:13.623714 kubelet[2664]: I0715 04:47:13.623647 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:47:13.624084 kubelet[2664]: I0715 04:47:13.623952 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:13.624084 kubelet[2664]: I0715 04:47:13.624001 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:13.624084 kubelet[2664]: I0715 04:47:13.624035 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:13.624084 kubelet[2664]: I0715 04:47:13.624054 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/053976a3b4f8563497a8a85e0c894dd8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"053976a3b4f8563497a8a85e0c894dd8\") " pod="kube-system/kube-apiserver-localhost" Jul 15 04:47:13.855284 sudo[2697]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 15 04:47:13.855558 sudo[2697]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 15 04:47:14.165709 sudo[2697]: pam_unix(sudo:session): session closed for user root Jul 15 04:47:14.315701 kubelet[2664]: I0715 04:47:14.315661 2664 apiserver.go:52] "Watching apiserver" Jul 15 04:47:14.322424 kubelet[2664]: I0715 04:47:14.322386 2664 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 15 04:47:14.368779 kubelet[2664]: E0715 04:47:14.367558 2664 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 15 04:47:14.368779 kubelet[2664]: E0715 04:47:14.367771 2664 
kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-localhost\" already exists" pod="kube-system/kube-controller-manager-localhost" Jul 15 04:47:14.386554 kubelet[2664]: I0715 04:47:14.386482 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.386463485 podStartE2EDuration="1.386463485s" podCreationTimestamp="2025-07-15 04:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:47:14.37926694 +0000 UTC m=+1.119998790" watchObservedRunningTime="2025-07-15 04:47:14.386463485 +0000 UTC m=+1.127195295" Jul 15 04:47:14.393610 kubelet[2664]: I0715 04:47:14.393474 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.393458272 podStartE2EDuration="1.393458272s" podCreationTimestamp="2025-07-15 04:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:47:14.386688656 +0000 UTC m=+1.127420545" watchObservedRunningTime="2025-07-15 04:47:14.393458272 +0000 UTC m=+1.134190122" Jul 15 04:47:14.393778 kubelet[2664]: I0715 04:47:14.393736 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.393716283 podStartE2EDuration="1.393716283s" podCreationTimestamp="2025-07-15 04:47:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:47:14.392869577 +0000 UTC m=+1.133601427" watchObservedRunningTime="2025-07-15 04:47:14.393716283 +0000 UTC m=+1.134448133" Jul 15 04:47:16.428951 sudo[1746]: pam_unix(sudo:session): session closed for user root Jul 15 04:47:16.430395 sshd[1745]: Connection closed by 10.0.0.1 port 49696 Jul 15 04:47:16.430774 sshd-session[1742]: pam_unix(sshd:session): session closed for user core Jul 15 04:47:16.434663 systemd[1]: sshd@6-10.0.0.81:22-10.0.0.1:49696.service: Deactivated successfully. Jul 15 04:47:16.437224 systemd[1]: session-7.scope: Deactivated successfully. Jul 15 04:47:16.437574 systemd[1]: session-7.scope: Consumed 7.199s CPU time, 259.8M memory peak. Jul 15 04:47:16.438851 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit. Jul 15 04:47:16.440392 systemd-logind[1507]: Removed session 7. Jul 15 04:47:19.536279 kubelet[2664]: I0715 04:47:19.536159 2664 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 15 04:47:19.536799 kubelet[2664]: I0715 04:47:19.536664 2664 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 15 04:47:19.536852 containerd[1523]: time="2025-07-15T04:47:19.536508114Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 15 04:47:20.595257 systemd[1]: Created slice kubepods-burstable-podb6f73f09_efff_4dd4_9d83_b0de6a2fe64c.slice - libcontainer container kubepods-burstable-podb6f73f09_efff_4dd4_9d83_b0de6a2fe64c.slice. Jul 15 04:47:20.603294 systemd[1]: Created slice kubepods-besteffort-pod2e862d2d_1f24_401e_a9cd_05a765f91267.slice - libcontainer container kubepods-besteffort-pod2e862d2d_1f24_401e_a9cd_05a765f91267.slice. 
Jul 15 04:47:20.670049 kubelet[2664]: I0715 04:47:20.669983 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2e862d2d-1f24-401e-a9cd-05a765f91267-xtables-lock\") pod \"kube-proxy-5pfc6\" (UID: \"2e862d2d-1f24-401e-a9cd-05a765f91267\") " pod="kube-system/kube-proxy-5pfc6" Jul 15 04:47:20.670049 kubelet[2664]: I0715 04:47:20.670049 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2e862d2d-1f24-401e-a9cd-05a765f91267-lib-modules\") pod \"kube-proxy-5pfc6\" (UID: \"2e862d2d-1f24-401e-a9cd-05a765f91267\") " pod="kube-system/kube-proxy-5pfc6" Jul 15 04:47:20.670429 kubelet[2664]: I0715 04:47:20.670069 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-run\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670429 kubelet[2664]: I0715 04:47:20.670086 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-cgroup\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670429 kubelet[2664]: I0715 04:47:20.670104 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-lib-modules\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670429 kubelet[2664]: I0715 04:47:20.670119 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-host-proc-sys-net\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670429 kubelet[2664]: I0715 04:47:20.670135 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-hubble-tls\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670429 kubelet[2664]: I0715 04:47:20.670149 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-etc-cni-netd\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670554 kubelet[2664]: I0715 04:47:20.670166 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-xtables-lock\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670554 kubelet[2664]: I0715 04:47:20.670181 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-clustermesh-secrets\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670554 kubelet[2664]: I0715 04:47:20.670245 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cni-path\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670554 kubelet[2664]: I0715 04:47:20.670264 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-host-proc-sys-kernel\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670554 kubelet[2664]: I0715 04:47:20.670290 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q6xxg\" (UniqueName: \"kubernetes.io/projected/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-kube-api-access-q6xxg\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670648 kubelet[2664]: I0715 04:47:20.670316 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-config-path\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670648 kubelet[2664]: I0715 04:47:20.670333 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2e862d2d-1f24-401e-a9cd-05a765f91267-kube-proxy\") pod \"kube-proxy-5pfc6\" (UID: \"2e862d2d-1f24-401e-a9cd-05a765f91267\") " pod="kube-system/kube-proxy-5pfc6" Jul 15 04:47:20.670648 kubelet[2664]: I0715 04:47:20.670347 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ln78n\" (UniqueName: \"kubernetes.io/projected/2e862d2d-1f24-401e-a9cd-05a765f91267-kube-api-access-ln78n\") pod \"kube-proxy-5pfc6\" (UID: \"2e862d2d-1f24-401e-a9cd-05a765f91267\") " pod="kube-system/kube-proxy-5pfc6" Jul 15 04:47:20.670648 kubelet[2664]: I0715 04:47:20.670566 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-bpf-maps\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.670648 kubelet[2664]: I0715 04:47:20.670586 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-hostproc\") pod \"cilium-j7bq4\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " pod="kube-system/cilium-j7bq4" Jul 15 04:47:20.690994 systemd[1]: Created slice kubepods-besteffort-pod8f934e2d_4122_4519_b637_21a6f4fbb090.slice - libcontainer container kubepods-besteffort-pod8f934e2d_4122_4519_b637_21a6f4fbb090.slice. 
Jul 15 04:47:20.771798 kubelet[2664]: I0715 04:47:20.771757 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f934e2d-4122-4519-b637-21a6f4fbb090-cilium-config-path\") pod \"cilium-operator-5d85765b45-5d55m\" (UID: \"8f934e2d-4122-4519-b637-21a6f4fbb090\") " pod="kube-system/cilium-operator-5d85765b45-5d55m" Jul 15 04:47:20.771902 kubelet[2664]: I0715 04:47:20.771821 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vq8m\" (UniqueName: \"kubernetes.io/projected/8f934e2d-4122-4519-b637-21a6f4fbb090-kube-api-access-5vq8m\") pod \"cilium-operator-5d85765b45-5d55m\" (UID: \"8f934e2d-4122-4519-b637-21a6f4fbb090\") " pod="kube-system/cilium-operator-5d85765b45-5d55m" Jul 15 04:47:20.901116 containerd[1523]: time="2025-07-15T04:47:20.901062854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7bq4,Uid:b6f73f09-efff-4dd4-9d83-b0de6a2fe64c,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:20.914968 containerd[1523]: time="2025-07-15T04:47:20.914838823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pfc6,Uid:2e862d2d-1f24-401e-a9cd-05a765f91267,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:20.947421 containerd[1523]: time="2025-07-15T04:47:20.947362480Z" level=info msg="connecting to shim c3a73fb67a7e4373b7f595b8f00274724a0e6d9bcdab49828181bbfd5e56ac9a" address="unix:///run/containerd/s/8522391d997e3a730d518474d620955794a5ede7b1a5b6ac2d46826a42f16289" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:20.948562 containerd[1523]: time="2025-07-15T04:47:20.948523297Z" level=info msg="connecting to shim 2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e" address="unix:///run/containerd/s/eff0769aee93e2f758b189e63046fad56edef5dc5799c32af7a16a4a7ae704b0" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:20.970916 systemd[1]: Started cri-containerd-c3a73fb67a7e4373b7f595b8f00274724a0e6d9bcdab49828181bbfd5e56ac9a.scope - libcontainer container c3a73fb67a7e4373b7f595b8f00274724a0e6d9bcdab49828181bbfd5e56ac9a. Jul 15 04:47:20.973923 systemd[1]: Started cri-containerd-2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e.scope - libcontainer container 2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e. 
Jul 15 04:47:20.997439 containerd[1523]: time="2025-07-15T04:47:20.997391557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5d55m,Uid:8f934e2d-4122-4519-b637-21a6f4fbb090,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:20.997888 containerd[1523]: time="2025-07-15T04:47:20.997818410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-5pfc6,Uid:2e862d2d-1f24-401e-a9cd-05a765f91267,Namespace:kube-system,Attempt:0,} returns sandbox id \"c3a73fb67a7e4373b7f595b8f00274724a0e6d9bcdab49828181bbfd5e56ac9a\"" Jul 15 04:47:21.001122 containerd[1523]: time="2025-07-15T04:47:21.001088292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-j7bq4,Uid:b6f73f09-efff-4dd4-9d83-b0de6a2fe64c,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\"" Jul 15 04:47:21.002806 containerd[1523]: time="2025-07-15T04:47:21.002648504Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 15 04:47:21.002974 containerd[1523]: time="2025-07-15T04:47:21.002785919Z" level=info msg="CreateContainer within sandbox \"c3a73fb67a7e4373b7f595b8f00274724a0e6d9bcdab49828181bbfd5e56ac9a\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 15 04:47:21.015093 containerd[1523]: time="2025-07-15T04:47:21.015051018Z" level=info msg="Container 0cd5d4b7eef9ee3eacb1be59661f6ccdcf5317c64dbdadff15f098fa02d8078a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:21.018858 containerd[1523]: time="2025-07-15T04:47:21.018815072Z" level=info msg="connecting to shim 2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec" address="unix:///run/containerd/s/69c4067c9df7d0ede5cbd55326d392aa285bf91cbf832c43460d723ea44d20f6" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:21.023488 containerd[1523]: time="2025-07-15T04:47:21.023445465Z" level=info msg="CreateContainer within sandbox \"c3a73fb67a7e4373b7f595b8f00274724a0e6d9bcdab49828181bbfd5e56ac9a\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0cd5d4b7eef9ee3eacb1be59661f6ccdcf5317c64dbdadff15f098fa02d8078a\"" Jul 15 04:47:21.024678 containerd[1523]: time="2025-07-15T04:47:21.024563613Z" level=info msg="StartContainer for \"0cd5d4b7eef9ee3eacb1be59661f6ccdcf5317c64dbdadff15f098fa02d8078a\"" Jul 15 04:47:21.026152 containerd[1523]: time="2025-07-15T04:47:21.026104240Z" level=info msg="connecting to shim 0cd5d4b7eef9ee3eacb1be59661f6ccdcf5317c64dbdadff15f098fa02d8078a" address="unix:///run/containerd/s/8522391d997e3a730d518474d620955794a5ede7b1a5b6ac2d46826a42f16289" protocol=ttrpc version=3 Jul 15 04:47:21.042908 systemd[1]: Started cri-containerd-2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec.scope - libcontainer container 2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec. Jul 15 04:47:21.045960 systemd[1]: Started cri-containerd-0cd5d4b7eef9ee3eacb1be59661f6ccdcf5317c64dbdadff15f098fa02d8078a.scope - libcontainer container 0cd5d4b7eef9ee3eacb1be59661f6ccdcf5317c64dbdadff15f098fa02d8078a. 
Jul 15 04:47:21.085507 containerd[1523]: time="2025-07-15T04:47:21.084554086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-5d55m,Uid:8f934e2d-4122-4519-b637-21a6f4fbb090,Namespace:kube-system,Attempt:0,} returns sandbox id \"2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec\"" Jul 15 04:47:21.087134 containerd[1523]: time="2025-07-15T04:47:21.087093591Z" level=info msg="StartContainer for \"0cd5d4b7eef9ee3eacb1be59661f6ccdcf5317c64dbdadff15f098fa02d8078a\" returns successfully" Jul 15 04:47:21.390282 kubelet[2664]: I0715 04:47:21.390182 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-5pfc6" podStartSLOduration=1.3901649360000001 podStartE2EDuration="1.390164936s" podCreationTimestamp="2025-07-15 04:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:47:21.390104622 +0000 UTC m=+8.130836512" watchObservedRunningTime="2025-07-15 04:47:21.390164936 +0000 UTC m=+8.130896786" Jul 15 04:47:29.980276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount980700720.mount: Deactivated successfully. Jul 15 04:47:30.537535 update_engine[1512]: I20250715 04:47:30.537472 1512 update_attempter.cc:509] Updating boot flags... Jul 15 04:47:31.387575 containerd[1523]: time="2025-07-15T04:47:31.387514950Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 15 04:47:31.389990 containerd[1523]: time="2025-07-15T04:47:31.389850338Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 10.38716362s" Jul 15 04:47:31.389990 containerd[1523]: time="2025-07-15T04:47:31.389888083Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 15 04:47:31.400034 containerd[1523]: time="2025-07-15T04:47:31.399807482Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 15 04:47:31.402826 containerd[1523]: time="2025-07-15T04:47:31.402774177Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:31.403647 containerd[1523]: time="2025-07-15T04:47:31.403614762Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:31.415051 containerd[1523]: time="2025-07-15T04:47:31.415001056Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 04:47:31.421557 containerd[1523]: time="2025-07-15T04:47:31.421513095Z" level=info msg="Container 
5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:31.424534 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2272051957.mount: Deactivated successfully. Jul 15 04:47:31.428234 containerd[1523]: time="2025-07-15T04:47:31.428185231Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\"" Jul 15 04:47:31.428952 containerd[1523]: time="2025-07-15T04:47:31.428917659Z" level=info msg="StartContainer for \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\"" Jul 15 04:47:31.429921 containerd[1523]: time="2025-07-15T04:47:31.429757404Z" level=info msg="connecting to shim 5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407" address="unix:///run/containerd/s/eff0769aee93e2f758b189e63046fad56edef5dc5799c32af7a16a4a7ae704b0" protocol=ttrpc version=3 Jul 15 04:47:31.489894 systemd[1]: Started cri-containerd-5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407.scope - libcontainer container 5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407. Jul 15 04:47:31.523164 containerd[1523]: time="2025-07-15T04:47:31.523053633Z" level=info msg="StartContainer for \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\" returns successfully" Jul 15 04:47:31.629219 systemd[1]: cri-containerd-5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407.scope: Deactivated successfully. Jul 15 04:47:31.669069 containerd[1523]: time="2025-07-15T04:47:31.668547220Z" level=info msg="received exit event container_id:\"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\" id:\"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\" pid:3103 exited_at:{seconds:1752554851 nanos:656809867}" Jul 15 04:47:31.669069 containerd[1523]: time="2025-07-15T04:47:31.668653658Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\" id:\"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\" pid:3103 exited_at:{seconds:1752554851 nanos:656809867}" Jul 15 04:47:31.714404 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407-rootfs.mount: Deactivated successfully. 
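mount-cgroup is the first of the Cilium init containers; it starts, does its work, and exits, and containerd reports the exit as an epoch timestamp (exited_at seconds/nanos) rather than wall-clock time. A quick, self-contained check (values copied from the TaskExit event above) that the epoch agrees with the surrounding 04:47:31 journal timestamps:

from datetime import datetime, timezone

# exited_at from the TaskExit event for 5721462d3c3f... above
secs, nanos = 1752554851, 656809867
print(datetime.fromtimestamp(secs + nanos / 1e9, tz=timezone.utc))
# -> 2025-07-15 04:47:31.656810+00:00, i.e. the moment mount-cgroup exited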
Jul 15 04:47:32.405033 containerd[1523]: time="2025-07-15T04:47:32.404985338Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 04:47:32.415495 containerd[1523]: time="2025-07-15T04:47:32.415041294Z" level=info msg="Container 64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:32.439101 containerd[1523]: time="2025-07-15T04:47:32.439049148Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\"" Jul 15 04:47:32.439921 containerd[1523]: time="2025-07-15T04:47:32.439761881Z" level=info msg="StartContainer for \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\"" Jul 15 04:47:32.440527 containerd[1523]: time="2025-07-15T04:47:32.440499805Z" level=info msg="connecting to shim 64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c" address="unix:///run/containerd/s/eff0769aee93e2f758b189e63046fad56edef5dc5799c32af7a16a4a7ae704b0" protocol=ttrpc version=3 Jul 15 04:47:32.474894 systemd[1]: Started cri-containerd-64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c.scope - libcontainer container 64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c. Jul 15 04:47:32.504431 containerd[1523]: time="2025-07-15T04:47:32.504376695Z" level=info msg="StartContainer for \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\" returns successfully" Jul 15 04:47:32.525356 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 15 04:47:32.526323 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 15 04:47:32.526586 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:47:32.528323 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 15 04:47:32.529666 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 15 04:47:32.530016 containerd[1523]: time="2025-07-15T04:47:32.529907258Z" level=info msg="received exit event container_id:\"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\" id:\"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\" pid:3158 exited_at:{seconds:1752554852 nanos:529162457}" Jul 15 04:47:32.530124 systemd[1]: cri-containerd-64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c.scope: Deactivated successfully. Jul 15 04:47:32.531090 containerd[1523]: time="2025-07-15T04:47:32.531058907Z" level=info msg="TaskExit event in podsandbox handler container_id:\"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\" id:\"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\" pid:3158 exited_at:{seconds:1752554852 nanos:529162457}" Jul 15 04:47:32.547801 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c-rootfs.mount: Deactivated successfully. Jul 15 04:47:32.569884 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 15 04:47:32.805973 containerd[1523]: time="2025-07-15T04:47:32.805862725Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:32.807091 containerd[1523]: time="2025-07-15T04:47:32.807056518Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306" Jul 15 04:47:32.808051 containerd[1523]: time="2025-07-15T04:47:32.807969696Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 15 04:47:32.809497 containerd[1523]: time="2025-07-15T04:47:32.809355417Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 1.409506591s" Jul 15 04:47:32.809497 containerd[1523]: time="2025-07-15T04:47:32.809395042Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Jul 15 04:47:32.812256 containerd[1523]: time="2025-07-15T04:47:32.811861999Z" level=info msg="CreateContainer within sandbox \"2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 15 04:47:32.837791 containerd[1523]: time="2025-07-15T04:47:32.837446222Z" level=info msg="Container d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:32.842211 containerd[1523]: time="2025-07-15T04:47:32.842166655Z" level=info msg="CreateContainer within sandbox \"2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\"" Jul 15 04:47:32.842864 containerd[1523]: time="2025-07-15T04:47:32.842827048Z" level=info msg="StartContainer for \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\"" Jul 15 04:47:32.843626 containerd[1523]: time="2025-07-15T04:47:32.843601278Z" level=info msg="connecting to shim d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77" address="unix:///run/containerd/s/69c4067c9df7d0ede5cbd55326d392aa285bf91cbf832c43460d723ea44d20f6" protocol=ttrpc version=3 Jul 15 04:47:32.861910 systemd[1]: Started cri-containerd-d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77.scope - libcontainer container d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77. 
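As a rough cross-check on the cilium-operator image pull logged above, the bytes-read and duration figures it quotes work out to roughly 12 MB/s. A quick sketch using only those two numbers, with nothing else assumed:

    # figures quoted in the cilium-operator pull entries above
    bytes_read = 17_135_306        # "active requests=0, bytes read=17135306"
    pull_seconds = 1.409506591     # "... in 1.409506591s"

    rate = bytes_read / pull_seconds
    print(f"{rate / 1e6:.1f} MB/s, {rate / 2**20:.1f} MiB/s")  # roughly 12.2 MB/s / 11.6 MiB/s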
Jul 15 04:47:32.901870 containerd[1523]: time="2025-07-15T04:47:32.901823525Z" level=info msg="StartContainer for \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" returns successfully" Jul 15 04:47:33.414055 containerd[1523]: time="2025-07-15T04:47:33.413987863Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 04:47:33.425421 kubelet[2664]: I0715 04:47:33.425350 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-5d55m" podStartSLOduration=1.7024846519999999 podStartE2EDuration="13.425331123s" podCreationTimestamp="2025-07-15 04:47:20 +0000 UTC" firstStartedPulling="2025-07-15 04:47:21.087361707 +0000 UTC m=+7.828093557" lastFinishedPulling="2025-07-15 04:47:32.810208218 +0000 UTC m=+19.550940028" observedRunningTime="2025-07-15 04:47:33.42519653 +0000 UTC m=+20.165928380" watchObservedRunningTime="2025-07-15 04:47:33.425331123 +0000 UTC m=+20.166062973" Jul 15 04:47:33.553471 containerd[1523]: time="2025-07-15T04:47:33.553427932Z" level=info msg="Container de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:33.562635 containerd[1523]: time="2025-07-15T04:47:33.562574963Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\"" Jul 15 04:47:33.563911 containerd[1523]: time="2025-07-15T04:47:33.563881704Z" level=info msg="StartContainer for \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\"" Jul 15 04:47:33.566212 containerd[1523]: time="2025-07-15T04:47:33.566166262Z" level=info msg="connecting to shim de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8" address="unix:///run/containerd/s/eff0769aee93e2f758b189e63046fad56edef5dc5799c32af7a16a4a7ae704b0" protocol=ttrpc version=3 Jul 15 04:47:33.595868 systemd[1]: Started cri-containerd-de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8.scope - libcontainer container de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8. Jul 15 04:47:33.643516 containerd[1523]: time="2025-07-15T04:47:33.640905316Z" level=info msg="StartContainer for \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\" returns successfully" Jul 15 04:47:33.666074 systemd[1]: cri-containerd-de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8.scope: Deactivated successfully. 
Jul 15 04:47:33.669119 containerd[1523]: time="2025-07-15T04:47:33.669066154Z" level=info msg="TaskExit event in podsandbox handler container_id:\"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\" id:\"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\" pid:3251 exited_at:{seconds:1752554853 nanos:668478920}" Jul 15 04:47:33.669119 containerd[1523]: time="2025-07-15T04:47:33.669067193Z" level=info msg="received exit event container_id:\"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\" id:\"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\" pid:3251 exited_at:{seconds:1752554853 nanos:668478920}" Jul 15 04:47:33.709818 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8-rootfs.mount: Deactivated successfully. Jul 15 04:47:34.422259 containerd[1523]: time="2025-07-15T04:47:34.420832008Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 04:47:34.467310 containerd[1523]: time="2025-07-15T04:47:34.467263733Z" level=info msg="Container a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:34.468065 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1215316696.mount: Deactivated successfully. Jul 15 04:47:34.473760 containerd[1523]: time="2025-07-15T04:47:34.473705174Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\"" Jul 15 04:47:34.474303 containerd[1523]: time="2025-07-15T04:47:34.474263910Z" level=info msg="StartContainer for \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\"" Jul 15 04:47:34.475363 containerd[1523]: time="2025-07-15T04:47:34.475325961Z" level=info msg="connecting to shim a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa" address="unix:///run/containerd/s/eff0769aee93e2f758b189e63046fad56edef5dc5799c32af7a16a4a7ae704b0" protocol=ttrpc version=3 Jul 15 04:47:34.495906 systemd[1]: Started cri-containerd-a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa.scope - libcontainer container a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa. Jul 15 04:47:34.517270 systemd[1]: cri-containerd-a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa.scope: Deactivated successfully. 
Jul 15 04:47:34.520905 containerd[1523]: time="2025-07-15T04:47:34.520849105Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\" id:\"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\" pid:3290 exited_at:{seconds:1752554854 nanos:520492023}" Jul 15 04:47:34.525740 containerd[1523]: time="2025-07-15T04:47:34.525293483Z" level=info msg="received exit event container_id:\"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\" id:\"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\" pid:3290 exited_at:{seconds:1752554854 nanos:520492023}" Jul 15 04:47:34.529859 containerd[1523]: time="2025-07-15T04:47:34.529831470Z" level=info msg="StartContainer for \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\" returns successfully" Jul 15 04:47:34.546098 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa-rootfs.mount: Deactivated successfully. Jul 15 04:47:35.424060 containerd[1523]: time="2025-07-15T04:47:35.423917620Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 04:47:35.443995 containerd[1523]: time="2025-07-15T04:47:35.443945283Z" level=info msg="Container 5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:35.450491 containerd[1523]: time="2025-07-15T04:47:35.450446118Z" level=info msg="CreateContainer within sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\"" Jul 15 04:47:35.452182 containerd[1523]: time="2025-07-15T04:47:35.451039535Z" level=info msg="StartContainer for \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\"" Jul 15 04:47:35.452182 containerd[1523]: time="2025-07-15T04:47:35.452072057Z" level=info msg="connecting to shim 5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf" address="unix:///run/containerd/s/eff0769aee93e2f758b189e63046fad56edef5dc5799c32af7a16a4a7ae704b0" protocol=ttrpc version=3 Jul 15 04:47:35.477931 systemd[1]: Started cri-containerd-5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf.scope - libcontainer container 5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf. Jul 15 04:47:35.509058 containerd[1523]: time="2025-07-15T04:47:35.509002099Z" level=info msg="StartContainer for \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" returns successfully" Jul 15 04:47:35.630145 containerd[1523]: time="2025-07-15T04:47:35.630108269Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" id:\"1c00f8f49e8941f20db96fb0b5987bd882dc9f23d9bca743b9431afbd0bd4520\" pid:3356 exited_at:{seconds:1752554855 nanos:626166565}" Jul 15 04:47:35.702252 kubelet[2664]: I0715 04:47:35.702149 2664 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 15 04:47:35.769935 systemd[1]: Created slice kubepods-burstable-pod0de445df_848a_4801_97c3_d73e631a0572.slice - libcontainer container kubepods-burstable-pod0de445df_848a_4801_97c3_d73e631a0572.slice. 
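Each "connecting to shim" entry above carries the shim's ttrpc socket under /run/containerd/s/. Grouping those entries by socket shows every container of the cilium pod (mount-cgroup through cilium-agent) reusing one socket while cilium-operator gets its own, consistent with containerd running one shim per pod sandbox. An illustrative Python sketch over abbreviated copies of those messages (the regex is an assumption about the message layout, not containerd code):

    import re
    from collections import defaultdict

    # abbreviated copies of the "connecting to shim" messages above (ids and socket
    # names shortened here for readability; the full values appear in the journal)
    lines = [
        'msg="connecting to shim 5721462d3c3f" address="unix:///run/containerd/s/eff0769aee93" protocol=ttrpc version=3',
        'msg="connecting to shim 64c01b0f8202" address="unix:///run/containerd/s/eff0769aee93" protocol=ttrpc version=3',
        'msg="connecting to shim 5ea8ae611b10" address="unix:///run/containerd/s/eff0769aee93" protocol=ttrpc version=3',
        'msg="connecting to shim d006dd53652d" address="unix:///run/containerd/s/69c4067c9df7" protocol=ttrpc version=3',
    ]

    shim_re = re.compile(r'connecting to shim (\S+)" address="(\S+)"')
    by_socket = defaultdict(list)
    for line in lines:
        m = shim_re.search(line)
        if m:
            by_socket[m.group(2)].append(m.group(1))

    for socket, containers in sorted(by_socket.items()):
        print(socket, "->", containers)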
Jul 15 04:47:35.771262 kubelet[2664]: I0715 04:47:35.770137 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6d86fc0b-3973-4f73-b3c1-df82fabccbdb-config-volume\") pod \"coredns-7c65d6cfc9-8gbtv\" (UID: \"6d86fc0b-3973-4f73-b3c1-df82fabccbdb\") " pod="kube-system/coredns-7c65d6cfc9-8gbtv" Jul 15 04:47:35.771262 kubelet[2664]: I0715 04:47:35.770171 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wfs6b\" (UniqueName: \"kubernetes.io/projected/0de445df-848a-4801-97c3-d73e631a0572-kube-api-access-wfs6b\") pod \"coredns-7c65d6cfc9-dmfrh\" (UID: \"0de445df-848a-4801-97c3-d73e631a0572\") " pod="kube-system/coredns-7c65d6cfc9-dmfrh" Jul 15 04:47:35.771262 kubelet[2664]: I0715 04:47:35.770190 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0de445df-848a-4801-97c3-d73e631a0572-config-volume\") pod \"coredns-7c65d6cfc9-dmfrh\" (UID: \"0de445df-848a-4801-97c3-d73e631a0572\") " pod="kube-system/coredns-7c65d6cfc9-dmfrh" Jul 15 04:47:35.771262 kubelet[2664]: I0715 04:47:35.770205 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4f9gq\" (UniqueName: \"kubernetes.io/projected/6d86fc0b-3973-4f73-b3c1-df82fabccbdb-kube-api-access-4f9gq\") pod \"coredns-7c65d6cfc9-8gbtv\" (UID: \"6d86fc0b-3973-4f73-b3c1-df82fabccbdb\") " pod="kube-system/coredns-7c65d6cfc9-8gbtv" Jul 15 04:47:35.782184 systemd[1]: Created slice kubepods-burstable-pod6d86fc0b_3973_4f73_b3c1_df82fabccbdb.slice - libcontainer container kubepods-burstable-pod6d86fc0b_3973_4f73_b3c1_df82fabccbdb.slice. 
Jul 15 04:47:36.075706 containerd[1523]: time="2025-07-15T04:47:36.075588865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmfrh,Uid:0de445df-848a-4801-97c3-d73e631a0572,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:36.087480 containerd[1523]: time="2025-07-15T04:47:36.087428802Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8gbtv,Uid:6d86fc0b-3973-4f73-b3c1-df82fabccbdb,Namespace:kube-system,Attempt:0,}" Jul 15 04:47:36.450273 kubelet[2664]: I0715 04:47:36.450143 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-j7bq4" podStartSLOduration=6.053148505 podStartE2EDuration="16.450124019s" podCreationTimestamp="2025-07-15 04:47:20 +0000 UTC" firstStartedPulling="2025-07-15 04:47:21.002266834 +0000 UTC m=+7.742998684" lastFinishedPulling="2025-07-15 04:47:31.399242348 +0000 UTC m=+18.139974198" observedRunningTime="2025-07-15 04:47:36.450024047 +0000 UTC m=+23.190755897" watchObservedRunningTime="2025-07-15 04:47:36.450124019 +0000 UTC m=+23.190855869" Jul 15 04:47:37.778461 systemd-networkd[1431]: cilium_host: Link UP Jul 15 04:47:37.778567 systemd-networkd[1431]: cilium_net: Link UP Jul 15 04:47:37.778678 systemd-networkd[1431]: cilium_net: Gained carrier Jul 15 04:47:37.778811 systemd-networkd[1431]: cilium_host: Gained carrier Jul 15 04:47:37.856851 systemd-networkd[1431]: cilium_net: Gained IPv6LL Jul 15 04:47:37.875089 systemd-networkd[1431]: cilium_vxlan: Link UP Jul 15 04:47:37.875094 systemd-networkd[1431]: cilium_vxlan: Gained carrier Jul 15 04:47:38.089988 systemd-networkd[1431]: cilium_host: Gained IPv6LL Jul 15 04:47:38.178762 kernel: NET: Registered PF_ALG protocol family Jul 15 04:47:38.784692 systemd-networkd[1431]: lxc_health: Link UP Jul 15 04:47:38.786197 systemd-networkd[1431]: lxc_health: Gained carrier Jul 15 04:47:39.240756 kernel: eth0: renamed from tmpec6e9 Jul 15 04:47:39.244857 systemd-networkd[1431]: lxc01c2e68bc893: Link UP Jul 15 04:47:39.245131 systemd-networkd[1431]: lxc4d676d278617: Link UP Jul 15 04:47:39.252768 kernel: eth0: renamed from tmpafb76 Jul 15 04:47:39.254279 systemd-networkd[1431]: lxc4d676d278617: Gained carrier Jul 15 04:47:39.255992 systemd-networkd[1431]: lxc01c2e68bc893: Gained carrier Jul 15 04:47:39.304907 systemd-networkd[1431]: cilium_vxlan: Gained IPv6LL Jul 15 04:47:40.460841 systemd-networkd[1431]: lxc_health: Gained IPv6LL Jul 15 04:47:40.584877 systemd-networkd[1431]: lxc01c2e68bc893: Gained IPv6LL Jul 15 04:47:40.585634 systemd-networkd[1431]: lxc4d676d278617: Gained IPv6LL Jul 15 04:47:42.775524 systemd[1]: Started sshd@7-10.0.0.81:22-10.0.0.1:55260.service - OpenSSH per-connection server daemon (10.0.0.1:55260). Jul 15 04:47:42.839345 sshd[3835]: Accepted publickey for core from 10.0.0.1 port 55260 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:47:42.840708 sshd-session[3835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:47:42.846101 systemd-logind[1507]: New session 8 of user core. Jul 15 04:47:42.857962 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 15 04:47:43.016351 sshd[3838]: Connection closed by 10.0.0.1 port 55260 Jul 15 04:47:43.017150 sshd-session[3835]: pam_unix(sshd:session): session closed for user core Jul 15 04:47:43.023746 systemd[1]: sshd@7-10.0.0.81:22-10.0.0.1:55260.service: Deactivated successfully. Jul 15 04:47:43.026274 systemd[1]: session-8.scope: Deactivated successfully. 
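The pod_startup_latency_tracker entry for cilium-j7bq4 above reports both podStartE2EDuration and podStartSLOduration; the quoted figures are consistent with the SLO value being the end-to-end time minus the image-pull window, a reading supported here only by the arithmetic. A quick check in Python with the numbers copied from that entry:

    from decimal import Decimal

    # all three figures are quoted verbatim in the kubelet entry above; both pull
    # timestamps fall inside minute 04:47, so only their seconds fields are needed
    first_started_pulling = Decimal("21.002266834")   # 04:47:21.002266834
    last_finished_pulling = Decimal("31.399242348")   # 04:47:31.399242348
    e2e_duration          = Decimal("16.450124019")   # podStartE2EDuration

    slo = e2e_duration - (last_finished_pulling - first_started_pulling)
    print(slo)  # 6.053148505 -- matches podStartSLOduration in the same entry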
Jul 15 04:47:43.029141 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit. Jul 15 04:47:43.032212 systemd-logind[1507]: Removed session 8. Jul 15 04:47:43.085667 containerd[1523]: time="2025-07-15T04:47:43.085276904Z" level=info msg="connecting to shim afb76421f5ac890e3179cb5c5f3f24aba1aa1271fc90475605ecf2276fc76ad6" address="unix:///run/containerd/s/64919c93ba4889f45e57a85443873d3695ae57c8dd25e26c3803dcb84b735f20" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:43.086476 containerd[1523]: time="2025-07-15T04:47:43.086428173Z" level=info msg="connecting to shim ec6e9628ee0e8857a10909a07528e355ae0f8601fa4f6a7b7d620530900ef2e5" address="unix:///run/containerd/s/527f8cd6ef4fa0bcf6def7d49d3b38b9b978868b3e2b9e1c829985d6f30a4dce" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:47:43.117180 systemd[1]: Started cri-containerd-afb76421f5ac890e3179cb5c5f3f24aba1aa1271fc90475605ecf2276fc76ad6.scope - libcontainer container afb76421f5ac890e3179cb5c5f3f24aba1aa1271fc90475605ecf2276fc76ad6. Jul 15 04:47:43.122004 systemd[1]: Started cri-containerd-ec6e9628ee0e8857a10909a07528e355ae0f8601fa4f6a7b7d620530900ef2e5.scope - libcontainer container ec6e9628ee0e8857a10909a07528e355ae0f8601fa4f6a7b7d620530900ef2e5. Jul 15 04:47:43.135013 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:43.139595 systemd-resolved[1352]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 15 04:47:43.171500 containerd[1523]: time="2025-07-15T04:47:43.171458086Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dmfrh,Uid:0de445df-848a-4801-97c3-d73e631a0572,Namespace:kube-system,Attempt:0,} returns sandbox id \"ec6e9628ee0e8857a10909a07528e355ae0f8601fa4f6a7b7d620530900ef2e5\"" Jul 15 04:47:43.187315 containerd[1523]: time="2025-07-15T04:47:43.185452271Z" level=info msg="CreateContainer within sandbox \"ec6e9628ee0e8857a10909a07528e355ae0f8601fa4f6a7b7d620530900ef2e5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:47:43.190099 containerd[1523]: time="2025-07-15T04:47:43.189979758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-8gbtv,Uid:6d86fc0b-3973-4f73-b3c1-df82fabccbdb,Namespace:kube-system,Attempt:0,} returns sandbox id \"afb76421f5ac890e3179cb5c5f3f24aba1aa1271fc90475605ecf2276fc76ad6\"" Jul 15 04:47:43.192499 containerd[1523]: time="2025-07-15T04:47:43.192466141Z" level=info msg="CreateContainer within sandbox \"afb76421f5ac890e3179cb5c5f3f24aba1aa1271fc90475605ecf2276fc76ad6\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 15 04:47:43.202668 containerd[1523]: time="2025-07-15T04:47:43.202611554Z" level=info msg="Container c96ee4abeb444f4699d20d081d191802d3f2346502c67cd468c4b32f861e5168: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:43.204668 containerd[1523]: time="2025-07-15T04:47:43.204621144Z" level=info msg="Container fad1b02e7a30854fb69fe81374a4df8ba408b887eb38a10ee8f87559224f47f8: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:47:43.209189 containerd[1523]: time="2025-07-15T04:47:43.209136473Z" level=info msg="CreateContainer within sandbox \"ec6e9628ee0e8857a10909a07528e355ae0f8601fa4f6a7b7d620530900ef2e5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c96ee4abeb444f4699d20d081d191802d3f2346502c67cd468c4b32f861e5168\"" Jul 15 04:47:43.210480 containerd[1523]: time="2025-07-15T04:47:43.210442553Z" level=info 
msg="StartContainer for \"c96ee4abeb444f4699d20d081d191802d3f2346502c67cd468c4b32f861e5168\"" Jul 15 04:47:43.213158 containerd[1523]: time="2025-07-15T04:47:43.212989844Z" level=info msg="connecting to shim c96ee4abeb444f4699d20d081d191802d3f2346502c67cd468c4b32f861e5168" address="unix:///run/containerd/s/527f8cd6ef4fa0bcf6def7d49d3b38b9b978868b3e2b9e1c829985d6f30a4dce" protocol=ttrpc version=3 Jul 15 04:47:43.215570 containerd[1523]: time="2025-07-15T04:47:43.215522258Z" level=info msg="CreateContainer within sandbox \"afb76421f5ac890e3179cb5c5f3f24aba1aa1271fc90475605ecf2276fc76ad6\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"fad1b02e7a30854fb69fe81374a4df8ba408b887eb38a10ee8f87559224f47f8\"" Jul 15 04:47:43.217402 containerd[1523]: time="2025-07-15T04:47:43.217166996Z" level=info msg="StartContainer for \"fad1b02e7a30854fb69fe81374a4df8ba408b887eb38a10ee8f87559224f47f8\"" Jul 15 04:47:43.218117 containerd[1523]: time="2025-07-15T04:47:43.218085587Z" level=info msg="connecting to shim fad1b02e7a30854fb69fe81374a4df8ba408b887eb38a10ee8f87559224f47f8" address="unix:///run/containerd/s/64919c93ba4889f45e57a85443873d3695ae57c8dd25e26c3803dcb84b735f20" protocol=ttrpc version=3 Jul 15 04:47:43.239077 systemd[1]: Started cri-containerd-c96ee4abeb444f4699d20d081d191802d3f2346502c67cd468c4b32f861e5168.scope - libcontainer container c96ee4abeb444f4699d20d081d191802d3f2346502c67cd468c4b32f861e5168. Jul 15 04:47:43.249967 systemd[1]: Started cri-containerd-fad1b02e7a30854fb69fe81374a4df8ba408b887eb38a10ee8f87559224f47f8.scope - libcontainer container fad1b02e7a30854fb69fe81374a4df8ba408b887eb38a10ee8f87559224f47f8. Jul 15 04:47:43.281461 containerd[1523]: time="2025-07-15T04:47:43.281317871Z" level=info msg="StartContainer for \"c96ee4abeb444f4699d20d081d191802d3f2346502c67cd468c4b32f861e5168\" returns successfully" Jul 15 04:47:43.287708 containerd[1523]: time="2025-07-15T04:47:43.287630190Z" level=info msg="StartContainer for \"fad1b02e7a30854fb69fe81374a4df8ba408b887eb38a10ee8f87559224f47f8\" returns successfully" Jul 15 04:47:43.539163 kubelet[2664]: I0715 04:47:43.538955 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dmfrh" podStartSLOduration=23.538933708 podStartE2EDuration="23.538933708s" podCreationTimestamp="2025-07-15 04:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:47:43.523169129 +0000 UTC m=+30.263900979" watchObservedRunningTime="2025-07-15 04:47:43.538933708 +0000 UTC m=+30.279665518" Jul 15 04:47:43.540455 kubelet[2664]: I0715 04:47:43.539447 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-8gbtv" podStartSLOduration=23.539437975 podStartE2EDuration="23.539437975s" podCreationTimestamp="2025-07-15 04:47:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:47:43.53914211 +0000 UTC m=+30.279873960" watchObservedRunningTime="2025-07-15 04:47:43.539437975 +0000 UTC m=+30.280169825" Jul 15 04:47:48.035668 systemd[1]: Started sshd@8-10.0.0.81:22-10.0.0.1:55282.service - OpenSSH per-connection server daemon (10.0.0.1:55282). 
Jul 15 04:47:48.099891 sshd[4028]: Accepted publickey for core from 10.0.0.1 port 55282 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:47:48.101292 sshd-session[4028]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:47:48.105337 systemd-logind[1507]: New session 9 of user core. Jul 15 04:47:48.115911 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 15 04:47:48.230553 sshd[4031]: Connection closed by 10.0.0.1 port 55282 Jul 15 04:47:48.231044 sshd-session[4028]: pam_unix(sshd:session): session closed for user core Jul 15 04:47:48.234699 systemd[1]: sshd@8-10.0.0.81:22-10.0.0.1:55282.service: Deactivated successfully. Jul 15 04:47:48.236561 systemd[1]: session-9.scope: Deactivated successfully. Jul 15 04:47:48.237358 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit. Jul 15 04:47:48.238906 systemd-logind[1507]: Removed session 9. Jul 15 04:47:53.246255 systemd[1]: Started sshd@9-10.0.0.81:22-10.0.0.1:56360.service - OpenSSH per-connection server daemon (10.0.0.1:56360). Jul 15 04:47:53.309428 sshd[4049]: Accepted publickey for core from 10.0.0.1 port 56360 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:47:53.310494 sshd-session[4049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:47:53.314814 systemd-logind[1507]: New session 10 of user core. Jul 15 04:47:53.323051 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 15 04:47:53.459476 sshd[4052]: Connection closed by 10.0.0.1 port 56360 Jul 15 04:47:53.460332 sshd-session[4049]: pam_unix(sshd:session): session closed for user core Jul 15 04:47:53.463869 systemd[1]: sshd@9-10.0.0.81:22-10.0.0.1:56360.service: Deactivated successfully. Jul 15 04:47:53.466122 systemd[1]: session-10.scope: Deactivated successfully. Jul 15 04:47:53.467651 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit. Jul 15 04:47:53.469343 systemd-logind[1507]: Removed session 10. Jul 15 04:47:58.478707 systemd[1]: Started sshd@10-10.0.0.81:22-10.0.0.1:56378.service - OpenSSH per-connection server daemon (10.0.0.1:56378). Jul 15 04:47:58.545172 sshd[4067]: Accepted publickey for core from 10.0.0.1 port 56378 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:47:58.547079 sshd-session[4067]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:47:58.552554 systemd-logind[1507]: New session 11 of user core. Jul 15 04:47:58.562877 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 15 04:47:58.681387 sshd[4070]: Connection closed by 10.0.0.1 port 56378 Jul 15 04:47:58.681924 sshd-session[4067]: pam_unix(sshd:session): session closed for user core Jul 15 04:47:58.694084 systemd[1]: sshd@10-10.0.0.81:22-10.0.0.1:56378.service: Deactivated successfully. Jul 15 04:47:58.696859 systemd[1]: session-11.scope: Deactivated successfully. Jul 15 04:47:58.697607 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit. Jul 15 04:47:58.700214 systemd[1]: Started sshd@11-10.0.0.81:22-10.0.0.1:56380.service - OpenSSH per-connection server daemon (10.0.0.1:56380). Jul 15 04:47:58.701250 systemd-logind[1507]: Removed session 11. 
Jul 15 04:47:58.760203 sshd[4084]: Accepted publickey for core from 10.0.0.1 port 56380 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:47:58.761613 sshd-session[4084]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:47:58.766668 systemd-logind[1507]: New session 12 of user core. Jul 15 04:47:58.781896 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 15 04:47:58.949939 sshd[4087]: Connection closed by 10.0.0.1 port 56380 Jul 15 04:47:58.950448 sshd-session[4084]: pam_unix(sshd:session): session closed for user core Jul 15 04:47:58.963523 systemd[1]: sshd@11-10.0.0.81:22-10.0.0.1:56380.service: Deactivated successfully. Jul 15 04:47:58.967663 systemd[1]: session-12.scope: Deactivated successfully. Jul 15 04:47:58.969884 systemd-logind[1507]: Session 12 logged out. Waiting for processes to exit. Jul 15 04:47:58.971567 systemd[1]: Started sshd@12-10.0.0.81:22-10.0.0.1:56406.service - OpenSSH per-connection server daemon (10.0.0.1:56406). Jul 15 04:47:58.977150 systemd-logind[1507]: Removed session 12. Jul 15 04:47:59.045163 sshd[4099]: Accepted publickey for core from 10.0.0.1 port 56406 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:47:59.046287 sshd-session[4099]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:47:59.050781 systemd-logind[1507]: New session 13 of user core. Jul 15 04:47:59.060909 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 15 04:47:59.170707 sshd[4102]: Connection closed by 10.0.0.1 port 56406 Jul 15 04:47:59.171640 sshd-session[4099]: pam_unix(sshd:session): session closed for user core Jul 15 04:47:59.175669 systemd[1]: sshd@12-10.0.0.81:22-10.0.0.1:56406.service: Deactivated successfully. Jul 15 04:47:59.177601 systemd[1]: session-13.scope: Deactivated successfully. Jul 15 04:47:59.180359 systemd-logind[1507]: Session 13 logged out. Waiting for processes to exit. Jul 15 04:47:59.181699 systemd-logind[1507]: Removed session 13. Jul 15 04:48:04.185941 systemd[1]: Started sshd@13-10.0.0.81:22-10.0.0.1:55192.service - OpenSSH per-connection server daemon (10.0.0.1:55192). Jul 15 04:48:04.245643 sshd[4116]: Accepted publickey for core from 10.0.0.1 port 55192 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:04.247049 sshd-session[4116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:04.251074 systemd-logind[1507]: New session 14 of user core. Jul 15 04:48:04.265887 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 15 04:48:04.379978 sshd[4119]: Connection closed by 10.0.0.1 port 55192 Jul 15 04:48:04.380919 sshd-session[4116]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:04.384415 systemd[1]: sshd@13-10.0.0.81:22-10.0.0.1:55192.service: Deactivated successfully. Jul 15 04:48:04.386238 systemd[1]: session-14.scope: Deactivated successfully. Jul 15 04:48:04.388303 systemd-logind[1507]: Session 14 logged out. Waiting for processes to exit. Jul 15 04:48:04.389554 systemd-logind[1507]: Removed session 14. Jul 15 04:48:09.394860 systemd[1]: Started sshd@14-10.0.0.81:22-10.0.0.1:55200.service - OpenSSH per-connection server daemon (10.0.0.1:55200). 
Jul 15 04:48:09.448778 sshd[4132]: Accepted publickey for core from 10.0.0.1 port 55200 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:09.449918 sshd-session[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:09.453573 systemd-logind[1507]: New session 15 of user core. Jul 15 04:48:09.464923 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 15 04:48:09.574696 sshd[4135]: Connection closed by 10.0.0.1 port 55200 Jul 15 04:48:09.575221 sshd-session[4132]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:09.584056 systemd[1]: sshd@14-10.0.0.81:22-10.0.0.1:55200.service: Deactivated successfully. Jul 15 04:48:09.586102 systemd[1]: session-15.scope: Deactivated successfully. Jul 15 04:48:09.587042 systemd-logind[1507]: Session 15 logged out. Waiting for processes to exit. Jul 15 04:48:09.588905 systemd[1]: Started sshd@15-10.0.0.81:22-10.0.0.1:55212.service - OpenSSH per-connection server daemon (10.0.0.1:55212). Jul 15 04:48:09.590056 systemd-logind[1507]: Removed session 15. Jul 15 04:48:09.646872 sshd[4148]: Accepted publickey for core from 10.0.0.1 port 55212 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:09.647857 sshd-session[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:09.652112 systemd-logind[1507]: New session 16 of user core. Jul 15 04:48:09.662851 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 15 04:48:09.862276 sshd[4151]: Connection closed by 10.0.0.1 port 55212 Jul 15 04:48:09.862976 sshd-session[4148]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:09.871736 systemd[1]: sshd@15-10.0.0.81:22-10.0.0.1:55212.service: Deactivated successfully. Jul 15 04:48:09.873371 systemd[1]: session-16.scope: Deactivated successfully. Jul 15 04:48:09.874142 systemd-logind[1507]: Session 16 logged out. Waiting for processes to exit. Jul 15 04:48:09.876318 systemd[1]: Started sshd@16-10.0.0.81:22-10.0.0.1:55230.service - OpenSSH per-connection server daemon (10.0.0.1:55230). Jul 15 04:48:09.877066 systemd-logind[1507]: Removed session 16. Jul 15 04:48:09.942048 sshd[4162]: Accepted publickey for core from 10.0.0.1 port 55230 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:09.943190 sshd-session[4162]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:09.948330 systemd-logind[1507]: New session 17 of user core. Jul 15 04:48:09.961879 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 15 04:48:11.221043 sshd[4165]: Connection closed by 10.0.0.1 port 55230 Jul 15 04:48:11.221599 sshd-session[4162]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:11.229456 systemd[1]: sshd@16-10.0.0.81:22-10.0.0.1:55230.service: Deactivated successfully. Jul 15 04:48:11.230855 systemd[1]: session-17.scope: Deactivated successfully. Jul 15 04:48:11.231595 systemd-logind[1507]: Session 17 logged out. Waiting for processes to exit. Jul 15 04:48:11.234509 systemd[1]: Started sshd@17-10.0.0.81:22-10.0.0.1:55300.service - OpenSSH per-connection server daemon (10.0.0.1:55300). Jul 15 04:48:11.237282 systemd-logind[1507]: Removed session 17. 
Jul 15 04:48:11.292300 sshd[4185]: Accepted publickey for core from 10.0.0.1 port 55300 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:11.293430 sshd-session[4185]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:11.297870 systemd-logind[1507]: New session 18 of user core. Jul 15 04:48:11.307864 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 15 04:48:11.518086 sshd[4188]: Connection closed by 10.0.0.1 port 55300 Jul 15 04:48:11.518505 sshd-session[4185]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:11.528110 systemd[1]: sshd@17-10.0.0.81:22-10.0.0.1:55300.service: Deactivated successfully. Jul 15 04:48:11.531131 systemd[1]: session-18.scope: Deactivated successfully. Jul 15 04:48:11.531933 systemd-logind[1507]: Session 18 logged out. Waiting for processes to exit. Jul 15 04:48:11.534687 systemd[1]: Started sshd@18-10.0.0.81:22-10.0.0.1:55302.service - OpenSSH per-connection server daemon (10.0.0.1:55302). Jul 15 04:48:11.536044 systemd-logind[1507]: Removed session 18. Jul 15 04:48:11.586454 sshd[4199]: Accepted publickey for core from 10.0.0.1 port 55302 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:11.588370 sshd-session[4199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:11.594106 systemd-logind[1507]: New session 19 of user core. Jul 15 04:48:11.604900 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 15 04:48:11.708965 sshd[4202]: Connection closed by 10.0.0.1 port 55302 Jul 15 04:48:11.709273 sshd-session[4199]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:11.712611 systemd[1]: sshd@18-10.0.0.81:22-10.0.0.1:55302.service: Deactivated successfully. Jul 15 04:48:11.715451 systemd[1]: session-19.scope: Deactivated successfully. Jul 15 04:48:11.716410 systemd-logind[1507]: Session 19 logged out. Waiting for processes to exit. Jul 15 04:48:11.717708 systemd-logind[1507]: Removed session 19. Jul 15 04:48:16.724930 systemd[1]: Started sshd@19-10.0.0.81:22-10.0.0.1:47066.service - OpenSSH per-connection server daemon (10.0.0.1:47066). Jul 15 04:48:16.785967 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 47066 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:16.787030 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:16.792094 systemd-logind[1507]: New session 20 of user core. Jul 15 04:48:16.805857 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 15 04:48:16.911609 sshd[4225]: Connection closed by 10.0.0.1 port 47066 Jul 15 04:48:16.911936 sshd-session[4222]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:16.915424 systemd-logind[1507]: Session 20 logged out. Waiting for processes to exit. Jul 15 04:48:16.916035 systemd[1]: sshd@19-10.0.0.81:22-10.0.0.1:47066.service: Deactivated successfully. Jul 15 04:48:16.917626 systemd[1]: session-20.scope: Deactivated successfully. Jul 15 04:48:16.919022 systemd-logind[1507]: Removed session 20. Jul 15 04:48:21.926917 systemd[1]: Started sshd@20-10.0.0.81:22-10.0.0.1:47078.service - OpenSSH per-connection server daemon (10.0.0.1:47078). 
Jul 15 04:48:21.972654 sshd[4241]: Accepted publickey for core from 10.0.0.1 port 47078 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:21.973823 sshd-session[4241]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:21.977773 systemd-logind[1507]: New session 21 of user core. Jul 15 04:48:21.991878 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 15 04:48:22.120806 sshd[4244]: Connection closed by 10.0.0.1 port 47078 Jul 15 04:48:22.121326 sshd-session[4241]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:22.124857 systemd[1]: sshd@20-10.0.0.81:22-10.0.0.1:47078.service: Deactivated successfully. Jul 15 04:48:22.127822 systemd[1]: session-21.scope: Deactivated successfully. Jul 15 04:48:22.129183 systemd-logind[1507]: Session 21 logged out. Waiting for processes to exit. Jul 15 04:48:22.131788 systemd-logind[1507]: Removed session 21. Jul 15 04:48:27.137476 systemd[1]: Started sshd@21-10.0.0.81:22-10.0.0.1:51464.service - OpenSSH per-connection server daemon (10.0.0.1:51464). Jul 15 04:48:27.186818 sshd[4257]: Accepted publickey for core from 10.0.0.1 port 51464 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:27.188153 sshd-session[4257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:27.192210 systemd-logind[1507]: New session 22 of user core. Jul 15 04:48:27.210949 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 15 04:48:27.327638 sshd[4260]: Connection closed by 10.0.0.1 port 51464 Jul 15 04:48:27.328219 sshd-session[4257]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:27.335936 systemd[1]: sshd@21-10.0.0.81:22-10.0.0.1:51464.service: Deactivated successfully. Jul 15 04:48:27.337563 systemd[1]: session-22.scope: Deactivated successfully. Jul 15 04:48:27.340528 systemd-logind[1507]: Session 22 logged out. Waiting for processes to exit. Jul 15 04:48:27.343690 systemd[1]: Started sshd@22-10.0.0.81:22-10.0.0.1:51470.service - OpenSSH per-connection server daemon (10.0.0.1:51470). Jul 15 04:48:27.344331 systemd-logind[1507]: Removed session 22. Jul 15 04:48:27.403170 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 51470 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:27.404396 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:27.410043 systemd-logind[1507]: New session 23 of user core. Jul 15 04:48:27.420865 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 15 04:48:29.737196 containerd[1523]: time="2025-07-15T04:48:29.737154144Z" level=info msg="StopContainer for \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" with timeout 30 (s)" Jul 15 04:48:29.738162 containerd[1523]: time="2025-07-15T04:48:29.738133008Z" level=info msg="Stop container \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" with signal terminated" Jul 15 04:48:29.747973 systemd[1]: cri-containerd-d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77.scope: Deactivated successfully. 
Jul 15 04:48:29.749366 containerd[1523]: time="2025-07-15T04:48:29.749189070Z" level=info msg="received exit event container_id:\"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" id:\"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" pid:3214 exited_at:{seconds:1752554909 nanos:748977473}" Jul 15 04:48:29.749502 containerd[1523]: time="2025-07-15T04:48:29.749269749Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" id:\"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" pid:3214 exited_at:{seconds:1752554909 nanos:748977473}" Jul 15 04:48:29.766986 containerd[1523]: time="2025-07-15T04:48:29.766915664Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 15 04:48:29.769328 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77-rootfs.mount: Deactivated successfully. Jul 15 04:48:29.771524 containerd[1523]: time="2025-07-15T04:48:29.771488270Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" id:\"5cdefa3e7002c0599d676dc1d642f6de94b169efdaf4b25b0aee2a91ce3016a1\" pid:4304 exited_at:{seconds:1752554909 nanos:771214434}" Jul 15 04:48:29.776956 containerd[1523]: time="2025-07-15T04:48:29.776907902Z" level=info msg="StopContainer for \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" with timeout 2 (s)" Jul 15 04:48:29.777248 containerd[1523]: time="2025-07-15T04:48:29.777223057Z" level=info msg="Stop container \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" with signal terminated" Jul 15 04:48:29.781658 containerd[1523]: time="2025-07-15T04:48:29.781622906Z" level=info msg="StopContainer for \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" returns successfully" Jul 15 04:48:29.784936 systemd-networkd[1431]: lxc_health: Link DOWN Jul 15 04:48:29.786022 containerd[1523]: time="2025-07-15T04:48:29.785230808Z" level=info msg="StopPodSandbox for \"2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec\"" Jul 15 04:48:29.784942 systemd-networkd[1431]: lxc_health: Lost carrier Jul 15 04:48:29.791438 containerd[1523]: time="2025-07-15T04:48:29.791386989Z" level=info msg="Container to stop \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:48:29.800621 systemd[1]: cri-containerd-2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec.scope: Deactivated successfully. Jul 15 04:48:29.802099 containerd[1523]: time="2025-07-15T04:48:29.801863140Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec\" id:\"2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec\" pid:2880 exit_status:137 exited_at:{seconds:1752554909 nanos:800535841}" Jul 15 04:48:29.807186 systemd[1]: cri-containerd-5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf.scope: Deactivated successfully. 
Jul 15 04:48:29.807949 systemd[1]: cri-containerd-5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf.scope: Consumed 6.800s CPU time, 123.9M memory peak, 136K read from disk, 12.9M written to disk. Jul 15 04:48:29.809867 containerd[1523]: time="2025-07-15T04:48:29.809801091Z" level=info msg="received exit event container_id:\"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" id:\"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" pid:3327 exited_at:{seconds:1752554909 nanos:809510016}" Jul 15 04:48:29.825438 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec-rootfs.mount: Deactivated successfully. Jul 15 04:48:29.829978 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf-rootfs.mount: Deactivated successfully. Jul 15 04:48:29.833263 containerd[1523]: time="2025-07-15T04:48:29.833185194Z" level=info msg="shim disconnected" id=2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec namespace=k8s.io Jul 15 04:48:29.840743 containerd[1523]: time="2025-07-15T04:48:29.833218553Z" level=warning msg="cleaning up after shim disconnected" id=2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec namespace=k8s.io Jul 15 04:48:29.840743 containerd[1523]: time="2025-07-15T04:48:29.840634794Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 04:48:29.840743 containerd[1523]: time="2025-07-15T04:48:29.837027972Z" level=info msg="StopContainer for \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" returns successfully" Jul 15 04:48:29.841567 containerd[1523]: time="2025-07-15T04:48:29.841202384Z" level=info msg="StopPodSandbox for \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\"" Jul 15 04:48:29.841567 containerd[1523]: time="2025-07-15T04:48:29.841305063Z" level=info msg="Container to stop \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:48:29.841567 containerd[1523]: time="2025-07-15T04:48:29.841317543Z" level=info msg="Container to stop \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:48:29.841567 containerd[1523]: time="2025-07-15T04:48:29.841328542Z" level=info msg="Container to stop \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:48:29.841567 containerd[1523]: time="2025-07-15T04:48:29.841337142Z" level=info msg="Container to stop \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:48:29.841567 containerd[1523]: time="2025-07-15T04:48:29.841348022Z" level=info msg="Container to stop \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 15 04:48:29.847084 systemd[1]: cri-containerd-2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e.scope: Deactivated successfully. 
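For scale, the systemd accounting above ("Consumed 6.800s CPU time") can be set against the cilium-agent container's wall-clock lifetime, bracketed by the StartContainer return at 04:47:35.509 further above and the exited_at timestamp in the exit event above. A small sketch with those timestamps, where the epoch conversion is the only step added here:

    from datetime import datetime, timezone

    # StartContainer for the cilium-agent container returned at 04:47:35.509002099Z
    # (logged further above); exited_at is quoted in the exit event above
    started = datetime(2025, 7, 15, 4, 47, 35, 509002, tzinfo=timezone.utc)
    exited = datetime.fromtimestamp(1752554909 + 809510016 / 1e9, tz=timezone.utc)

    lifetime = (exited - started).total_seconds()      # ~54.3 s wall clock
    cpu_seconds = 6.8                                   # "Consumed 6.800s CPU time"
    print(f"~{lifetime:.1f}s wall clock, ~{100 * cpu_seconds / lifetime:.0f}% of one CPU on average")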
Jul 15 04:48:29.855170 containerd[1523]: time="2025-07-15T04:48:29.855121760Z" level=info msg="received exit event sandbox_id:\"2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec\" exit_status:137 exited_at:{seconds:1752554909 nanos:800535841}" Jul 15 04:48:29.855404 containerd[1523]: time="2025-07-15T04:48:29.855372116Z" level=info msg="TearDown network for sandbox \"2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec\" successfully" Jul 15 04:48:29.855404 containerd[1523]: time="2025-07-15T04:48:29.855398515Z" level=info msg="StopPodSandbox for \"2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec\" returns successfully" Jul 15 04:48:29.855881 containerd[1523]: time="2025-07-15T04:48:29.855845468Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" id:\"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" pid:3327 exited_at:{seconds:1752554909 nanos:809510016}" Jul 15 04:48:29.855939 containerd[1523]: time="2025-07-15T04:48:29.855898867Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" id:\"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" pid:2818 exit_status:137 exited_at:{seconds:1752554909 nanos:847798798}" Jul 15 04:48:29.856583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ddc1d990c6978c8c5f0e2256cbd5d026f279853edcb3743a72c3468290489ec-shm.mount: Deactivated successfully. Jul 15 04:48:29.871336 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e-rootfs.mount: Deactivated successfully. Jul 15 04:48:29.877347 containerd[1523]: time="2025-07-15T04:48:29.876476335Z" level=info msg="received exit event sandbox_id:\"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" exit_status:137 exited_at:{seconds:1752554909 nanos:847798798}" Jul 15 04:48:29.877818 containerd[1523]: time="2025-07-15T04:48:29.876386936Z" level=info msg="shim disconnected" id=2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e namespace=k8s.io Jul 15 04:48:29.877818 containerd[1523]: time="2025-07-15T04:48:29.877621916Z" level=warning msg="cleaning up after shim disconnected" id=2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e namespace=k8s.io Jul 15 04:48:29.877818 containerd[1523]: time="2025-07-15T04:48:29.877656156Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 15 04:48:29.877818 containerd[1523]: time="2025-07-15T04:48:29.876693051Z" level=info msg="TearDown network for sandbox \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" successfully" Jul 15 04:48:29.877818 containerd[1523]: time="2025-07-15T04:48:29.877706155Z" level=info msg="StopPodSandbox for \"2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e\" returns successfully" Jul 15 04:48:30.009354 kubelet[2664]: I0715 04:48:30.009203 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-bpf-maps\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.009354 kubelet[2664]: I0715 04:48:30.009255 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: 
\"kubernetes.io/secret/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-clustermesh-secrets\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.009354 kubelet[2664]: I0715 04:48:30.009276 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-config-path\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.009354 kubelet[2664]: I0715 04:48:30.009293 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-hostproc\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.009354 kubelet[2664]: I0715 04:48:30.009310 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-host-proc-sys-net\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.009354 kubelet[2664]: I0715 04:48:30.009326 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-hubble-tls\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.011534 kubelet[2664]: I0715 04:48:30.009346 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f934e2d-4122-4519-b637-21a6f4fbb090-cilium-config-path\") pod \"8f934e2d-4122-4519-b637-21a6f4fbb090\" (UID: \"8f934e2d-4122-4519-b637-21a6f4fbb090\") " Jul 15 04:48:30.011534 kubelet[2664]: I0715 04:48:30.009361 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-lib-modules\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.011534 kubelet[2664]: I0715 04:48:30.009377 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-host-proc-sys-kernel\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.011534 kubelet[2664]: I0715 04:48:30.009393 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vq8m\" (UniqueName: \"kubernetes.io/projected/8f934e2d-4122-4519-b637-21a6f4fbb090-kube-api-access-5vq8m\") pod \"8f934e2d-4122-4519-b637-21a6f4fbb090\" (UID: \"8f934e2d-4122-4519-b637-21a6f4fbb090\") " Jul 15 04:48:30.011534 kubelet[2664]: I0715 04:48:30.009411 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-cgroup\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.011534 kubelet[2664]: I0715 04:48:30.009425 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: 
\"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-etc-cni-netd\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.012355 kubelet[2664]: I0715 04:48:30.009438 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-xtables-lock\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.012355 kubelet[2664]: I0715 04:48:30.009452 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cni-path\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.012355 kubelet[2664]: I0715 04:48:30.009488 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-run\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.012355 kubelet[2664]: I0715 04:48:30.009512 2664 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q6xxg\" (UniqueName: \"kubernetes.io/projected/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-kube-api-access-q6xxg\") pod \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\" (UID: \"b6f73f09-efff-4dd4-9d83-b0de6a2fe64c\") " Jul 15 04:48:30.022350 kubelet[2664]: I0715 04:48:30.014675 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022350 kubelet[2664]: I0715 04:48:30.014704 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022350 kubelet[2664]: I0715 04:48:30.014791 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cni-path" (OuterVolumeSpecName: "cni-path") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022350 kubelet[2664]: I0715 04:48:30.014811 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022350 kubelet[2664]: I0715 04:48:30.014842 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022586 kubelet[2664]: I0715 04:48:30.015087 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022586 kubelet[2664]: I0715 04:48:30.015126 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022586 kubelet[2664]: I0715 04:48:30.015382 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-hostproc" (OuterVolumeSpecName: "hostproc") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022586 kubelet[2664]: I0715 04:48:30.017144 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 04:48:30.022586 kubelet[2664]: I0715 04:48:30.017193 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022694 kubelet[2664]: I0715 04:48:30.018996 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 15 04:48:30.022694 kubelet[2664]: I0715 04:48:30.022304 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f934e2d-4122-4519-b637-21a6f4fbb090-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8f934e2d-4122-4519-b637-21a6f4fbb090" (UID: "8f934e2d-4122-4519-b637-21a6f4fbb090"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 15 04:48:30.022694 kubelet[2664]: I0715 04:48:30.022550 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f934e2d-4122-4519-b637-21a6f4fbb090-kube-api-access-5vq8m" (OuterVolumeSpecName: "kube-api-access-5vq8m") pod "8f934e2d-4122-4519-b637-21a6f4fbb090" (UID: "8f934e2d-4122-4519-b637-21a6f4fbb090"). InnerVolumeSpecName "kube-api-access-5vq8m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 04:48:30.023106 kubelet[2664]: I0715 04:48:30.022786 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 15 04:48:30.023276 kubelet[2664]: I0715 04:48:30.023247 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 04:48:30.023875 kubelet[2664]: I0715 04:48:30.023844 2664 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-kube-api-access-q6xxg" (OuterVolumeSpecName: "kube-api-access-q6xxg") pod "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" (UID: "b6f73f09-efff-4dd4-9d83-b0de6a2fe64c"). InnerVolumeSpecName "kube-api-access-q6xxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 15 04:48:30.110275 kubelet[2664]: I0715 04:48:30.110227 2664 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110275 kubelet[2664]: I0715 04:48:30.110262 2664 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110275 kubelet[2664]: I0715 04:48:30.110274 2664 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110275 kubelet[2664]: I0715 04:48:30.110283 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8f934e2d-4122-4519-b637-21a6f4fbb090-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110275 kubelet[2664]: I0715 04:48:30.110291 2664 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110501 kubelet[2664]: I0715 04:48:30.110299 2664 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110501 kubelet[2664]: I0715 04:48:30.110307 2664 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vq8m\" (UniqueName: \"kubernetes.io/projected/8f934e2d-4122-4519-b637-21a6f4fbb090-kube-api-access-5vq8m\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110501 kubelet[2664]: I0715 04:48:30.110315 2664 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110501 kubelet[2664]: I0715 04:48:30.110322 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110501 kubelet[2664]: I0715 04:48:30.110330 2664 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110501 kubelet[2664]: I0715 04:48:30.110337 2664 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110501 kubelet[2664]: I0715 04:48:30.110344 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110501 kubelet[2664]: I0715 04:48:30.110351 2664 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q6xxg\" (UniqueName: \"kubernetes.io/projected/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-kube-api-access-q6xxg\") on node 
\"localhost\" DevicePath \"\"" Jul 15 04:48:30.110661 kubelet[2664]: I0715 04:48:30.110358 2664 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110661 kubelet[2664]: I0715 04:48:30.110366 2664 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.110661 kubelet[2664]: I0715 04:48:30.110373 2664 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 15 04:48:30.567351 kubelet[2664]: I0715 04:48:30.567305 2664 scope.go:117] "RemoveContainer" containerID="d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77" Jul 15 04:48:30.569145 containerd[1523]: time="2025-07-15T04:48:30.569027352Z" level=info msg="RemoveContainer for \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\"" Jul 15 04:48:30.573836 systemd[1]: Removed slice kubepods-besteffort-pod8f934e2d_4122_4519_b637_21a6f4fbb090.slice - libcontainer container kubepods-besteffort-pod8f934e2d_4122_4519_b637_21a6f4fbb090.slice. Jul 15 04:48:30.576811 containerd[1523]: time="2025-07-15T04:48:30.576749710Z" level=info msg="RemoveContainer for \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" returns successfully" Jul 15 04:48:30.577245 kubelet[2664]: I0715 04:48:30.577127 2664 scope.go:117] "RemoveContainer" containerID="d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77" Jul 15 04:48:30.579147 systemd[1]: Removed slice kubepods-burstable-podb6f73f09_efff_4dd4_9d83_b0de6a2fe64c.slice - libcontainer container kubepods-burstable-podb6f73f09_efff_4dd4_9d83_b0de6a2fe64c.slice. Jul 15 04:48:30.579246 systemd[1]: kubepods-burstable-podb6f73f09_efff_4dd4_9d83_b0de6a2fe64c.slice: Consumed 7.015s CPU time, 124.2M memory peak, 144K read from disk, 12.9M written to disk. 
Jul 15 04:48:30.585079 containerd[1523]: time="2025-07-15T04:48:30.577405900Z" level=error msg="ContainerStatus for \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\": not found" Jul 15 04:48:30.588540 kubelet[2664]: E0715 04:48:30.588409 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\": not found" containerID="d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77" Jul 15 04:48:30.588635 kubelet[2664]: I0715 04:48:30.588466 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77"} err="failed to get container status \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\": rpc error: code = NotFound desc = an error occurred when try to find container \"d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77\": not found" Jul 15 04:48:30.588711 kubelet[2664]: I0715 04:48:30.588698 2664 scope.go:117] "RemoveContainer" containerID="5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf" Jul 15 04:48:30.590427 containerd[1523]: time="2025-07-15T04:48:30.590392135Z" level=info msg="RemoveContainer for \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\"" Jul 15 04:48:30.595264 containerd[1523]: time="2025-07-15T04:48:30.595223018Z" level=info msg="RemoveContainer for \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" returns successfully" Jul 15 04:48:30.595547 kubelet[2664]: I0715 04:48:30.595459 2664 scope.go:117] "RemoveContainer" containerID="a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa" Jul 15 04:48:30.599289 containerd[1523]: time="2025-07-15T04:48:30.599163436Z" level=info msg="RemoveContainer for \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\"" Jul 15 04:48:30.611008 containerd[1523]: time="2025-07-15T04:48:30.610963010Z" level=info msg="RemoveContainer for \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\" returns successfully" Jul 15 04:48:30.611204 kubelet[2664]: I0715 04:48:30.611180 2664 scope.go:117] "RemoveContainer" containerID="de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8" Jul 15 04:48:30.613500 containerd[1523]: time="2025-07-15T04:48:30.613459210Z" level=info msg="RemoveContainer for \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\"" Jul 15 04:48:30.619038 containerd[1523]: time="2025-07-15T04:48:30.619001003Z" level=info msg="RemoveContainer for \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\" returns successfully" Jul 15 04:48:30.619245 kubelet[2664]: I0715 04:48:30.619216 2664 scope.go:117] "RemoveContainer" containerID="64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c" Jul 15 04:48:30.624391 containerd[1523]: time="2025-07-15T04:48:30.624358718Z" level=info msg="RemoveContainer for \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\"" Jul 15 04:48:30.627786 containerd[1523]: time="2025-07-15T04:48:30.627757105Z" level=info msg="RemoveContainer for \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\" returns successfully" Jul 15 04:48:30.627998 kubelet[2664]: I0715 04:48:30.627975 
2664 scope.go:117] "RemoveContainer" containerID="5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407" Jul 15 04:48:30.629740 containerd[1523]: time="2025-07-15T04:48:30.629496517Z" level=info msg="RemoveContainer for \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\"" Jul 15 04:48:30.642946 containerd[1523]: time="2025-07-15T04:48:30.642908625Z" level=info msg="RemoveContainer for \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\" returns successfully" Jul 15 04:48:30.643318 kubelet[2664]: I0715 04:48:30.643288 2664 scope.go:117] "RemoveContainer" containerID="5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf" Jul 15 04:48:30.643597 containerd[1523]: time="2025-07-15T04:48:30.643560535Z" level=error msg="ContainerStatus for \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\": not found" Jul 15 04:48:30.643700 kubelet[2664]: E0715 04:48:30.643679 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\": not found" containerID="5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf" Jul 15 04:48:30.643758 kubelet[2664]: I0715 04:48:30.643710 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf"} err="failed to get container status \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\": rpc error: code = NotFound desc = an error occurred when try to find container \"5ea8ae611b10f9508bb1935a1bd699055fc86ce5cdb05ec3b7120f75dd1a88cf\": not found" Jul 15 04:48:30.643758 kubelet[2664]: I0715 04:48:30.643756 2664 scope.go:117] "RemoveContainer" containerID="a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa" Jul 15 04:48:30.644032 containerd[1523]: time="2025-07-15T04:48:30.643941289Z" level=error msg="ContainerStatus for \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\": not found" Jul 15 04:48:30.644077 kubelet[2664]: E0715 04:48:30.644046 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\": not found" containerID="a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa" Jul 15 04:48:30.644077 kubelet[2664]: I0715 04:48:30.644064 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa"} err="failed to get container status \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"a522f420f22f01f3f3e838c9c980f987d237fae201cdfc3c07b0fa7356291eaa\": not found" Jul 15 04:48:30.644077 kubelet[2664]: I0715 04:48:30.644077 2664 scope.go:117] "RemoveContainer" containerID="de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8" Jul 15 04:48:30.644372 containerd[1523]: 
time="2025-07-15T04:48:30.644346403Z" level=error msg="ContainerStatus for \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\": not found" Jul 15 04:48:30.644603 kubelet[2664]: E0715 04:48:30.644568 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\": not found" containerID="de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8" Jul 15 04:48:30.644649 kubelet[2664]: I0715 04:48:30.644606 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8"} err="failed to get container status \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\": rpc error: code = NotFound desc = an error occurred when try to find container \"de0ab44e883081f1eba0b9be3ce3e17039df590c139cc4619663c0520e8f31e8\": not found" Jul 15 04:48:30.644649 kubelet[2664]: I0715 04:48:30.644625 2664 scope.go:117] "RemoveContainer" containerID="64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c" Jul 15 04:48:30.644897 containerd[1523]: time="2025-07-15T04:48:30.644866794Z" level=error msg="ContainerStatus for \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\": not found" Jul 15 04:48:30.645182 kubelet[2664]: E0715 04:48:30.645164 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\": not found" containerID="64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c" Jul 15 04:48:30.645237 kubelet[2664]: I0715 04:48:30.645184 2664 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c"} err="failed to get container status \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\": rpc error: code = NotFound desc = an error occurred when try to find container \"64c01b0f8202c646c407c22a2f60eb5383e07b37ac327d38ddd61b0923fa074c\": not found" Jul 15 04:48:30.645237 kubelet[2664]: I0715 04:48:30.645197 2664 scope.go:117] "RemoveContainer" containerID="5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407" Jul 15 04:48:30.645422 containerd[1523]: time="2025-07-15T04:48:30.645394186Z" level=error msg="ContainerStatus for \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\": not found" Jul 15 04:48:30.645756 kubelet[2664]: E0715 04:48:30.645684 2664 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\": not found" containerID="5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407" Jul 15 04:48:30.645756 kubelet[2664]: I0715 04:48:30.645713 2664 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407"} err="failed to get container status \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\": rpc error: code = NotFound desc = an error occurred when try to find container \"5721462d3c3fa1a42fa6d1c0494038bec023994c5b8f166f41d5d2c1befbd407\": not found" Jul 15 04:48:30.769349 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ad5e5ffa5a214cdc137b5c211e2aa3f8325ba71da3f26409bf516d11f88757e-shm.mount: Deactivated successfully. Jul 15 04:48:30.769466 systemd[1]: var-lib-kubelet-pods-8f934e2d\x2d4122\x2d4519\x2db637\x2d21a6f4fbb090-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5vq8m.mount: Deactivated successfully. Jul 15 04:48:30.769570 systemd[1]: var-lib-kubelet-pods-b6f73f09\x2defff\x2d4dd4\x2d9d83\x2db0de6a2fe64c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dq6xxg.mount: Deactivated successfully. Jul 15 04:48:30.769622 systemd[1]: var-lib-kubelet-pods-b6f73f09\x2defff\x2d4dd4\x2d9d83\x2db0de6a2fe64c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 15 04:48:30.769668 systemd[1]: var-lib-kubelet-pods-b6f73f09\x2defff\x2d4dd4\x2d9d83\x2db0de6a2fe64c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 15 04:48:31.351319 kubelet[2664]: I0715 04:48:31.350827 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f934e2d-4122-4519-b637-21a6f4fbb090" path="/var/lib/kubelet/pods/8f934e2d-4122-4519-b637-21a6f4fbb090/volumes" Jul 15 04:48:31.351319 kubelet[2664]: I0715 04:48:31.351243 2664 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" path="/var/lib/kubelet/pods/b6f73f09-efff-4dd4-9d83-b0de6a2fe64c/volumes" Jul 15 04:48:31.693651 sshd[4276]: Connection closed by 10.0.0.1 port 51470 Jul 15 04:48:31.694066 sshd-session[4273]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:31.705253 systemd[1]: sshd@22-10.0.0.81:22-10.0.0.1:51470.service: Deactivated successfully. Jul 15 04:48:31.708939 systemd[1]: session-23.scope: Deactivated successfully. Jul 15 04:48:31.709143 systemd[1]: session-23.scope: Consumed 1.655s CPU time, 25.2M memory peak. Jul 15 04:48:31.710068 systemd-logind[1507]: Session 23 logged out. Waiting for processes to exit. Jul 15 04:48:31.713278 systemd-logind[1507]: Removed session 23. Jul 15 04:48:31.715475 systemd[1]: Started sshd@23-10.0.0.81:22-10.0.0.1:51480.service - OpenSSH per-connection server daemon (10.0.0.1:51480). Jul 15 04:48:31.780412 sshd[4427]: Accepted publickey for core from 10.0.0.1 port 51480 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:31.781902 sshd-session[4427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:31.786685 systemd-logind[1507]: New session 24 of user core. Jul 15 04:48:31.800911 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 15 04:48:32.415160 sshd[4430]: Connection closed by 10.0.0.1 port 51480 Jul 15 04:48:32.416868 sshd-session[4427]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:32.424927 systemd[1]: sshd@23-10.0.0.81:22-10.0.0.1:51480.service: Deactivated successfully. Jul 15 04:48:32.429316 systemd[1]: session-24.scope: Deactivated successfully. Jul 15 04:48:32.430948 systemd-logind[1507]: Session 24 logged out. Waiting for processes to exit. 
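The paired "RemoveContainer ... returns successfully" and later "ContainerStatus ... not found" entries above are the expected post-deletion pattern: once containerd has removed a container, any subsequent status query returns gRPC NotFound, which the kubelet logs and then treats as already gone. A rough sketch of that pattern against the CRI runtime service follows; the socket path, timeout, and function name are assumptions for illustration and are not taken from this log.

    // crinotfound.go - sketch: query a container's status over CRI and treat
    // gRPC NotFound as "already removed", mirroring the kubelet entries above.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/credentials/insecure"
        "google.golang.org/grpc/status"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func containerGone(ctx context.Context, rt runtimeapi.RuntimeServiceClient, id string) (bool, error) {
        _, err := rt.ContainerStatus(ctx, &runtimeapi.ContainerStatusRequest{ContainerId: id})
        if err == nil {
            return false, nil // still known to the runtime
        }
        if status.Code(err) == codes.NotFound {
            return true, nil // the same NotFound the kubelet logs after a successful remove
        }
        return false, err
    }

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        gone, err := containerGone(ctx, rt, "d006dd53652dda122e9334347574a1da15df3434a94293ebc1a1c006aa9ecc77")
        fmt.Println("gone:", gone, "err:", err)
    }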
Jul 15 04:48:32.436455 kubelet[2664]: E0715 04:48:32.436411 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" containerName="mount-cgroup" Jul 15 04:48:32.436455 kubelet[2664]: E0715 04:48:32.436449 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" containerName="mount-bpf-fs" Jul 15 04:48:32.436455 kubelet[2664]: E0715 04:48:32.436457 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" containerName="cilium-agent" Jul 15 04:48:32.436455 kubelet[2664]: E0715 04:48:32.436463 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="8f934e2d-4122-4519-b637-21a6f4fbb090" containerName="cilium-operator" Jul 15 04:48:32.436455 kubelet[2664]: E0715 04:48:32.436469 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" containerName="clean-cilium-state" Jul 15 04:48:32.436865 kubelet[2664]: E0715 04:48:32.436477 2664 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" containerName="apply-sysctl-overwrites" Jul 15 04:48:32.436865 kubelet[2664]: I0715 04:48:32.436499 2664 memory_manager.go:354] "RemoveStaleState removing state" podUID="b6f73f09-efff-4dd4-9d83-b0de6a2fe64c" containerName="cilium-agent" Jul 15 04:48:32.436865 kubelet[2664]: I0715 04:48:32.436516 2664 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f934e2d-4122-4519-b637-21a6f4fbb090" containerName="cilium-operator" Jul 15 04:48:32.440257 systemd[1]: Started sshd@24-10.0.0.81:22-10.0.0.1:51486.service - OpenSSH per-connection server daemon (10.0.0.1:51486). Jul 15 04:48:32.443050 systemd-logind[1507]: Removed session 24. Jul 15 04:48:32.456471 systemd[1]: Created slice kubepods-burstable-podb4c1a2d0_ad45_427f_879e_75587ce0aca4.slice - libcontainer container kubepods-burstable-podb4c1a2d0_ad45_427f_879e_75587ce0aca4.slice. Jul 15 04:48:32.509114 sshd[4442]: Accepted publickey for core from 10.0.0.1 port 51486 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:32.509965 sshd-session[4442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:32.516513 systemd-logind[1507]: New session 25 of user core. Jul 15 04:48:32.525882 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 15 04:48:32.578242 sshd[4445]: Connection closed by 10.0.0.1 port 51486 Jul 15 04:48:32.579530 sshd-session[4442]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:32.587143 systemd[1]: sshd@24-10.0.0.81:22-10.0.0.1:51486.service: Deactivated successfully. Jul 15 04:48:32.588649 systemd[1]: session-25.scope: Deactivated successfully. Jul 15 04:48:32.589357 systemd-logind[1507]: Session 25 logged out. Waiting for processes to exit. Jul 15 04:48:32.593988 systemd[1]: Started sshd@25-10.0.0.81:22-10.0.0.1:35338.service - OpenSSH per-connection server daemon (10.0.0.1:35338). Jul 15 04:48:32.594945 systemd-logind[1507]: Removed session 25. 
Jul 15 04:48:32.625587 kubelet[2664]: I0715 04:48:32.625221 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-cilium-cgroup\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625587 kubelet[2664]: I0715 04:48:32.625257 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-xtables-lock\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625587 kubelet[2664]: I0715 04:48:32.625276 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b4c1a2d0-ad45-427f-879e-75587ce0aca4-hubble-tls\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625587 kubelet[2664]: I0715 04:48:32.625293 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-cni-path\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625587 kubelet[2664]: I0715 04:48:32.625309 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b4c1a2d0-ad45-427f-879e-75587ce0aca4-cilium-config-path\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625587 kubelet[2664]: I0715 04:48:32.625324 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-bpf-maps\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625820 kubelet[2664]: I0715 04:48:32.625340 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-hostproc\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625820 kubelet[2664]: I0715 04:48:32.625355 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-lib-modules\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625820 kubelet[2664]: I0715 04:48:32.625371 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dljlv\" (UniqueName: \"kubernetes.io/projected/b4c1a2d0-ad45-427f-879e-75587ce0aca4-kube-api-access-dljlv\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625820 kubelet[2664]: I0715 04:48:32.625390 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-host-proc-sys-net\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625820 kubelet[2664]: I0715 04:48:32.625404 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-host-proc-sys-kernel\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625820 kubelet[2664]: I0715 04:48:32.625430 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-etc-cni-netd\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625931 kubelet[2664]: I0715 04:48:32.625452 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b4c1a2d0-ad45-427f-879e-75587ce0aca4-cilium-run\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625931 kubelet[2664]: I0715 04:48:32.625466 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b4c1a2d0-ad45-427f-879e-75587ce0aca4-clustermesh-secrets\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.625931 kubelet[2664]: I0715 04:48:32.625484 2664 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b4c1a2d0-ad45-427f-879e-75587ce0aca4-cilium-ipsec-secrets\") pod \"cilium-zmr85\" (UID: \"b4c1a2d0-ad45-427f-879e-75587ce0aca4\") " pod="kube-system/cilium-zmr85" Jul 15 04:48:32.666240 sshd[4452]: Accepted publickey for core from 10.0.0.1 port 35338 ssh2: RSA SHA256:sVpqIt/le8mJMWBRnqSUOr83Z2pgjga2fm8CYRKAYYo Jul 15 04:48:32.668426 sshd-session[4452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 15 04:48:32.672663 systemd-logind[1507]: New session 26 of user core. Jul 15 04:48:32.682871 systemd[1]: Started session-26.scope - Session 26 of User core. Jul 15 04:48:32.761181 containerd[1523]: time="2025-07-15T04:48:32.761141251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmr85,Uid:b4c1a2d0-ad45-427f-879e-75587ce0aca4,Namespace:kube-system,Attempt:0,}" Jul 15 04:48:32.779930 containerd[1523]: time="2025-07-15T04:48:32.779876048Z" level=info msg="connecting to shim 6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971" address="unix:///run/containerd/s/cc022626a23f752f91f293c98431f77c358ed7f2862080ae5f9f90423cd82f10" namespace=k8s.io protocol=ttrpc version=3 Jul 15 04:48:32.810926 systemd[1]: Started cri-containerd-6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971.scope - libcontainer container 6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971. 
Jul 15 04:48:32.835733 containerd[1523]: time="2025-07-15T04:48:32.835690764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-zmr85,Uid:b4c1a2d0-ad45-427f-879e-75587ce0aca4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\"" Jul 15 04:48:32.838061 containerd[1523]: time="2025-07-15T04:48:32.838022128Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 15 04:48:32.851179 containerd[1523]: time="2025-07-15T04:48:32.851136490Z" level=info msg="Container 10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:48:32.857728 containerd[1523]: time="2025-07-15T04:48:32.857677791Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab\"" Jul 15 04:48:32.859479 containerd[1523]: time="2025-07-15T04:48:32.859442844Z" level=info msg="StartContainer for \"10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab\"" Jul 15 04:48:32.861195 containerd[1523]: time="2025-07-15T04:48:32.861140779Z" level=info msg="connecting to shim 10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab" address="unix:///run/containerd/s/cc022626a23f752f91f293c98431f77c358ed7f2862080ae5f9f90423cd82f10" protocol=ttrpc version=3 Jul 15 04:48:32.887967 systemd[1]: Started cri-containerd-10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab.scope - libcontainer container 10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab. Jul 15 04:48:32.913153 containerd[1523]: time="2025-07-15T04:48:32.913118472Z" level=info msg="StartContainer for \"10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab\" returns successfully" Jul 15 04:48:32.947387 systemd[1]: cri-containerd-10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab.scope: Deactivated successfully. 
Jul 15 04:48:32.950091 containerd[1523]: time="2025-07-15T04:48:32.950062393Z" level=info msg="received exit event container_id:\"10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab\" id:\"10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab\" pid:4525 exited_at:{seconds:1752554912 nanos:949512122}" Jul 15 04:48:32.950445 containerd[1523]: time="2025-07-15T04:48:32.950393748Z" level=info msg="TaskExit event in podsandbox handler container_id:\"10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab\" id:\"10f62ab85a2704e6933fbff0c8f02e65d6724b9fec7b4a9e0f8da0e6f24c86ab\" pid:4525 exited_at:{seconds:1752554912 nanos:949512122}" Jul 15 04:48:33.398621 kubelet[2664]: E0715 04:48:33.398555 2664 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 15 04:48:33.603781 containerd[1523]: time="2025-07-15T04:48:33.603226662Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 15 04:48:33.610151 containerd[1523]: time="2025-07-15T04:48:33.610112320Z" level=info msg="Container b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:48:33.616047 containerd[1523]: time="2025-07-15T04:48:33.615901955Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99\"" Jul 15 04:48:33.617898 containerd[1523]: time="2025-07-15T04:48:33.617835006Z" level=info msg="StartContainer for \"b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99\"" Jul 15 04:48:33.620349 containerd[1523]: time="2025-07-15T04:48:33.620277370Z" level=info msg="connecting to shim b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99" address="unix:///run/containerd/s/cc022626a23f752f91f293c98431f77c358ed7f2862080ae5f9f90423cd82f10" protocol=ttrpc version=3 Jul 15 04:48:33.650912 systemd[1]: Started cri-containerd-b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99.scope - libcontainer container b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99. Jul 15 04:48:33.677214 containerd[1523]: time="2025-07-15T04:48:33.677155047Z" level=info msg="StartContainer for \"b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99\" returns successfully" Jul 15 04:48:33.688186 systemd[1]: cri-containerd-b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99.scope: Deactivated successfully. 
Jul 15 04:48:33.690322 containerd[1523]: time="2025-07-15T04:48:33.690275733Z" level=info msg="received exit event container_id:\"b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99\" id:\"b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99\" pid:4573 exited_at:{seconds:1752554913 nanos:689974297}" Jul 15 04:48:33.690866 containerd[1523]: time="2025-07-15T04:48:33.690596128Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99\" id:\"b0b32587a4c47f21f86cbd43e8461e6223e76ca1a3c645fb21f1e743ca40cc99\" pid:4573 exited_at:{seconds:1752554913 nanos:689974297}" Jul 15 04:48:34.607529 containerd[1523]: time="2025-07-15T04:48:34.607455174Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 15 04:48:34.614736 containerd[1523]: time="2025-07-15T04:48:34.614683749Z" level=info msg="Container 8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:48:34.623433 containerd[1523]: time="2025-07-15T04:48:34.623390063Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b\"" Jul 15 04:48:34.624750 containerd[1523]: time="2025-07-15T04:48:34.624014014Z" level=info msg="StartContainer for \"8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b\"" Jul 15 04:48:34.625495 containerd[1523]: time="2025-07-15T04:48:34.625461553Z" level=info msg="connecting to shim 8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b" address="unix:///run/containerd/s/cc022626a23f752f91f293c98431f77c358ed7f2862080ae5f9f90423cd82f10" protocol=ttrpc version=3 Jul 15 04:48:34.653913 systemd[1]: Started cri-containerd-8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b.scope - libcontainer container 8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b. Jul 15 04:48:34.688561 containerd[1523]: time="2025-07-15T04:48:34.688515638Z" level=info msg="StartContainer for \"8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b\" returns successfully" Jul 15 04:48:34.689027 systemd[1]: cri-containerd-8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b.scope: Deactivated successfully. Jul 15 04:48:34.691272 containerd[1523]: time="2025-07-15T04:48:34.691246039Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b\" id:\"8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b\" pid:4618 exited_at:{seconds:1752554914 nanos:691025082}" Jul 15 04:48:34.691486 containerd[1523]: time="2025-07-15T04:48:34.691464115Z" level=info msg="received exit event container_id:\"8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b\" id:\"8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b\" pid:4618 exited_at:{seconds:1752554914 nanos:691025082}" Jul 15 04:48:34.709127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8cf93701022be17f136ab9704dbdf11ebaa0fe08ec99ad3b60f906b9e4262a3b-rootfs.mount: Deactivated successfully. 
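The containerd entries above walk through the Cilium init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs) being created, started, and exiting one after another inside sandbox 6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971. A sketch of how those containers could be enumerated over CRI, filtered by that sandbox ID, is below; the socket path and variable names are assumptions, and exited containers remain listed only until the kubelet garbage-collects them.

    // listsandbox.go - sketch: list the containers containerd records for the
    // cilium-zmr85 sandbox, the ones whose create/start/exit events appear above.
    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        sandboxID := "6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971"
        resp, err := rt.ListContainers(ctx, &runtimeapi.ListContainersRequest{
            Filter: &runtimeapi.ContainerFilter{PodSandboxId: sandboxID},
        })
        if err != nil {
            panic(err)
        }
        for _, c := range resp.Containers {
            // Expect the init containers in CONTAINER_EXITED state and
            // cilium-agent (once started) in CONTAINER_RUNNING.
            fmt.Println(c.Metadata.Name, c.State)
        }
    }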
Jul 15 04:48:34.840485 kubelet[2664]: I0715 04:48:34.840433 2664 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-07-15T04:48:34Z","lastTransitionTime":"2025-07-15T04:48:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jul 15 04:48:35.612958 containerd[1523]: time="2025-07-15T04:48:35.612906853Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 15 04:48:35.623902 containerd[1523]: time="2025-07-15T04:48:35.621850126Z" level=info msg="Container 4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:48:35.624672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2647261858.mount: Deactivated successfully. Jul 15 04:48:35.634599 containerd[1523]: time="2025-07-15T04:48:35.634545185Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a\"" Jul 15 04:48:35.635151 containerd[1523]: time="2025-07-15T04:48:35.635098417Z" level=info msg="StartContainer for \"4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a\"" Jul 15 04:48:35.635915 containerd[1523]: time="2025-07-15T04:48:35.635884846Z" level=info msg="connecting to shim 4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a" address="unix:///run/containerd/s/cc022626a23f752f91f293c98431f77c358ed7f2862080ae5f9f90423cd82f10" protocol=ttrpc version=3 Jul 15 04:48:35.655898 systemd[1]: Started cri-containerd-4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a.scope - libcontainer container 4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a. Jul 15 04:48:35.677775 systemd[1]: cri-containerd-4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a.scope: Deactivated successfully. Jul 15 04:48:35.678363 containerd[1523]: time="2025-07-15T04:48:35.677903889Z" level=info msg="received exit event container_id:\"4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a\" id:\"4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a\" pid:4657 exited_at:{seconds:1752554915 nanos:677705332}" Jul 15 04:48:35.679477 containerd[1523]: time="2025-07-15T04:48:35.679448347Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a\" id:\"4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a\" pid:4657 exited_at:{seconds:1752554915 nanos:677705332}" Jul 15 04:48:35.684680 containerd[1523]: time="2025-07-15T04:48:35.684637514Z" level=info msg="StartContainer for \"4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a\" returns successfully" Jul 15 04:48:35.695265 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4ae075b2bf0ff3b578a695435de2061363b6c7f06693c7acb460aee5b233486a-rootfs.mount: Deactivated successfully. 
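While the old Cilium agent is gone and the new one is still starting, the kubelet reports "Container runtime network not ready ... cni plugin not initialized" and flips the node's Ready condition to False (the setters.go entry above). A small sketch that reads that same condition back from the API server follows, assuming the node name "localhost" as in this log and the same kubeconfig assumption as earlier.

    // nodeready.go - sketch: read the node's Ready condition, the field the
    // kubelet sets to False in the entry above while the CNI plugin restarts.
    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        clientset, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        node, err := clientset.CoreV1().Nodes().Get(context.Background(), "localhost", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        for _, c := range node.Status.Conditions {
            if c.Type == corev1.NodeReady {
                // Expect Status=False, Reason=KubeletNotReady until the agent is up.
                fmt.Println(c.Status, c.Reason, c.Message)
            }
        }
    }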
Jul 15 04:48:36.622607 containerd[1523]: time="2025-07-15T04:48:36.622529089Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 15 04:48:36.647236 containerd[1523]: time="2025-07-15T04:48:36.647193225Z" level=info msg="Container 5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5: CDI devices from CRI Config.CDIDevices: []" Jul 15 04:48:36.658789 containerd[1523]: time="2025-07-15T04:48:36.658718905Z" level=info msg="CreateContainer within sandbox \"6002b480cd2514e893cf834482ea9c376b839eef3f019e8c1604b1c23bc2f971\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5\"" Jul 15 04:48:36.659391 containerd[1523]: time="2025-07-15T04:48:36.659273377Z" level=info msg="StartContainer for \"5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5\"" Jul 15 04:48:36.661710 containerd[1523]: time="2025-07-15T04:48:36.661668664Z" level=info msg="connecting to shim 5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5" address="unix:///run/containerd/s/cc022626a23f752f91f293c98431f77c358ed7f2862080ae5f9f90423cd82f10" protocol=ttrpc version=3 Jul 15 04:48:36.705178 systemd[1]: Started cri-containerd-5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5.scope - libcontainer container 5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5. Jul 15 04:48:36.756218 containerd[1523]: time="2025-07-15T04:48:36.756157389Z" level=info msg="StartContainer for \"5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5\" returns successfully" Jul 15 04:48:36.811114 containerd[1523]: time="2025-07-15T04:48:36.811065545Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5\" id:\"ee77c49f0583ba9b28576e6f7f2ab7fc1e65b9659c3307a92dc71ca0cd86c529\" pid:4723 exited_at:{seconds:1752554916 nanos:810783948}" Jul 15 04:48:37.052750 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce)) Jul 15 04:48:39.069863 containerd[1523]: time="2025-07-15T04:48:39.069774153Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5\" id:\"3e3a8df5856bc123d08d20f3f0ee9a4a4c53dcf7ac58f4f19ab7f0bd0690c314\" pid:4999 exit_status:1 exited_at:{seconds:1752554919 nanos:69159921}" Jul 15 04:48:39.979108 systemd-networkd[1431]: lxc_health: Link UP Jul 15 04:48:39.979334 systemd-networkd[1431]: lxc_health: Gained carrier Jul 15 04:48:40.782408 kubelet[2664]: I0715 04:48:40.782070 2664 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-zmr85" podStartSLOduration=8.782055112 podStartE2EDuration="8.782055112s" podCreationTimestamp="2025-07-15 04:48:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-15 04:48:37.638070972 +0000 UTC m=+84.378802822" watchObservedRunningTime="2025-07-15 04:48:40.782055112 +0000 UTC m=+87.522786962" Jul 15 04:48:41.195668 containerd[1523]: time="2025-07-15T04:48:41.194191743Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5\" id:\"4e40b16707d5da4907d6d0b85147467622b2ac5a26d6a5e96d088af589d0118c\" pid:5259 exited_at:{seconds:1752554921 
nanos:193898466}" Jul 15 04:48:41.384929 systemd-networkd[1431]: lxc_health: Gained IPv6LL Jul 15 04:48:43.309467 containerd[1523]: time="2025-07-15T04:48:43.309424343Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5\" id:\"039b584289debfd86140b0bfc6fb2ba6dcc133cc8cd6eb36abfefc5b00042cd8\" pid:5286 exited_at:{seconds:1752554923 nanos:309037628}" Jul 15 04:48:45.443359 containerd[1523]: time="2025-07-15T04:48:45.443302467Z" level=info msg="TaskExit event in podsandbox handler container_id:\"5f1804116db0cdaf1fa90723c0313cf93e9612b046018baabb4a2a6f47b23ef5\" id:\"395d935b61b2a661b4015beddf7179365773068460b7cb5c3e6b9d9e071d68d4\" pid:5317 exited_at:{seconds:1752554925 nanos:442988990}" Jul 15 04:48:45.448762 sshd[4455]: Connection closed by 10.0.0.1 port 35338 Jul 15 04:48:45.449194 sshd-session[4452]: pam_unix(sshd:session): session closed for user core Jul 15 04:48:45.452170 systemd[1]: sshd@25-10.0.0.81:22-10.0.0.1:35338.service: Deactivated successfully. Jul 15 04:48:45.453791 systemd[1]: session-26.scope: Deactivated successfully. Jul 15 04:48:45.456117 systemd-logind[1507]: Session 26 logged out. Waiting for processes to exit. Jul 15 04:48:45.457428 systemd-logind[1507]: Removed session 26.