Jul 6 23:46:12.868339 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jul 6 23:46:12.868359 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT Sun Jul 6 21:57:11 -00 2025
Jul 6 23:46:12.868368 kernel: KASLR enabled
Jul 6 23:46:12.868374 kernel: efi: EFI v2.7 by EDK II
Jul 6 23:46:12.868379 kernel: efi: SMBIOS 3.0=0xdced0000 MEMATTR=0xdb832018 ACPI 2.0=0xdbfd0018 RNG=0xdbfd0a18 MEMRESERVE=0xdb838218
Jul 6 23:46:12.868385 kernel: random: crng init done
Jul 6 23:46:12.868391 kernel: secureboot: Secure boot disabled
Jul 6 23:46:12.868397 kernel: ACPI: Early table checksum verification disabled
Jul 6 23:46:12.868402 kernel: ACPI: RSDP 0x00000000DBFD0018 000024 (v02 BOCHS )
Jul 6 23:46:12.868409 kernel: ACPI: XSDT 0x00000000DBFD0F18 000064 (v01 BOCHS BXPC 00000001 01000013)
Jul 6 23:46:12.868415 kernel: ACPI: FACP 0x00000000DBFD0B18 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:46:12.868420 kernel: ACPI: DSDT 0x00000000DBF0E018 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:46:12.868426 kernel: ACPI: APIC 0x00000000DBFD0C98 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:46:12.868432 kernel: ACPI: PPTT 0x00000000DBFD0098 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:46:12.868439 kernel: ACPI: GTDT 0x00000000DBFD0818 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:46:12.868446 kernel: ACPI: MCFG 0x00000000DBFD0A98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:46:12.868453 kernel: ACPI: SPCR 0x00000000DBFD0918 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:46:12.868459 kernel: ACPI: DBG2 0x00000000DBFD0998 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:46:12.868465 kernel: ACPI: IORT 0x00000000DBFD0198 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 6 23:46:12.868470 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Jul 6 23:46:12.868477 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jul 6 23:46:12.868483 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:46:12.868489 kernel: NODE_DATA(0) allocated [mem 0xdc965a00-0xdc96cfff]
Jul 6 23:46:12.868494 kernel: Zone ranges:
Jul 6 23:46:12.868500 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:46:12.868507 kernel: DMA32 empty
Jul 6 23:46:12.868513 kernel: Normal empty
Jul 6 23:46:12.868519 kernel: Device empty
Jul 6 23:46:12.868525 kernel: Movable zone start for each node
Jul 6 23:46:12.868531 kernel: Early memory node ranges
Jul 6 23:46:12.868537 kernel: node 0: [mem 0x0000000040000000-0x00000000db81ffff]
Jul 6 23:46:12.868543 kernel: node 0: [mem 0x00000000db820000-0x00000000db82ffff]
Jul 6 23:46:12.868549 kernel: node 0: [mem 0x00000000db830000-0x00000000dc09ffff]
Jul 6 23:46:12.868555 kernel: node 0: [mem 0x00000000dc0a0000-0x00000000dc2dffff]
Jul 6 23:46:12.868561 kernel: node 0: [mem 0x00000000dc2e0000-0x00000000dc36ffff]
Jul 6 23:46:12.868567 kernel: node 0: [mem 0x00000000dc370000-0x00000000dc45ffff]
Jul 6 23:46:12.868573 kernel: node 0: [mem 0x00000000dc460000-0x00000000dc52ffff]
Jul 6 23:46:12.868580 kernel: node 0: [mem 0x00000000dc530000-0x00000000dc5cffff]
Jul 6 23:46:12.868586 kernel: node 0: [mem 0x00000000dc5d0000-0x00000000dce1ffff]
Jul 6 23:46:12.868592 kernel: node 0: [mem 0x00000000dce20000-0x00000000dceaffff]
Jul 6 23:46:12.868601 kernel: node 0: [mem 0x00000000dceb0000-0x00000000dcebffff]
Jul 6 23:46:12.868608 kernel: node 0: [mem 0x00000000dcec0000-0x00000000dcfdffff]
Jul 6 23:46:12.868615 kernel: node 0: [mem 0x00000000dcfe0000-0x00000000dcffffff]
Jul 6 23:46:12.868622 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Jul 6 23:46:12.868629 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Jul 6 23:46:12.868635 kernel: cma: Reserved 16 MiB at 0x00000000d8000000 on node -1
Jul 6 23:46:12.868642 kernel: psci: probing for conduit method from ACPI.
Jul 6 23:46:12.868648 kernel: psci: PSCIv1.1 detected in firmware.
Jul 6 23:46:12.868654 kernel: psci: Using standard PSCI v0.2 function IDs
Jul 6 23:46:12.868660 kernel: psci: Trusted OS migration not required
Jul 6 23:46:12.868667 kernel: psci: SMC Calling Convention v1.1
Jul 6 23:46:12.868673 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jul 6 23:46:12.868679 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jul 6 23:46:12.868687 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jul 6 23:46:12.868694 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jul 6 23:46:12.868700 kernel: Detected PIPT I-cache on CPU0
Jul 6 23:46:12.868706 kernel: CPU features: detected: GIC system register CPU interface
Jul 6 23:46:12.868713 kernel: CPU features: detected: Spectre-v4
Jul 6 23:46:12.868719 kernel: CPU features: detected: Spectre-BHB
Jul 6 23:46:12.868725 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jul 6 23:46:12.868732 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jul 6 23:46:12.868738 kernel: CPU features: detected: ARM erratum 1418040
Jul 6 23:46:12.868744 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jul 6 23:46:12.868751 kernel: alternatives: applying boot alternatives
Jul 6 23:46:12.868758 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:46:12.868766 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 6 23:46:12.868772 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 6 23:46:12.868779 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 6 23:46:12.868785 kernel: Fallback order for Node 0: 0
Jul 6 23:46:12.868791 kernel: Built 1 zonelists, mobility grouping on. Total pages: 643072
Jul 6 23:46:12.868797 kernel: Policy zone: DMA
Jul 6 23:46:12.868803 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 6 23:46:12.868810 kernel: software IO TLB: SWIOTLB bounce buffer size adjusted to 2MB
Jul 6 23:46:12.868816 kernel: software IO TLB: area num 4.
Jul 6 23:46:12.868822 kernel: software IO TLB: SWIOTLB bounce buffer size roundup to 4MB
Jul 6 23:46:12.868829 kernel: software IO TLB: mapped [mem 0x00000000d7c00000-0x00000000d8000000] (4MB)
Jul 6 23:46:12.868836 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 6 23:46:12.868842 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 6 23:46:12.868849 kernel: rcu: RCU event tracing is enabled.
Jul 6 23:46:12.868856 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 6 23:46:12.868862 kernel: Trampoline variant of Tasks RCU enabled.
Jul 6 23:46:12.868868 kernel: Tracing variant of Tasks RCU enabled.
Jul 6 23:46:12.868875 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 6 23:46:12.868881 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 6 23:46:12.868887 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:46:12.868894 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 6 23:46:12.868900 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jul 6 23:46:12.868908 kernel: GICv3: 256 SPIs implemented
Jul 6 23:46:12.868914 kernel: GICv3: 0 Extended SPIs implemented
Jul 6 23:46:12.868920 kernel: Root IRQ handler: gic_handle_irq
Jul 6 23:46:12.868927 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jul 6 23:46:12.868933 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jul 6 23:46:12.868939 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jul 6 23:46:12.868945 kernel: ITS [mem 0x08080000-0x0809ffff]
Jul 6 23:46:12.868952 kernel: ITS@0x0000000008080000: allocated 8192 Devices @40110000 (indirect, esz 8, psz 64K, shr 1)
Jul 6 23:46:12.868958 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @40120000 (flat, esz 8, psz 64K, shr 1)
Jul 6 23:46:12.868965 kernel: GICv3: using LPI property table @0x0000000040130000
Jul 6 23:46:12.868971 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000040140000
Jul 6 23:46:12.868978 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 6 23:46:12.868985 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:46:12.868991 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jul 6 23:46:12.868998 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jul 6 23:46:12.869004 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jul 6 23:46:12.869011 kernel: arm-pv: using stolen time PV
Jul 6 23:46:12.869018 kernel: Console: colour dummy device 80x25
Jul 6 23:46:12.869024 kernel: ACPI: Core revision 20240827
Jul 6 23:46:12.869031 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jul 6 23:46:12.869037 kernel: pid_max: default: 32768 minimum: 301
Jul 6 23:46:12.869044 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 6 23:46:12.869052 kernel: landlock: Up and running.
Jul 6 23:46:12.869058 kernel: SELinux: Initializing.
Jul 6 23:46:12.869065 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:46:12.869071 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 6 23:46:12.869078 kernel: rcu: Hierarchical SRCU implementation.
Jul 6 23:46:12.869084 kernel: rcu: Max phase no-delay instances is 400.
Jul 6 23:46:12.869091 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 6 23:46:12.869097 kernel: Remapping and enabling EFI services.
Jul 6 23:46:12.869104 kernel: smp: Bringing up secondary CPUs ...
Jul 6 23:46:12.869116 kernel: Detected PIPT I-cache on CPU1
Jul 6 23:46:12.869131 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jul 6 23:46:12.869138 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000040150000
Jul 6 23:46:12.869148 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:46:12.869154 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jul 6 23:46:12.869161 kernel: Detected PIPT I-cache on CPU2
Jul 6 23:46:12.869168 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jul 6 23:46:12.869190 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040160000
Jul 6 23:46:12.869202 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:46:12.869208 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jul 6 23:46:12.869215 kernel: Detected PIPT I-cache on CPU3
Jul 6 23:46:12.869222 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jul 6 23:46:12.869229 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040170000
Jul 6 23:46:12.869236 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jul 6 23:46:12.869243 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jul 6 23:46:12.869250 kernel: smp: Brought up 1 node, 4 CPUs
Jul 6 23:46:12.869257 kernel: SMP: Total of 4 processors activated.
Jul 6 23:46:12.869265 kernel: CPU: All CPU(s) started at EL1
Jul 6 23:46:12.869272 kernel: CPU features: detected: 32-bit EL0 Support
Jul 6 23:46:12.869279 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jul 6 23:46:12.869286 kernel: CPU features: detected: Common not Private translations
Jul 6 23:46:12.869293 kernel: CPU features: detected: CRC32 instructions
Jul 6 23:46:12.869300 kernel: CPU features: detected: Enhanced Virtualization Traps
Jul 6 23:46:12.869307 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jul 6 23:46:12.869314 kernel: CPU features: detected: LSE atomic instructions
Jul 6 23:46:12.869320 kernel: CPU features: detected: Privileged Access Never
Jul 6 23:46:12.869329 kernel: CPU features: detected: RAS Extension Support
Jul 6 23:46:12.869336 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jul 6 23:46:12.869343 kernel: alternatives: applying system-wide alternatives
Jul 6 23:46:12.869350 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jul 6 23:46:12.869357 kernel: Memory: 2423968K/2572288K available (11136K kernel code, 2436K rwdata, 9076K rodata, 39488K init, 1038K bss, 125984K reserved, 16384K cma-reserved)
Jul 6 23:46:12.869364 kernel: devtmpfs: initialized
Jul 6 23:46:12.869371 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 6 23:46:12.869378 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 6 23:46:12.869384 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jul 6 23:46:12.869393 kernel: 0 pages in range for non-PLT usage
Jul 6 23:46:12.869400 kernel: 508432 pages in range for PLT usage
Jul 6 23:46:12.869406 kernel: pinctrl core: initialized pinctrl subsystem
Jul 6 23:46:12.869413 kernel: SMBIOS 3.0.0 present.
Jul 6 23:46:12.869420 kernel: DMI: QEMU KVM Virtual Machine, BIOS unknown 02/02/2022
Jul 6 23:46:12.869427 kernel: DMI: Memory slots populated: 1/1
Jul 6 23:46:12.869434 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 6 23:46:12.869441 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jul 6 23:46:12.869448 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jul 6 23:46:12.869456 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jul 6 23:46:12.869463 kernel: audit: initializing netlink subsys (disabled)
Jul 6 23:46:12.869470 kernel: audit: type=2000 audit(0.022:1): state=initialized audit_enabled=0 res=1
Jul 6 23:46:12.869477 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 6 23:46:12.869484 kernel: cpuidle: using governor menu
Jul 6 23:46:12.869490 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jul 6 23:46:12.869497 kernel: ASID allocator initialised with 32768 entries
Jul 6 23:46:12.869504 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 6 23:46:12.869511 kernel: Serial: AMBA PL011 UART driver
Jul 6 23:46:12.869519 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 6 23:46:12.869526 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jul 6 23:46:12.869533 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jul 6 23:46:12.869540 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jul 6 23:46:12.869547 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 6 23:46:12.869554 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jul 6 23:46:12.869561 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jul 6 23:46:12.869568 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jul 6 23:46:12.869575 kernel: ACPI: Added _OSI(Module Device)
Jul 6 23:46:12.869582 kernel: ACPI: Added _OSI(Processor Device)
Jul 6 23:46:12.869591 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 6 23:46:12.869597 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 6 23:46:12.869683 kernel: ACPI: Interpreter enabled
Jul 6 23:46:12.869692 kernel: ACPI: Using GIC for interrupt routing
Jul 6 23:46:12.869699 kernel: ACPI: MCFG table detected, 1 entries
Jul 6 23:46:12.869706 kernel: ACPI: CPU0 has been hot-added
Jul 6 23:46:12.869713 kernel: ACPI: CPU1 has been hot-added
Jul 6 23:46:12.869720 kernel: ACPI: CPU2 has been hot-added
Jul 6 23:46:12.869727 kernel: ACPI: CPU3 has been hot-added
Jul 6 23:46:12.869737 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jul 6 23:46:12.869744 kernel: printk: legacy console [ttyAMA0] enabled
Jul 6 23:46:12.869751 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 6 23:46:12.869891 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 6 23:46:12.869958 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jul 6 23:46:12.870022 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jul 6 23:46:12.870078 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jul 6 23:46:12.870152 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jul 6 23:46:12.870162 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jul 6 23:46:12.870239 kernel: PCI host bridge to bus 0000:00
Jul 6 23:46:12.870323 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jul 6 23:46:12.870379 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jul 6 23:46:12.870431 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jul 6 23:46:12.870482 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 6 23:46:12.870567 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jul 6 23:46:12.870637 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 6 23:46:12.870698 kernel: pci 0000:00:01.0: BAR 0 [io 0x0000-0x001f]
Jul 6 23:46:12.870885 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]
Jul 6 23:46:12.870949 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jul 6 23:46:12.871009 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jul 6 23:46:12.871067 kernel: pci 0000:00:01.0: BAR 1 [mem 0x10000000-0x10000fff]: assigned
Jul 6 23:46:12.871143 kernel: pci 0000:00:01.0: BAR 0 [io 0x1000-0x101f]: assigned
Jul 6 23:46:12.871222 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Jul 6 23:46:12.871278 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jul 6 23:46:12.871330 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Jul 6 23:46:12.871339 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jul 6 23:46:12.871346 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jul 6 23:46:12.871353 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jul 6 23:46:12.871363 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jul 6 23:46:12.871370 kernel: iommu: Default domain type: Translated
Jul 6 23:46:12.871377 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jul 6 23:46:12.871383 kernel: efivars: Registered efivars operations
Jul 6 23:46:12.871390 kernel: vgaarb: loaded
Jul 6 23:46:12.871397 kernel: clocksource: Switched to clocksource arch_sys_counter
Jul 6 23:46:12.871404 kernel: VFS: Disk quotas dquot_6.6.0
Jul 6 23:46:12.871411 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 6 23:46:12.871418 kernel: pnp: PnP ACPI init
Jul 6 23:46:12.871487 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Jul 6 23:46:12.871497 kernel: pnp: PnP ACPI: found 1 devices
Jul 6 23:46:12.871505 kernel: NET: Registered PF_INET protocol family
Jul 6 23:46:12.871512 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 6 23:46:12.871519 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 6 23:46:12.871526 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 6 23:46:12.871533 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 6 23:46:12.871540 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 6 23:46:12.871549 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 6 23:46:12.871556 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:46:12.871563 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 6 23:46:12.871570 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 6 23:46:12.871577 kernel: PCI: CLS 0 bytes, default 64
Jul 6 23:46:12.871584 kernel: kvm [1]: HYP mode not available
Jul 6 23:46:12.871591 kernel: Initialise system trusted keyrings
Jul 6 23:46:12.871597 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 6 23:46:12.871604 kernel: Key type asymmetric registered
Jul 6 23:46:12.871611 kernel: Asymmetric key parser 'x509' registered
Jul 6 23:46:12.871619 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 6 23:46:12.871626 kernel: io scheduler mq-deadline registered
Jul 6 23:46:12.872496 kernel: io scheduler kyber registered
Jul 6 23:46:12.872504 kernel: io scheduler bfq registered
Jul 6 23:46:12.872523 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jul 6 23:46:12.872531 kernel: ACPI: button: Power Button [PWRB]
Jul 6 23:46:12.872539 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jul 6 23:46:12.872623 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Jul 6 23:46:12.872634 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 6 23:46:12.872648 kernel: thunder_xcv, ver 1.0
Jul 6 23:46:12.872655 kernel: thunder_bgx, ver 1.0
Jul 6 23:46:12.872662 kernel: nicpf, ver 1.0
Jul 6 23:46:12.872669 kernel: nicvf, ver 1.0
Jul 6 23:46:12.872748 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jul 6 23:46:12.872805 kernel: rtc-efi rtc-efi.0: setting system clock to 2025-07-06T23:46:12 UTC (1751845572)
Jul 6 23:46:12.872814 kernel: hid: raw HID events driver (C) Jiri Kosina
Jul 6 23:46:12.873604 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available
Jul 6 23:46:12.873617 kernel: watchdog: NMI not fully supported
Jul 6 23:46:12.873625 kernel: watchdog: Hard watchdog permanently disabled
Jul 6 23:46:12.873634 kernel: NET: Registered PF_INET6 protocol family
Jul 6 23:46:12.873641 kernel: Segment Routing with IPv6
Jul 6 23:46:12.873649 kernel: In-situ OAM (IOAM) with IPv6
Jul 6 23:46:12.873656 kernel: NET: Registered PF_PACKET protocol family
Jul 6 23:46:12.873669 kernel: Key type dns_resolver registered
Jul 6 23:46:12.873676 kernel: registered taskstats version 1
Jul 6 23:46:12.873683 kernel: Loading compiled-in X.509 certificates
Jul 6 23:46:12.873691 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: f8c1d02496b1c3f2ac4a0c4b5b2a55d3dc0ca718'
Jul 6 23:46:12.873698 kernel: Demotion targets for Node 0: null
Jul 6 23:46:12.873706 kernel: Key type .fscrypt registered
Jul 6 23:46:12.873712 kernel: Key type fscrypt-provisioning registered
Jul 6 23:46:12.873719 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 6 23:46:12.873726 kernel: ima: Allocated hash algorithm: sha1
Jul 6 23:46:12.873733 kernel: ima: No architecture policies found
Jul 6 23:46:12.873740 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jul 6 23:46:12.873747 kernel: clk: Disabling unused clocks
Jul 6 23:46:12.873755 kernel: PM: genpd: Disabling unused power domains
Jul 6 23:46:12.873762 kernel: Warning: unable to open an initial console.
Jul 6 23:46:12.873769 kernel: Freeing unused kernel memory: 39488K
Jul 6 23:46:12.873776 kernel: Run /init as init process
Jul 6 23:46:12.873783 kernel: with arguments:
Jul 6 23:46:12.873789 kernel: /init
Jul 6 23:46:12.873796 kernel: with environment:
Jul 6 23:46:12.873803 kernel: HOME=/
Jul 6 23:46:12.873810 kernel: TERM=linux
Jul 6 23:46:12.873818 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 6 23:46:12.873826 systemd[1]: Successfully made /usr/ read-only.
Jul 6 23:46:12.873837 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 6 23:46:12.873845 systemd[1]: Detected virtualization kvm.
Jul 6 23:46:12.873852 systemd[1]: Detected architecture arm64.
Jul 6 23:46:12.873860 systemd[1]: Running in initrd.
Jul 6 23:46:12.873867 systemd[1]: No hostname configured, using default hostname.
Jul 6 23:46:12.873876 systemd[1]: Hostname set to .
Jul 6 23:46:12.873884 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:46:12.873891 systemd[1]: Queued start job for default target initrd.target.
Jul 6 23:46:12.873899 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:46:12.873907 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:46:12.873915 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 6 23:46:12.873923 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:46:12.873931 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:46:12.873940 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:46:12.873949 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:46:12.873957 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:46:12.873965 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:46:12.873972 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:46:12.873980 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:46:12.873987 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:46:12.873996 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:46:12.874004 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:46:12.874011 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:46:12.874019 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:46:12.874027 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:46:12.874034 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 6 23:46:12.874042 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:46:12.874049 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:46:12.874058 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:46:12.874066 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:46:12.874073 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:46:12.874081 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:46:12.874089 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:46:12.874097 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 6 23:46:12.874105 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:46:12.874112 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:46:12.874124 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:46:12.874136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:46:12.874143 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:46:12.874152 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:46:12.874159 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:46:12.874168 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:46:12.874205 systemd-journald[246]: Collecting audit messages is disabled.
Jul 6 23:46:12.874225 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:46:12.874233 systemd-journald[246]: Journal started
Jul 6 23:46:12.874254 systemd-journald[246]: Runtime Journal (/run/log/journal/b60d3d97b2aa4adeb0e64a1a2d7fbcc3) is 6M, max 48.5M, 42.4M free.
Jul 6 23:46:12.859409 systemd-modules-load[247]: Inserted module 'overlay'
Jul 6 23:46:12.878198 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:46:12.878226 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:46:12.880004 systemd-modules-load[247]: Inserted module 'br_netfilter'
Jul 6 23:46:12.882527 kernel: Bridge firewalling registered
Jul 6 23:46:12.882545 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:46:12.890278 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:46:12.891495 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:46:12.895795 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:46:12.899353 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:46:12.905742 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:46:12.909938 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:46:12.913347 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:46:12.915746 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:46:12.916397 systemd-tmpfiles[279]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 6 23:46:12.919033 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:46:12.920542 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:46:12.933455 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:46:12.944932 dracut-cmdline[286]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=d1bbaf8ae8f23de11dc703e14022523825f85f007c0c35003d7559228cbdda22
Jul 6 23:46:12.962014 systemd-resolved[288]: Positive Trust Anchors:
Jul 6 23:46:12.962030 systemd-resolved[288]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:46:12.962063 systemd-resolved[288]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:46:12.966824 systemd-resolved[288]: Defaulting to hostname 'linux'.
Jul 6 23:46:12.968258 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:46:12.972433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:46:13.034210 kernel: SCSI subsystem initialized Jul 6 23:46:13.039195 kernel: Loading iSCSI transport class v2.0-870. Jul 6 23:46:13.049216 kernel: iscsi: registered transport (tcp) Jul 6 23:46:13.064311 kernel: iscsi: registered transport (qla4xxx) Jul 6 23:46:13.064363 kernel: QLogic iSCSI HBA Driver Jul 6 23:46:13.081106 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:46:13.101985 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:46:13.104315 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:46:13.150823 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 6 23:46:13.153291 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 6 23:46:13.222236 kernel: raid6: neonx8 gen() 15666 MB/s Jul 6 23:46:13.239225 kernel: raid6: neonx4 gen() 15575 MB/s Jul 6 23:46:13.256227 kernel: raid6: neonx2 gen() 12733 MB/s Jul 6 23:46:13.273226 kernel: raid6: neonx1 gen() 10311 MB/s Jul 6 23:46:13.290226 kernel: raid6: int64x8 gen() 6821 MB/s Jul 6 23:46:13.307227 kernel: raid6: int64x4 gen() 7144 MB/s Jul 6 23:46:13.324227 kernel: raid6: int64x2 gen() 5970 MB/s Jul 6 23:46:13.341560 kernel: raid6: int64x1 gen() 4882 MB/s Jul 6 23:46:13.341634 kernel: raid6: using algorithm neonx8 gen() 15666 MB/s Jul 6 23:46:13.359432 kernel: raid6: .... 
xor() 11860 MB/s, rmw enabled Jul 6 23:46:13.359495 kernel: raid6: using neon recovery algorithm Jul 6 23:46:13.365676 kernel: xor: measuring software checksum speed Jul 6 23:46:13.365723 kernel: 8regs : 21376 MB/sec Jul 6 23:46:13.365742 kernel: 32regs : 20373 MB/sec Jul 6 23:46:13.366307 kernel: arm64_neon : 28003 MB/sec Jul 6 23:46:13.366320 kernel: xor: using function: arm64_neon (28003 MB/sec) Jul 6 23:46:13.426227 kernel: Btrfs loaded, zoned=no, fsverity=no Jul 6 23:46:13.434056 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:46:13.437248 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:46:13.470103 systemd-udevd[499]: Using default interface naming scheme 'v255'. Jul 6 23:46:13.479239 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:46:13.481891 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 6 23:46:13.510018 dracut-pre-trigger[501]: rd.md=0: removing MD RAID activation Jul 6 23:46:13.533584 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:46:13.537262 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:46:13.589234 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:46:13.593569 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 6 23:46:13.641280 kernel: virtio_blk virtio1: 1/0/0 default/read/poll queues Jul 6 23:46:13.641447 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jul 6 23:46:13.647225 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 6 23:46:13.647271 kernel: GPT:9289727 != 19775487 Jul 6 23:46:13.647281 kernel: GPT:Alternate GPT header not at the end of the disk. 
Jul 6 23:46:13.649538 kernel: GPT:9289727 != 19775487 Jul 6 23:46:13.649570 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 6 23:46:13.650351 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:46:13.653232 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:46:13.653347 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:46:13.656697 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:46:13.658620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:46:13.686307 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jul 6 23:46:13.687910 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:46:13.690189 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 6 23:46:13.706741 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jul 6 23:46:13.713134 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jul 6 23:46:13.714376 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jul 6 23:46:13.723584 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:46:13.724824 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:46:13.726960 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:46:13.729108 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:46:13.731970 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 6 23:46:13.733769 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 6 23:46:13.756346 disk-uuid[589]: Primary Header is updated. 
Jul 6 23:46:13.756346 disk-uuid[589]: Secondary Entries is updated. Jul 6 23:46:13.756346 disk-uuid[589]: Secondary Header is updated. Jul 6 23:46:13.760412 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:46:13.766207 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:46:14.775196 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jul 6 23:46:14.776610 disk-uuid[593]: The operation has completed successfully. Jul 6 23:46:14.800975 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 6 23:46:14.801069 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 6 23:46:14.825603 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 6 23:46:14.842201 sh[609]: Success Jul 6 23:46:14.855904 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 6 23:46:14.857793 kernel: device-mapper: uevent: version 1.0.3 Jul 6 23:46:14.857828 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jul 6 23:46:14.865215 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jul 6 23:46:14.892635 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 6 23:46:14.895013 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 6 23:46:14.907943 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jul 6 23:46:14.914198 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jul 6 23:46:14.914237 kernel: BTRFS: device fsid 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (621) Jul 6 23:46:14.917108 kernel: BTRFS info (device dm-0): first mount of filesystem 2cfafe0a-eb24-4e1d-b9c9-dec7de7e4c4d Jul 6 23:46:14.918121 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:46:14.918146 kernel: BTRFS info (device dm-0): using free-space-tree Jul 6 23:46:14.923038 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 6 23:46:14.924392 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:46:14.925857 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 6 23:46:14.926642 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 6 23:46:14.928255 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 6 23:46:14.949191 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (652) Jul 6 23:46:14.949240 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:46:14.951415 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:46:14.951452 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:46:14.958237 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:46:14.960208 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 6 23:46:14.964275 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 6 23:46:15.026220 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Jul 6 23:46:15.030161 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:46:15.087514 systemd-networkd[794]: lo: Link UP Jul 6 23:46:15.087525 systemd-networkd[794]: lo: Gained carrier Jul 6 23:46:15.088423 systemd-networkd[794]: Enumeration completed Jul 6 23:46:15.088816 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:46:15.088820 systemd-networkd[794]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:46:15.089206 systemd-networkd[794]: eth0: Link UP Jul 6 23:46:15.089209 systemd-networkd[794]: eth0: Gained carrier Jul 6 23:46:15.089221 systemd-networkd[794]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:46:15.089436 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:46:15.090647 systemd[1]: Reached target network.target - Network. 
Jul 6 23:46:15.119241 systemd-networkd[794]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:46:15.130108 ignition[699]: Ignition 2.21.0 Jul 6 23:46:15.130130 ignition[699]: Stage: fetch-offline Jul 6 23:46:15.130163 ignition[699]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:46:15.130182 ignition[699]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:46:15.130365 ignition[699]: parsed url from cmdline: "" Jul 6 23:46:15.130368 ignition[699]: no config URL provided Jul 6 23:46:15.130373 ignition[699]: reading system config file "/usr/lib/ignition/user.ign" Jul 6 23:46:15.130379 ignition[699]: no config at "/usr/lib/ignition/user.ign" Jul 6 23:46:15.130401 ignition[699]: op(1): [started] loading QEMU firmware config module Jul 6 23:46:15.130406 ignition[699]: op(1): executing: "modprobe" "qemu_fw_cfg" Jul 6 23:46:15.137512 ignition[699]: op(1): [finished] loading QEMU firmware config module Jul 6 23:46:15.176468 ignition[699]: parsing config with SHA512: 6f34fabda8619018410dc8000df4a11699a4c20feea79d0093858911fcab83c7b1b43342a6ab9892962926bdfad71ed3cdfd30b72dcc35569c30f33b82615d9a Jul 6 23:46:15.181363 unknown[699]: fetched base config from "system" Jul 6 23:46:15.181375 unknown[699]: fetched user config from "qemu" Jul 6 23:46:15.181750 ignition[699]: fetch-offline: fetch-offline passed Jul 6 23:46:15.182964 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:46:15.181801 ignition[699]: Ignition finished successfully Jul 6 23:46:15.184755 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jul 6 23:46:15.185671 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 6 23:46:15.221989 ignition[809]: Ignition 2.21.0 Jul 6 23:46:15.222006 ignition[809]: Stage: kargs Jul 6 23:46:15.222188 ignition[809]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:46:15.222199 ignition[809]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:46:15.223107 ignition[809]: kargs: kargs passed Jul 6 23:46:15.223168 ignition[809]: Ignition finished successfully Jul 6 23:46:15.226069 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 6 23:46:15.228873 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 6 23:46:15.254399 ignition[818]: Ignition 2.21.0 Jul 6 23:46:15.254415 ignition[818]: Stage: disks Jul 6 23:46:15.254558 ignition[818]: no configs at "/usr/lib/ignition/base.d" Jul 6 23:46:15.254567 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:46:15.259432 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 6 23:46:15.256618 ignition[818]: disks: disks passed Jul 6 23:46:15.260873 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 6 23:46:15.256680 ignition[818]: Ignition finished successfully Jul 6 23:46:15.262722 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 6 23:46:15.264403 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:46:15.266303 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:46:15.267927 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:46:15.270826 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 6 23:46:15.294207 systemd-fsck[828]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 6 23:46:15.360762 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 6 23:46:15.363071 systemd[1]: Mounting sysroot.mount - /sysroot... 
Jul 6 23:46:15.434201 kernel: EXT4-fs (vda9): mounted filesystem 8d88df29-f94d-4ab8-8fb6-af875603e6d4 r/w with ordered data mode. Quota mode: none. Jul 6 23:46:15.434663 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 6 23:46:15.435941 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 6 23:46:15.439386 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 6 23:46:15.441788 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 6 23:46:15.442830 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 6 23:46:15.442874 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 6 23:46:15.442897 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:46:15.453857 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 6 23:46:15.456392 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 6 23:46:15.462184 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (836) Jul 6 23:46:15.465094 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:46:15.465154 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:46:15.466189 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:46:15.469706 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jul 6 23:46:15.509760 initrd-setup-root[860]: cut: /sysroot/etc/passwd: No such file or directory Jul 6 23:46:15.513314 initrd-setup-root[867]: cut: /sysroot/etc/group: No such file or directory Jul 6 23:46:15.517952 initrd-setup-root[874]: cut: /sysroot/etc/shadow: No such file or directory Jul 6 23:46:15.521827 initrd-setup-root[881]: cut: /sysroot/etc/gshadow: No such file or directory Jul 6 23:46:15.602243 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 6 23:46:15.605299 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 6 23:46:15.606897 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 6 23:46:15.625236 kernel: BTRFS info (device vda6): last unmount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:46:15.643973 ignition[949]: INFO : Ignition 2.21.0 Jul 6 23:46:15.643973 ignition[949]: INFO : Stage: mount Jul 6 23:46:15.643973 ignition[949]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:46:15.643973 ignition[949]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:46:15.645041 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 6 23:46:15.649676 ignition[949]: INFO : mount: mount passed Jul 6 23:46:15.649676 ignition[949]: INFO : Ignition finished successfully Jul 6 23:46:15.650878 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 6 23:46:15.653202 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 6 23:46:15.914423 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 6 23:46:15.916267 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... 
Jul 6 23:46:15.954107 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 (254:6) scanned by mount (962) Jul 6 23:46:15.954167 kernel: BTRFS info (device vda6): first mount of filesystem f2591801-6ba1-4aa7-8261-bdb292e2060d Jul 6 23:46:15.954190 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jul 6 23:46:15.955627 kernel: BTRFS info (device vda6): using free-space-tree Jul 6 23:46:15.958265 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 6 23:46:15.988796 ignition[980]: INFO : Ignition 2.21.0 Jul 6 23:46:15.988796 ignition[980]: INFO : Stage: files Jul 6 23:46:15.990769 ignition[980]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:46:15.990769 ignition[980]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:46:15.990769 ignition[980]: DEBUG : files: compiled without relabeling support, skipping Jul 6 23:46:15.994499 ignition[980]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 6 23:46:15.994499 ignition[980]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 6 23:46:15.994499 ignition[980]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 6 23:46:15.998446 ignition[980]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 6 23:46:15.998446 ignition[980]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 6 23:46:15.998303 unknown[980]: wrote ssh authorized keys file for user: core Jul 6 23:46:16.002359 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 6 23:46:16.002359 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-arm64.tar.gz: attempt #1 Jul 6 23:46:16.059521 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 6 
23:46:16.280904 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-arm64.tar.gz" Jul 6 23:46:16.280904 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:46:16.284797 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-arm64.tar.gz: attempt #1 Jul 6 23:46:16.590560 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jul 6 23:46:16.657781 systemd-networkd[794]: eth0: Gained IPv6LL Jul 6 23:46:16.694649 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 6 23:46:16.696437 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jul 6 23:46:16.696437 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jul 6 23:46:16.696437 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:46:16.696437 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 6 23:46:16.696437 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:46:16.696437 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 6 23:46:16.696437 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:46:16.696437 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(8): 
[finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 6 23:46:16.725258 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:46:16.725258 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 6 23:46:16.725258 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:46:16.725258 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:46:16.725258 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:46:16.725258 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-arm64.raw: attempt #1 Jul 6 23:46:17.091998 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jul 6 23:46:17.683483 ignition[980]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-arm64.raw" Jul 6 23:46:17.683483 ignition[980]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jul 6 23:46:17.689761 ignition[980]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:46:17.689761 ignition[980]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 6 23:46:17.689761 ignition[980]: INFO : files: op(c): [finished] processing unit 
"prepare-helm.service" Jul 6 23:46:17.689761 ignition[980]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jul 6 23:46:17.689761 ignition[980]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:46:17.689761 ignition[980]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 6 23:46:17.689761 ignition[980]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jul 6 23:46:17.689761 ignition[980]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jul 6 23:46:17.708446 ignition[980]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:46:17.709972 ignition[980]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 6 23:46:17.709972 ignition[980]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jul 6 23:46:17.709972 ignition[980]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jul 6 23:46:17.709972 ignition[980]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jul 6 23:46:17.718819 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:46:17.718819 ignition[980]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 6 23:46:17.718819 ignition[980]: INFO : files: files passed Jul 6 23:46:17.718819 ignition[980]: INFO : Ignition finished successfully Jul 6 23:46:17.714025 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 6 23:46:17.716246 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Jul 6 23:46:17.718187 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 6 23:46:17.731997 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 6 23:46:17.734883 initrd-setup-root-after-ignition[1007]: grep: /sysroot/oem/oem-release: No such file or directory Jul 6 23:46:17.732092 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 6 23:46:17.737511 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:46:17.737511 initrd-setup-root-after-ignition[1010]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:46:17.741217 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 6 23:46:17.739888 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:46:17.742676 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 6 23:46:17.746021 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 6 23:46:17.779239 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 6 23:46:17.779349 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 6 23:46:17.781663 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 6 23:46:17.783576 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 6 23:46:17.785479 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 6 23:46:17.786290 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 6 23:46:17.800655 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:46:17.803255 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... 
Jul 6 23:46:17.825294 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:46:17.826697 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:46:17.828723 systemd[1]: Stopped target timers.target - Timer Units. Jul 6 23:46:17.830544 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 6 23:46:17.830674 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 6 23:46:17.833237 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 6 23:46:17.835268 systemd[1]: Stopped target basic.target - Basic System. Jul 6 23:46:17.836985 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 6 23:46:17.838765 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 6 23:46:17.840761 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 6 23:46:17.842762 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 6 23:46:17.844747 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 6 23:46:17.846715 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 6 23:46:17.848682 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 6 23:46:17.850676 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 6 23:46:17.852430 systemd[1]: Stopped target swap.target - Swaps. Jul 6 23:46:17.854050 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 6 23:46:17.854213 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 6 23:46:17.856605 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:46:17.857756 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:46:17.859775 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. 
Jul 6 23:46:17.859903 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:46:17.861790 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 6 23:46:17.861909 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 6 23:46:17.864648 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 6 23:46:17.864767 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 6 23:46:17.867240 systemd[1]: Stopped target paths.target - Path Units. Jul 6 23:46:17.868764 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 6 23:46:17.868880 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:46:17.870820 systemd[1]: Stopped target slices.target - Slice Units. Jul 6 23:46:17.872607 systemd[1]: Stopped target sockets.target - Socket Units. Jul 6 23:46:17.874429 systemd[1]: iscsid.socket: Deactivated successfully. Jul 6 23:46:17.874510 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 6 23:46:17.876056 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 6 23:46:17.876143 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 6 23:46:17.878033 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 6 23:46:17.878156 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 6 23:46:17.880445 systemd[1]: ignition-files.service: Deactivated successfully. Jul 6 23:46:17.880546 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 6 23:46:17.882909 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 6 23:46:17.885063 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 6 23:46:17.885943 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jul 6 23:46:17.886074 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:46:17.887939 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 6 23:46:17.888080 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 6 23:46:17.893216 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 6 23:46:17.897392 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 6 23:46:17.910207 ignition[1034]: INFO : Ignition 2.21.0 Jul 6 23:46:17.910207 ignition[1034]: INFO : Stage: umount Jul 6 23:46:17.911899 ignition[1034]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 6 23:46:17.911899 ignition[1034]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 6 23:46:17.913977 ignition[1034]: INFO : umount: umount passed Jul 6 23:46:17.913977 ignition[1034]: INFO : Ignition finished successfully Jul 6 23:46:17.912190 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 6 23:46:17.914364 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 6 23:46:17.914455 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 6 23:46:17.916033 systemd[1]: Stopped target network.target - Network. Jul 6 23:46:17.918324 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 6 23:46:17.918390 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 6 23:46:17.920056 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 6 23:46:17.920112 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 6 23:46:17.921768 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 6 23:46:17.921821 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 6 23:46:17.923659 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 6 23:46:17.923700 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. 
Jul 6 23:46:17.925517 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 6 23:46:17.927199 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 6 23:46:17.939990 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 6 23:46:17.940110 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 6 23:46:17.944169 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 6 23:46:17.944425 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 6 23:46:17.944511 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 6 23:46:17.947496 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 6 23:46:17.947991 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 6 23:46:17.949760 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 6 23:46:17.949801 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:46:17.953666 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 6 23:46:17.954646 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 6 23:46:17.954706 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 6 23:46:17.956821 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:46:17.956869 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:46:17.959535 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 6 23:46:17.959577 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 6 23:46:17.961757 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 6 23:46:17.961799 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. 
Jul 6 23:46:17.964686 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:46:17.969484 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 6 23:46:17.969547 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:46:17.982001 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 6 23:46:17.982145 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 6 23:46:17.986930 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 6 23:46:17.987089 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:46:17.989381 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 6 23:46:17.989469 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 6 23:46:17.991533 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 6 23:46:17.991586 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 6 23:46:17.992934 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 6 23:46:17.992963 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:46:17.994620 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 6 23:46:17.994671 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 6 23:46:17.997340 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 6 23:46:17.997395 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 6 23:46:18.000138 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 6 23:46:18.000243 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 6 23:46:18.003086 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 6 23:46:18.003146 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. 
Jul 6 23:46:18.005834 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 6 23:46:18.007719 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 6 23:46:18.007776 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:46:18.010815 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 6 23:46:18.010857 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:46:18.014008 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jul 6 23:46:18.014055 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:46:18.017519 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 6 23:46:18.017565 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:46:18.019820 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:46:18.019875 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:46:18.023946 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 6 23:46:18.023990 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Jul 6 23:46:18.024018 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 6 23:46:18.024047 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 6 23:46:18.024429 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 6 23:46:18.024521 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 6 23:46:18.026927 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Jul 6 23:46:18.029286 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 6 23:46:18.051263 systemd[1]: Switching root. Jul 6 23:46:18.094329 systemd-journald[246]: Journal stopped Jul 6 23:46:19.019871 systemd-journald[246]: Received SIGTERM from PID 1 (systemd). Jul 6 23:46:19.019931 kernel: SELinux: policy capability network_peer_controls=1 Jul 6 23:46:19.019943 kernel: SELinux: policy capability open_perms=1 Jul 6 23:46:19.019952 kernel: SELinux: policy capability extended_socket_class=1 Jul 6 23:46:19.019962 kernel: SELinux: policy capability always_check_network=0 Jul 6 23:46:19.019975 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 6 23:46:19.019986 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 6 23:46:19.019995 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 6 23:46:19.020004 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 6 23:46:19.020018 kernel: SELinux: policy capability userspace_initial_context=0 Jul 6 23:46:19.020027 kernel: audit: type=1403 audit(1751845578.363:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 6 23:46:19.020043 systemd[1]: Successfully loaded SELinux policy in 47.745ms. Jul 6 23:46:19.020064 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 9.769ms. Jul 6 23:46:19.020075 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 6 23:46:19.020085 systemd[1]: Detected virtualization kvm. Jul 6 23:46:19.020112 systemd[1]: Detected architecture arm64. Jul 6 23:46:19.020124 systemd[1]: Detected first boot. Jul 6 23:46:19.020134 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:46:19.020144 zram_generator::config[1078]: No configuration found. 
Jul 6 23:46:19.020154 kernel: NET: Registered PF_VSOCK protocol family Jul 6 23:46:19.020165 systemd[1]: Populated /etc with preset unit settings. Jul 6 23:46:19.020191 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 6 23:46:19.020204 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 6 23:46:19.020214 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 6 23:46:19.020223 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 6 23:46:19.020236 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 6 23:46:19.020246 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 6 23:46:19.020256 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 6 23:46:19.020268 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 6 23:46:19.020278 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 6 23:46:19.020288 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 6 23:46:19.020299 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 6 23:46:19.020308 systemd[1]: Created slice user.slice - User and Session Slice. Jul 6 23:46:19.020319 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:46:19.020330 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:46:19.020340 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 6 23:46:19.020350 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 6 23:46:19.020362 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jul 6 23:46:19.020372 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 6 23:46:19.020382 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jul 6 23:46:19.020392 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 6 23:46:19.020402 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:46:19.020413 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 6 23:46:19.020423 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 6 23:46:19.020433 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 6 23:46:19.020445 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 6 23:46:19.020455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 6 23:46:19.020465 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 6 23:46:19.020475 systemd[1]: Reached target slices.target - Slice Units. Jul 6 23:46:19.020485 systemd[1]: Reached target swap.target - Swaps. Jul 6 23:46:19.020495 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 6 23:46:19.020506 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 6 23:46:19.020516 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 6 23:46:19.020526 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 6 23:46:19.020539 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 6 23:46:19.020549 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 6 23:46:19.020560 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 6 23:46:19.020571 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... 
Jul 6 23:46:19.020581 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 6 23:46:19.020592 systemd[1]: Mounting media.mount - External Media Directory... Jul 6 23:46:19.020602 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 6 23:46:19.020611 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 6 23:46:19.020634 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 6 23:46:19.020660 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 6 23:46:19.020671 systemd[1]: Reached target machines.target - Containers. Jul 6 23:46:19.020682 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 6 23:46:19.020693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:46:19.020703 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 6 23:46:19.020713 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 6 23:46:19.020723 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:46:19.020733 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:46:19.020744 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:46:19.020754 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 6 23:46:19.020764 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:46:19.020774 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 6 23:46:19.020785 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Jul 6 23:46:19.020795 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 6 23:46:19.020804 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 6 23:46:19.020814 systemd[1]: Stopped systemd-fsck-usr.service. Jul 6 23:46:19.020826 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:46:19.020836 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 6 23:46:19.020846 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 6 23:46:19.020856 kernel: loop: module loaded Jul 6 23:46:19.020865 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 6 23:46:19.020875 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 6 23:46:19.020885 kernel: ACPI: bus type drm_connector registered Jul 6 23:46:19.020895 kernel: fuse: init (API version 7.41) Jul 6 23:46:19.020905 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 6 23:46:19.020917 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 6 23:46:19.020927 systemd[1]: verity-setup.service: Deactivated successfully. Jul 6 23:46:19.020937 systemd[1]: Stopped verity-setup.service. Jul 6 23:46:19.020969 systemd-journald[1146]: Collecting audit messages is disabled. Jul 6 23:46:19.020992 systemd-journald[1146]: Journal started Jul 6 23:46:19.021013 systemd-journald[1146]: Runtime Journal (/run/log/journal/b60d3d97b2aa4adeb0e64a1a2d7fbcc3) is 6M, max 48.5M, 42.4M free. Jul 6 23:46:18.778324 systemd[1]: Queued start job for default target multi-user.target. Jul 6 23:46:19.023360 systemd[1]: Started systemd-journald.service - Journal Service. 
Jul 6 23:46:18.802138 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 6 23:46:18.802535 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 6 23:46:19.024629 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 6 23:46:19.026005 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 6 23:46:19.027267 systemd[1]: Mounted media.mount - External Media Directory. Jul 6 23:46:19.028450 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 6 23:46:19.029741 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 6 23:46:19.030962 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 6 23:46:19.034205 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 6 23:46:19.035599 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 6 23:46:19.037194 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 6 23:46:19.037374 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 6 23:46:19.038736 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:46:19.038885 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:46:19.040259 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:46:19.040420 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:46:19.041774 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:46:19.041945 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:46:19.043383 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 6 23:46:19.043537 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 6 23:46:19.044827 systemd[1]: modprobe@loop.service: Deactivated successfully. 
Jul 6 23:46:19.044991 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:46:19.046390 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 6 23:46:19.048212 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 6 23:46:19.049774 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 6 23:46:19.052532 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 6 23:46:19.065319 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 6 23:46:19.067946 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 6 23:46:19.070057 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 6 23:46:19.071337 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 6 23:46:19.071365 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 6 23:46:19.073277 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jul 6 23:46:19.080009 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 6 23:46:19.081258 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:46:19.084118 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 6 23:46:19.086234 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 6 23:46:19.087359 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:46:19.088414 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Jul 6 23:46:19.089502 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:46:19.092304 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:46:19.094385 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 6 23:46:19.101261 systemd-journald[1146]: Time spent on flushing to /var/log/journal/b60d3d97b2aa4adeb0e64a1a2d7fbcc3 is 23.647ms for 891 entries. Jul 6 23:46:19.101261 systemd-journald[1146]: System Journal (/var/log/journal/b60d3d97b2aa4adeb0e64a1a2d7fbcc3) is 8M, max 195.6M, 187.6M free. Jul 6 23:46:19.132485 systemd-journald[1146]: Received client request to flush runtime journal. Jul 6 23:46:19.132531 kernel: loop0: detected capacity change from 0 to 138376 Jul 6 23:46:19.096519 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jul 6 23:46:19.099399 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 6 23:46:19.100843 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 6 23:46:19.103758 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 6 23:46:19.110582 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 6 23:46:19.114312 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 6 23:46:19.117156 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 6 23:46:19.135649 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 6 23:46:19.138311 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 6 23:46:19.139717 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:46:19.141490 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. 
Jul 6 23:46:19.141511 systemd-tmpfiles[1196]: ACLs are not supported, ignoring. Jul 6 23:46:19.148753 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jul 6 23:46:19.152947 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 6 23:46:19.160343 kernel: loop1: detected capacity change from 0 to 203944 Jul 6 23:46:19.166437 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 6 23:46:19.187404 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 6 23:46:19.190124 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 6 23:46:19.193298 kernel: loop2: detected capacity change from 0 to 107312 Jul 6 23:46:19.212878 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jul 6 23:46:19.212898 systemd-tmpfiles[1216]: ACLs are not supported, ignoring. Jul 6 23:46:19.215196 kernel: loop3: detected capacity change from 0 to 138376 Jul 6 23:46:19.218227 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 6 23:46:19.227200 kernel: loop4: detected capacity change from 0 to 203944 Jul 6 23:46:19.233191 kernel: loop5: detected capacity change from 0 to 107312 Jul 6 23:46:19.237249 (sd-merge)[1219]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 6 23:46:19.237616 (sd-merge)[1219]: Merged extensions into '/usr'. Jul 6 23:46:19.241205 systemd[1]: Reload requested from client PID 1195 ('systemd-sysext') (unit systemd-sysext.service)... Jul 6 23:46:19.241325 systemd[1]: Reloading... Jul 6 23:46:19.302310 zram_generator::config[1246]: No configuration found. Jul 6 23:46:19.365389 ldconfig[1190]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. 
Jul 6 23:46:19.380346 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:19.444442 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 6 23:46:19.444756 systemd[1]: Reloading finished in 203 ms. Jul 6 23:46:19.476209 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 6 23:46:19.477589 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 6 23:46:19.491531 systemd[1]: Starting ensure-sysext.service... Jul 6 23:46:19.493605 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 6 23:46:19.507117 systemd[1]: Reload requested from client PID 1280 ('systemctl') (unit ensure-sysext.service)... Jul 6 23:46:19.507136 systemd[1]: Reloading... Jul 6 23:46:19.513325 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 6 23:46:19.513367 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 6 23:46:19.513606 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 6 23:46:19.513794 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 6 23:46:19.514455 systemd-tmpfiles[1281]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 6 23:46:19.514657 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Jul 6 23:46:19.514699 systemd-tmpfiles[1281]: ACLs are not supported, ignoring. Jul 6 23:46:19.517312 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. 
Jul 6 23:46:19.517324 systemd-tmpfiles[1281]: Skipping /boot Jul 6 23:46:19.526715 systemd-tmpfiles[1281]: Detected autofs mount point /boot during canonicalization of boot. Jul 6 23:46:19.526733 systemd-tmpfiles[1281]: Skipping /boot Jul 6 23:46:19.562200 zram_generator::config[1308]: No configuration found. Jul 6 23:46:19.633115 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:19.696760 systemd[1]: Reloading finished in 189 ms. Jul 6 23:46:19.721793 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 6 23:46:19.727487 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 6 23:46:19.734478 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:46:19.736732 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 6 23:46:19.739307 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 6 23:46:19.742303 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 6 23:46:19.746801 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 6 23:46:19.751335 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 6 23:46:19.756826 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 6 23:46:19.759905 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:46:19.766031 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:46:19.769495 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jul 6 23:46:19.774506 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:46:19.775669 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:46:19.775798 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:46:19.778275 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 6 23:46:19.780915 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:46:19.781092 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:46:19.786318 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:46:19.786520 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:46:19.791499 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:46:19.793491 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:46:19.796480 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:46:19.798297 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:46:19.798423 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 6 23:46:19.802059 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:46:19.809017 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Jul 6 23:46:19.809511 systemd-udevd[1349]: Using default interface naming scheme 'v255'. Jul 6 23:46:19.814443 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 6 23:46:19.816554 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:46:19.816794 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:46:19.818431 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:46:19.818588 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:46:19.820204 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:46:19.820349 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:46:19.824204 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:46:19.833290 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 6 23:46:19.834635 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 6 23:46:19.836655 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:46:19.838477 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:46:19.847413 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:46:19.850441 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:46:19.850489 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 6 23:46:19.850551 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:46:19.850868 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:46:19.852576 systemd[1]: Finished ensure-sysext.service. Jul 6 23:46:19.855059 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:46:19.855243 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:46:19.856785 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:46:19.856946 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:46:19.858683 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:46:19.858841 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:46:19.860512 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:46:19.860660 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:46:19.863940 augenrules[1394]: No rules Jul 6 23:46:19.863983 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 6 23:46:19.865777 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:46:19.865984 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:46:19.880698 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:46:19.882204 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:46:19.882270 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:46:19.888982 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... 
Jul 6 23:46:19.929447 systemd-resolved[1348]: Positive Trust Anchors:
Jul 6 23:46:19.929464 systemd-resolved[1348]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:46:19.929497 systemd-resolved[1348]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:46:19.937897 systemd-resolved[1348]: Defaulting to hostname 'linux'.
Jul 6 23:46:19.940319 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:46:19.942586 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:46:19.950283 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped.
Jul 6 23:46:19.986987 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:46:19.990442 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 6 23:46:20.018631 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 6 23:46:20.038262 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 6 23:46:20.040079 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:46:20.041788 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 6 23:46:20.043388 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 6 23:46:20.045216 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 6 23:46:20.046553 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 6 23:46:20.046586 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:46:20.047755 systemd[1]: Reached target time-set.target - System Time Set.
Jul 6 23:46:20.049049 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 6 23:46:20.050391 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 6 23:46:20.051868 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:46:20.054245 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 6 23:46:20.056722 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 6 23:46:20.060417 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 6 23:46:20.061974 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 6 23:46:20.063292 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 6 23:46:20.063711 systemd-networkd[1430]: lo: Link UP
Jul 6 23:46:20.063941 systemd-networkd[1430]: lo: Gained carrier
Jul 6 23:46:20.064812 systemd-networkd[1430]: Enumeration completed
Jul 6 23:46:20.068634 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 6 23:46:20.070107 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 6 23:46:20.072916 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:46:20.074742 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:46:20.074751 systemd-networkd[1430]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:46:20.075434 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 6 23:46:20.076781 systemd[1]: Reached target network.target - Network.
Jul 6 23:46:20.077318 systemd-networkd[1430]: eth0: Link UP
Jul 6 23:46:20.077559 systemd-networkd[1430]: eth0: Gained carrier
Jul 6 23:46:20.077626 systemd-networkd[1430]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:46:20.079239 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:46:20.080273 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:46:20.081317 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:46:20.081385 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 6 23:46:20.084388 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 6 23:46:20.086344 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 6 23:46:20.089750 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:46:20.094205 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 6 23:46:20.098309 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 6 23:46:20.099239 systemd-networkd[1430]: eth0: DHCPv4 address 10.0.0.139/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:46:20.099853 systemd-timesyncd[1432]: Network configuration changed, trying to establish connection.
Jul 6 23:46:20.593715 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 6 23:46:20.593785 systemd-timesyncd[1432]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 6 23:46:20.593825 systemd-timesyncd[1432]: Initial clock synchronization to Sun 2025-07-06 23:46:20.593706 UTC.
Jul 6 23:46:20.594459 systemd-resolved[1348]: Clock change detected. Flushing caches.
Jul 6 23:46:20.598997 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 6 23:46:20.602007 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 6 23:46:20.607719 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 6 23:46:20.615920 jq[1465]: false
Jul 6 23:46:20.618802 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 6 23:46:20.622514 extend-filesystems[1466]: Found /dev/vda6
Jul 6 23:46:20.624825 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 6 23:46:20.627714 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 6 23:46:20.631300 extend-filesystems[1466]: Found /dev/vda9
Jul 6 23:46:20.632684 extend-filesystems[1466]: Checking size of /dev/vda9
Jul 6 23:46:20.632737 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 6 23:46:20.637609 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 6 23:46:20.638075 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 6 23:46:20.638833 systemd[1]: Starting update-engine.service - Update Engine...
Jul 6 23:46:20.642019 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 6 23:46:20.644054 extend-filesystems[1466]: Resized partition /dev/vda9
Jul 6 23:46:20.650607 extend-filesystems[1491]: resize2fs 1.47.2 (1-Jan-2025)
Jul 6 23:46:20.652614 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 6 23:46:20.653871 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:46:20.656182 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 6 23:46:20.657081 jq[1488]: true
Jul 6 23:46:20.658922 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 6 23:46:20.659267 systemd[1]: motdgen.service: Deactivated successfully.
Jul 6 23:46:20.659447 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 6 23:46:20.668007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 6 23:46:20.668252 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 6 23:46:20.686218 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 6 23:46:20.690629 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 6 23:46:20.698223 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:46:20.705868 jq[1496]: true
Jul 6 23:46:20.709995 extend-filesystems[1491]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 6 23:46:20.709995 extend-filesystems[1491]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 6 23:46:20.709995 extend-filesystems[1491]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 6 23:46:20.714637 extend-filesystems[1466]: Resized filesystem in /dev/vda9
Jul 6 23:46:20.710945 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 6 23:46:20.711169 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 6 23:46:20.717800 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 6 23:46:20.740642 update_engine[1487]: I20250706 23:46:20.740480 1487 main.cc:92] Flatcar Update Engine starting
Jul 6 23:46:20.743552 dbus-daemon[1463]: [system] SELinux support is enabled
Jul 6 23:46:20.743768 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 6 23:46:20.747037 tar[1495]: linux-arm64/helm
Jul 6 23:46:20.747553 update_engine[1487]: I20250706 23:46:20.747216 1487 update_check_scheduler.cc:74] Next update check in 4m36s
Jul 6 23:46:20.749163 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 6 23:46:20.749191 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 6 23:46:20.750542 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 6 23:46:20.750566 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 6 23:46:20.751897 systemd[1]: Started update-engine.service - Update Engine.
Jul 6 23:46:20.754917 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 6 23:46:20.770927 systemd-logind[1478]: Watching system buttons on /dev/input/event0 (Power Button)
Jul 6 23:46:20.771308 systemd-logind[1478]: New seat seat0.
Jul 6 23:46:20.791920 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 6 23:46:20.816878 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:46:20.832388 locksmithd[1526]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 6 23:46:20.836579 bash[1534]: Updated "/home/core/.ssh/authorized_keys"
Jul 6 23:46:20.840648 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 6 23:46:20.843203 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 6 23:46:20.930912 containerd[1497]: time="2025-07-06T23:46:20Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Jul 6 23:46:20.934582 containerd[1497]: time="2025-07-06T23:46:20.933905147Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Jul 6 23:46:20.944696 containerd[1497]: time="2025-07-06T23:46:20.944640387Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.04µs"
Jul 6 23:46:20.944696 containerd[1497]: time="2025-07-06T23:46:20.944685587Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Jul 6 23:46:20.944793 containerd[1497]: time="2025-07-06T23:46:20.944707907Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Jul 6 23:46:20.944905 containerd[1497]: time="2025-07-06T23:46:20.944882627Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Jul 6 23:46:20.944930 containerd[1497]: time="2025-07-06T23:46:20.944905267Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Jul 6 23:46:20.944949 containerd[1497]: time="2025-07-06T23:46:20.944931507Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:46:20.944997 containerd[1497]: time="2025-07-06T23:46:20.944980547Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jul 6 23:46:20.944997 containerd[1497]: time="2025-07-06T23:46:20.944994507Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:46:20.945250 containerd[1497]: time="2025-07-06T23:46:20.945225867Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jul 6 23:46:20.945250 containerd[1497]: time="2025-07-06T23:46:20.945247667Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:46:20.945294 containerd[1497]: time="2025-07-06T23:46:20.945259467Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jul 6 23:46:20.945294 containerd[1497]: time="2025-07-06T23:46:20.945268187Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jul 6 23:46:20.945361 containerd[1497]: time="2025-07-06T23:46:20.945341387Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jul 6 23:46:20.945560 containerd[1497]: time="2025-07-06T23:46:20.945538427Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:46:20.945612 containerd[1497]: time="2025-07-06T23:46:20.945594507Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jul 6 23:46:20.945612 containerd[1497]: time="2025-07-06T23:46:20.945609667Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jul 6 23:46:20.945665 containerd[1497]: time="2025-07-06T23:46:20.945646147Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jul 6 23:46:20.945891 containerd[1497]: time="2025-07-06T23:46:20.945871867Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jul 6 23:46:20.945958 containerd[1497]: time="2025-07-06T23:46:20.945940347Z" level=info msg="metadata content store policy set" policy=shared
Jul 6 23:46:21.046792 containerd[1497]: time="2025-07-06T23:46:21.046720707Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jul 6 23:46:21.046900 containerd[1497]: time="2025-07-06T23:46:21.046805507Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jul 6 23:46:21.046900 containerd[1497]: time="2025-07-06T23:46:21.046822627Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jul 6 23:46:21.046900 containerd[1497]: time="2025-07-06T23:46:21.046849667Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jul 6 23:46:21.046900 containerd[1497]: time="2025-07-06T23:46:21.046862827Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jul 6 23:46:21.046900 containerd[1497]: time="2025-07-06T23:46:21.046877547Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jul 6 23:46:21.046900 containerd[1497]: time="2025-07-06T23:46:21.046889667Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jul 6 23:46:21.046900 containerd[1497]: time="2025-07-06T23:46:21.046903427Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jul 6 23:46:21.047047 containerd[1497]: time="2025-07-06T23:46:21.046916667Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jul 6 23:46:21.047047 containerd[1497]: time="2025-07-06T23:46:21.046933947Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jul 6 23:46:21.047047 containerd[1497]: time="2025-07-06T23:46:21.046943987Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jul 6 23:46:21.047047 containerd[1497]: time="2025-07-06T23:46:21.046956867Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047110627Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047139267Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047157227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047168667Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047178787Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047202107Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047213627Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047223347Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047239707Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047250667Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047260947Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047557907Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047601747Z" level=info msg="Start snapshots syncer"
Jul 6 23:46:21.047699 containerd[1497]: time="2025-07-06T23:46:21.047633507Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jul 6 23:46:21.050325 containerd[1497]: time="2025-07-06T23:46:21.050260027Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jul 6 23:46:21.050502 containerd[1497]: time="2025-07-06T23:46:21.050473267Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jul 6 23:46:21.050758 containerd[1497]: time="2025-07-06T23:46:21.050705907Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jul 6 23:46:21.051096 containerd[1497]: time="2025-07-06T23:46:21.051059307Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jul 6 23:46:21.051143 containerd[1497]: time="2025-07-06T23:46:21.051109067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jul 6 23:46:21.051143 containerd[1497]: time="2025-07-06T23:46:21.051124427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jul 6 23:46:21.051143 containerd[1497]: time="2025-07-06T23:46:21.051136547Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jul 6 23:46:21.051195 containerd[1497]: time="2025-07-06T23:46:21.051149067Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jul 6 23:46:21.051195 containerd[1497]: time="2025-07-06T23:46:21.051160147Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jul 6 23:46:21.051195 containerd[1497]: time="2025-07-06T23:46:21.051181307Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jul 6 23:46:21.051245 containerd[1497]: time="2025-07-06T23:46:21.051208467Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jul 6 23:46:21.051245 containerd[1497]: time="2025-07-06T23:46:21.051220547Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jul 6 23:46:21.051245 containerd[1497]: time="2025-07-06T23:46:21.051231147Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jul 6 23:46:21.051295 containerd[1497]: time="2025-07-06T23:46:21.051279787Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:46:21.051313 containerd[1497]: time="2025-07-06T23:46:21.051296667Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jul 6 23:46:21.051313 containerd[1497]: time="2025-07-06T23:46:21.051307227Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:46:21.051432 containerd[1497]: time="2025-07-06T23:46:21.051409667Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jul 6 23:46:21.051457 containerd[1497]: time="2025-07-06T23:46:21.051440227Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jul 6 23:46:21.051477 containerd[1497]: time="2025-07-06T23:46:21.051459227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jul 6 23:46:21.051477 containerd[1497]: time="2025-07-06T23:46:21.051471827Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jul 6 23:46:21.051592 containerd[1497]: time="2025-07-06T23:46:21.051566907Z" level=info msg="runtime interface created"
Jul 6 23:46:21.051592 containerd[1497]: time="2025-07-06T23:46:21.051589507Z" level=info msg="created NRI interface"
Jul 6 23:46:21.051628 containerd[1497]: time="2025-07-06T23:46:21.051599227Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jul 6 23:46:21.051628 containerd[1497]: time="2025-07-06T23:46:21.051612147Z" level=info msg="Connect containerd service"
Jul 6 23:46:21.051737 containerd[1497]: time="2025-07-06T23:46:21.051707467Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:46:21.053059 containerd[1497]: time="2025-07-06T23:46:21.053020067Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:46:21.122553 tar[1495]: linux-arm64/LICENSE
Jul 6 23:46:21.122665 tar[1495]: linux-arm64/README.md
Jul 6 23:46:21.143879 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jul 6 23:46:21.177642 containerd[1497]: time="2025-07-06T23:46:21.177548507Z" level=info msg="Start subscribing containerd event"
Jul 6 23:46:21.177642 containerd[1497]: time="2025-07-06T23:46:21.177644587Z" level=info msg="Start recovering state"
Jul 6 23:46:21.177769 containerd[1497]: time="2025-07-06T23:46:21.177731147Z" level=info msg="Start event monitor"
Jul 6 23:46:21.177769 containerd[1497]: time="2025-07-06T23:46:21.177746947Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:46:21.177769 containerd[1497]: time="2025-07-06T23:46:21.177755947Z" level=info msg="Start streaming server"
Jul 6 23:46:21.177769 containerd[1497]: time="2025-07-06T23:46:21.177764467Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jul 6 23:46:21.177863 containerd[1497]: time="2025-07-06T23:46:21.177771947Z" level=info msg="runtime interface starting up..."
Jul 6 23:46:21.177863 containerd[1497]: time="2025-07-06T23:46:21.177777467Z" level=info msg="starting plugins..."
Jul 6 23:46:21.177863 containerd[1497]: time="2025-07-06T23:46:21.177791267Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jul 6 23:46:21.178370 containerd[1497]: time="2025-07-06T23:46:21.178329707Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:46:21.178419 containerd[1497]: time="2025-07-06T23:46:21.178379267Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:46:21.178546 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:46:21.179912 containerd[1497]: time="2025-07-06T23:46:21.179880467Z" level=info msg="containerd successfully booted in 0.249683s"
Jul 6 23:46:21.499862 sshd_keygen[1499]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 6 23:46:21.520387 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 6 23:46:21.523906 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 6 23:46:21.544554 systemd[1]: issuegen.service: Deactivated successfully.
Jul 6 23:46:21.544785 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 6 23:46:21.547822 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 6 23:46:21.583829 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 6 23:46:21.586701 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 6 23:46:21.588810 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0.
Jul 6 23:46:21.590111 systemd[1]: Reached target getty.target - Login Prompts.
Jul 6 23:46:22.077741 systemd-networkd[1430]: eth0: Gained IPv6LL
Jul 6 23:46:22.080128 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:46:22.081931 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:46:22.084468 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 6 23:46:22.087099 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:46:22.098484 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:46:22.122653 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:46:22.124249 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 6 23:46:22.124444 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 6 23:46:22.126810 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:46:22.685677 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:46:22.687280 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:46:22.689186 systemd[1]: Startup finished in 2.171s (kernel) + 5.720s (initrd) + 3.887s (userspace) = 11.778s.
Jul 6 23:46:22.695029 (kubelet)[1608]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:46:23.182843 kubelet[1608]: E0706 23:46:23.182736 1608 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:46:23.185304 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:46:23.185442 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:46:23.185753 systemd[1]: kubelet.service: Consumed 863ms CPU time, 258.1M memory peak.
Jul 6 23:46:26.335180 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:46:26.336417 systemd[1]: Started sshd@0-10.0.0.139:22-10.0.0.1:52620.service - OpenSSH per-connection server daemon (10.0.0.1:52620).
Jul 6 23:46:26.425268 sshd[1621]: Accepted publickey for core from 10.0.0.1 port 52620 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:46:26.427813 sshd-session[1621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:46:26.434915 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:46:26.435943 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:46:26.443268 systemd-logind[1478]: New session 1 of user core. Jul 6 23:46:26.466613 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 6 23:46:26.469538 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 6 23:46:26.488782 (systemd)[1625]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 6 23:46:26.491061 systemd-logind[1478]: New session c1 of user core. Jul 6 23:46:26.606213 systemd[1625]: Queued start job for default target default.target. Jul 6 23:46:26.623639 systemd[1625]: Created slice app.slice - User Application Slice. Jul 6 23:46:26.623671 systemd[1625]: Reached target paths.target - Paths. Jul 6 23:46:26.623710 systemd[1625]: Reached target timers.target - Timers. Jul 6 23:46:26.625044 systemd[1625]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 6 23:46:26.634712 systemd[1625]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 6 23:46:26.634784 systemd[1625]: Reached target sockets.target - Sockets. Jul 6 23:46:26.634833 systemd[1625]: Reached target basic.target - Basic System. Jul 6 23:46:26.634858 systemd[1625]: Reached target default.target - Main User Target. Jul 6 23:46:26.634887 systemd[1625]: Startup finished in 137ms. Jul 6 23:46:26.635221 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 6 23:46:26.636703 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 6 23:46:26.696282 systemd[1]: Started sshd@1-10.0.0.139:22-10.0.0.1:52632.service - OpenSSH per-connection server daemon (10.0.0.1:52632). Jul 6 23:46:26.757188 sshd[1636]: Accepted publickey for core from 10.0.0.1 port 52632 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:46:26.758700 sshd-session[1636]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:26.762922 systemd-logind[1478]: New session 2 of user core. 
Jul 6 23:46:26.777807 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 6 23:46:26.828626 sshd[1638]: Connection closed by 10.0.0.1 port 52632 Jul 6 23:46:26.829051 sshd-session[1636]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:26.849210 systemd[1]: sshd@1-10.0.0.139:22-10.0.0.1:52632.service: Deactivated successfully. Jul 6 23:46:26.852053 systemd[1]: session-2.scope: Deactivated successfully. Jul 6 23:46:26.852805 systemd-logind[1478]: Session 2 logged out. Waiting for processes to exit. Jul 6 23:46:26.855361 systemd[1]: Started sshd@2-10.0.0.139:22-10.0.0.1:52642.service - OpenSSH per-connection server daemon (10.0.0.1:52642). Jul 6 23:46:26.856089 systemd-logind[1478]: Removed session 2. Jul 6 23:46:26.909663 sshd[1644]: Accepted publickey for core from 10.0.0.1 port 52642 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:46:26.911214 sshd-session[1644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:26.916103 systemd-logind[1478]: New session 3 of user core. Jul 6 23:46:26.928770 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 6 23:46:26.979070 sshd[1646]: Connection closed by 10.0.0.1 port 52642 Jul 6 23:46:26.980102 sshd-session[1644]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:26.989179 systemd[1]: sshd@2-10.0.0.139:22-10.0.0.1:52642.service: Deactivated successfully. Jul 6 23:46:26.992314 systemd[1]: session-3.scope: Deactivated successfully. Jul 6 23:46:26.993204 systemd-logind[1478]: Session 3 logged out. Waiting for processes to exit. Jul 6 23:46:26.999917 systemd[1]: Started sshd@3-10.0.0.139:22-10.0.0.1:52644.service - OpenSSH per-connection server daemon (10.0.0.1:52644). Jul 6 23:46:27.000665 systemd-logind[1478]: Removed session 3. 
Jul 6 23:46:27.052719 sshd[1652]: Accepted publickey for core from 10.0.0.1 port 52644 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:46:27.053948 sshd-session[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:27.058212 systemd-logind[1478]: New session 4 of user core. Jul 6 23:46:27.064814 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 6 23:46:27.116082 sshd[1654]: Connection closed by 10.0.0.1 port 52644 Jul 6 23:46:27.116585 sshd-session[1652]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:27.130750 systemd[1]: sshd@3-10.0.0.139:22-10.0.0.1:52644.service: Deactivated successfully. Jul 6 23:46:27.132295 systemd[1]: session-4.scope: Deactivated successfully. Jul 6 23:46:27.134764 systemd-logind[1478]: Session 4 logged out. Waiting for processes to exit. Jul 6 23:46:27.136721 systemd[1]: Started sshd@4-10.0.0.139:22-10.0.0.1:52650.service - OpenSSH per-connection server daemon (10.0.0.1:52650). Jul 6 23:46:27.137558 systemd-logind[1478]: Removed session 4. Jul 6 23:46:27.194798 sshd[1660]: Accepted publickey for core from 10.0.0.1 port 52650 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:46:27.196231 sshd-session[1660]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:27.200378 systemd-logind[1478]: New session 5 of user core. Jul 6 23:46:27.210767 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 6 23:46:27.279496 sudo[1663]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 6 23:46:27.279820 sudo[1663]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:46:27.298542 sudo[1663]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:27.302680 sshd[1662]: Connection closed by 10.0.0.1 port 52650 Jul 6 23:46:27.303094 sshd-session[1660]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:27.322295 systemd[1]: sshd@4-10.0.0.139:22-10.0.0.1:52650.service: Deactivated successfully. Jul 6 23:46:27.325643 systemd[1]: session-5.scope: Deactivated successfully. Jul 6 23:46:27.326603 systemd-logind[1478]: Session 5 logged out. Waiting for processes to exit. Jul 6 23:46:27.329373 systemd[1]: Started sshd@5-10.0.0.139:22-10.0.0.1:52664.service - OpenSSH per-connection server daemon (10.0.0.1:52664). Jul 6 23:46:27.331028 systemd-logind[1478]: Removed session 5. Jul 6 23:46:27.389507 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 52664 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:46:27.390957 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:27.396182 systemd-logind[1478]: New session 6 of user core. Jul 6 23:46:27.408801 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 6 23:46:27.461770 sudo[1673]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 6 23:46:27.462404 sudo[1673]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:46:27.537482 sudo[1673]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:27.542693 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 6 23:46:27.542960 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:46:27.553150 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 6 23:46:27.597094 augenrules[1695]: No rules Jul 6 23:46:27.598671 systemd[1]: audit-rules.service: Deactivated successfully. Jul 6 23:46:27.599770 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 6 23:46:27.600828 sudo[1672]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:27.602314 sshd[1671]: Connection closed by 10.0.0.1 port 52664 Jul 6 23:46:27.602770 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:27.611381 systemd[1]: sshd@5-10.0.0.139:22-10.0.0.1:52664.service: Deactivated successfully. Jul 6 23:46:27.614220 systemd[1]: session-6.scope: Deactivated successfully. Jul 6 23:46:27.615052 systemd-logind[1478]: Session 6 logged out. Waiting for processes to exit. Jul 6 23:46:27.618268 systemd[1]: Started sshd@6-10.0.0.139:22-10.0.0.1:52680.service - OpenSSH per-connection server daemon (10.0.0.1:52680). Jul 6 23:46:27.618880 systemd-logind[1478]: Removed session 6. Jul 6 23:46:27.674032 sshd[1704]: Accepted publickey for core from 10.0.0.1 port 52680 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:46:27.675387 sshd-session[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:46:27.679692 systemd-logind[1478]: New session 7 of user core. 
Jul 6 23:46:27.695768 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 6 23:46:27.746600 sudo[1707]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 6 23:46:27.746886 sudo[1707]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 6 23:46:28.236767 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 6 23:46:28.258953 (dockerd)[1727]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 6 23:46:28.624671 dockerd[1727]: time="2025-07-06T23:46:28.624615147Z" level=info msg="Starting up" Jul 6 23:46:28.627679 dockerd[1727]: time="2025-07-06T23:46:28.627637987Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 6 23:46:28.659237 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2343751889-merged.mount: Deactivated successfully. Jul 6 23:46:28.679752 dockerd[1727]: time="2025-07-06T23:46:28.679699787Z" level=info msg="Loading containers: start." Jul 6 23:46:28.688616 kernel: Initializing XFRM netlink socket Jul 6 23:46:28.969999 systemd-networkd[1430]: docker0: Link UP Jul 6 23:46:28.975143 dockerd[1727]: time="2025-07-06T23:46:28.975099187Z" level=info msg="Loading containers: done." 
Jul 6 23:46:28.996673 dockerd[1727]: time="2025-07-06T23:46:28.996286467Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 6 23:46:28.996673 dockerd[1727]: time="2025-07-06T23:46:28.996375907Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 6 23:46:28.996673 dockerd[1727]: time="2025-07-06T23:46:28.996496267Z" level=info msg="Initializing buildkit" Jul 6 23:46:29.022024 dockerd[1727]: time="2025-07-06T23:46:29.021982667Z" level=info msg="Completed buildkit initialization" Jul 6 23:46:29.029675 dockerd[1727]: time="2025-07-06T23:46:29.029608667Z" level=info msg="Daemon has completed initialization" Jul 6 23:46:29.030271 dockerd[1727]: time="2025-07-06T23:46:29.029696427Z" level=info msg="API listen on /run/docker.sock" Jul 6 23:46:29.029881 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 6 23:46:29.628652 containerd[1497]: time="2025-07-06T23:46:29.628587827Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\"" Jul 6 23:46:29.655697 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3215061087-merged.mount: Deactivated successfully. Jul 6 23:46:30.219021 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1167054973.mount: Deactivated successfully. 
Jul 6 23:46:31.061335 containerd[1497]: time="2025-07-06T23:46:31.061288507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:31.061867 containerd[1497]: time="2025-07-06T23:46:31.061812747Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=25651795" Jul 6 23:46:31.062677 containerd[1497]: time="2025-07-06T23:46:31.062629867Z" level=info msg="ImageCreate event name:\"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:31.065153 containerd[1497]: time="2025-07-06T23:46:31.065111987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:31.066261 containerd[1497]: time="2025-07-06T23:46:31.066224467Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"25648593\" in 1.43758632s" Jul 6 23:46:31.066307 containerd[1497]: time="2025-07-06T23:46:31.066263827Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:8907c2d36348551c1038e24ef688f6830681069380376707e55518007a20a86c\"" Jul 6 23:46:31.070637 containerd[1497]: time="2025-07-06T23:46:31.070600907Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\"" Jul 6 23:46:32.049538 containerd[1497]: time="2025-07-06T23:46:32.049465067Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:32.050040 containerd[1497]: time="2025-07-06T23:46:32.050004627Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=22459679" Jul 6 23:46:32.050770 containerd[1497]: time="2025-07-06T23:46:32.050739387Z" level=info msg="ImageCreate event name:\"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:32.054596 containerd[1497]: time="2025-07-06T23:46:32.053836347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:32.054675 containerd[1497]: time="2025-07-06T23:46:32.054626427Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"23995467\" in 983.98048ms" Jul 6 23:46:32.054675 containerd[1497]: time="2025-07-06T23:46:32.054660627Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:0f640d6889416d515a0ac4de1c26f4d80134c47641ff464abc831560a951175f\"" Jul 6 23:46:32.055128 containerd[1497]: time="2025-07-06T23:46:32.055097867Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\"" Jul 6 23:46:33.193298 containerd[1497]: time="2025-07-06T23:46:33.193239507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:33.194350 containerd[1497]: time="2025-07-06T23:46:33.194166907Z" 
level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=17125068" Jul 6 23:46:33.195103 containerd[1497]: time="2025-07-06T23:46:33.195060547Z" level=info msg="ImageCreate event name:\"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:33.198031 containerd[1497]: time="2025-07-06T23:46:33.197978947Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:33.198965 containerd[1497]: time="2025-07-06T23:46:33.198867147Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"18660874\" in 1.14373236s" Jul 6 23:46:33.198965 containerd[1497]: time="2025-07-06T23:46:33.198901667Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:23d79b83d912e2633bcb4f9f7b8b46024893e11d492a4249d8f1f8c9a26b7b2c\"" Jul 6 23:46:33.199588 containerd[1497]: time="2025-07-06T23:46:33.199544187Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:46:33.335935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 6 23:46:33.337640 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:33.484161 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:46:33.488825 (kubelet)[2009]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 6 23:46:33.534824 kubelet[2009]: E0706 23:46:33.534764 2009 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 6 23:46:33.537947 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 6 23:46:33.538090 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 6 23:46:33.538970 systemd[1]: kubelet.service: Consumed 162ms CPU time, 106M memory peak. Jul 6 23:46:34.290362 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2458055676.mount: Deactivated successfully. Jul 6 23:46:34.661326 containerd[1497]: time="2025-07-06T23:46:34.661280787Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:34.662718 containerd[1497]: time="2025-07-06T23:46:34.662679227Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=26915959" Jul 6 23:46:34.663645 containerd[1497]: time="2025-07-06T23:46:34.663591187Z" level=info msg="ImageCreate event name:\"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:34.665709 containerd[1497]: time="2025-07-06T23:46:34.665665427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:34.666314 containerd[1497]: time="2025-07-06T23:46:34.666156347Z" level=info msg="Pulled 
image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"26914976\" in 1.46646012s" Jul 6 23:46:34.666314 containerd[1497]: time="2025-07-06T23:46:34.666193307Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:dde5ff0da443b455e81aefc7bf6a216fdd659d1cbe13b8e8ac8129c3ecd27f89\"" Jul 6 23:46:34.666685 containerd[1497]: time="2025-07-06T23:46:34.666656787Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 6 23:46:35.212430 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2262810477.mount: Deactivated successfully. Jul 6 23:46:35.855565 containerd[1497]: time="2025-07-06T23:46:35.855506427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:35.856938 containerd[1497]: time="2025-07-06T23:46:35.856905067Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951624" Jul 6 23:46:35.860381 containerd[1497]: time="2025-07-06T23:46:35.860344547Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:35.864380 containerd[1497]: time="2025-07-06T23:46:35.864334587Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:35.865882 containerd[1497]: time="2025-07-06T23:46:35.865846147Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id 
\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.19915548s" Jul 6 23:46:35.865882 containerd[1497]: time="2025-07-06T23:46:35.865880227Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jul 6 23:46:35.866506 containerd[1497]: time="2025-07-06T23:46:35.866304547Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 6 23:46:36.353473 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402255380.mount: Deactivated successfully. Jul 6 23:46:36.359170 containerd[1497]: time="2025-07-06T23:46:36.359126667Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:36.360332 containerd[1497]: time="2025-07-06T23:46:36.360299147Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268705" Jul 6 23:46:36.361238 containerd[1497]: time="2025-07-06T23:46:36.361196307Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:36.363173 containerd[1497]: time="2025-07-06T23:46:36.363125347Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:46:36.363738 containerd[1497]: time="2025-07-06T23:46:36.363711907Z" level=info msg="Pulled image 
\"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 497.3724ms" Jul 6 23:46:36.363795 containerd[1497]: time="2025-07-06T23:46:36.363744787Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jul 6 23:46:36.364207 containerd[1497]: time="2025-07-06T23:46:36.364185067Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jul 6 23:46:36.856979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2445157606.mount: Deactivated successfully. Jul 6 23:46:38.289346 containerd[1497]: time="2025-07-06T23:46:38.289283427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:38.290089 containerd[1497]: time="2025-07-06T23:46:38.290037387Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=66406467" Jul 6 23:46:38.290755 containerd[1497]: time="2025-07-06T23:46:38.290719947Z" level=info msg="ImageCreate event name:\"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:38.294068 containerd[1497]: time="2025-07-06T23:46:38.294028227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:46:38.295046 containerd[1497]: time="2025-07-06T23:46:38.295010867Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"66535646\" in 1.930717s" Jul 6 23:46:38.295082 containerd[1497]: time="2025-07-06T23:46:38.295047947Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da\"" Jul 6 23:46:42.776894 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:42.777056 systemd[1]: kubelet.service: Consumed 162ms CPU time, 106M memory peak. Jul 6 23:46:42.780015 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:42.805711 systemd[1]: Reload requested from client PID 2164 ('systemctl') (unit session-7.scope)... Jul 6 23:46:42.805726 systemd[1]: Reloading... Jul 6 23:46:42.898608 zram_generator::config[2208]: No configuration found. Jul 6 23:46:43.005233 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:43.100095 systemd[1]: Reloading finished in 294 ms. Jul 6 23:46:43.171176 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 6 23:46:43.171253 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 6 23:46:43.171496 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:43.171545 systemd[1]: kubelet.service: Consumed 90ms CPU time, 95M memory peak. Jul 6 23:46:43.173203 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:43.349035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 6 23:46:43.354370 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:46:43.397592 kubelet[2253]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:43.397592 kubelet[2253]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 6 23:46:43.397592 kubelet[2253]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:43.397592 kubelet[2253]: I0706 23:46:43.397547 2253 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:46:44.389080 kubelet[2253]: I0706 23:46:44.389033 2253 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:46:44.389080 kubelet[2253]: I0706 23:46:44.389066 2253 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:46:44.389317 kubelet[2253]: I0706 23:46:44.389295 2253 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:46:44.438276 kubelet[2253]: I0706 23:46:44.438144 2253 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:46:44.439623 kubelet[2253]: E0706 23:46:44.439586 2253 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post 
\"https://10.0.0.139:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:46:44.447081 kubelet[2253]: I0706 23:46:44.447002 2253 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:46:44.450673 kubelet[2253]: I0706 23:46:44.450651 2253 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 6 23:46:44.451068 kubelet[2253]: I0706 23:46:44.451051 2253 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:46:44.451272 kubelet[2253]: I0706 23:46:44.451244 2253 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:46:44.451525 kubelet[2253]: I0706 23:46:44.451320 2253 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available
","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 6 23:46:44.451760 kubelet[2253]: I0706 23:46:44.451746 2253 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:46:44.451856 kubelet[2253]: I0706 23:46:44.451821 2253 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:46:44.452095 kubelet[2253]: I0706 23:46:44.452082 2253 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:44.455143 kubelet[2253]: I0706 23:46:44.455107 2253 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:46:44.455185 kubelet[2253]: I0706 23:46:44.455147 2253 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:46:44.455185 kubelet[2253]: I0706 23:46:44.455168 2253 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:46:44.455314 kubelet[2253]: I0706 23:46:44.455289 2253 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:46:44.456922 kubelet[2253]: W0706 23:46:44.456854 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jul 6 23:46:44.456976 kubelet[2253]: E0706 23:46:44.456935 2253 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://10.0.0.139:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:46:44.462327 kubelet[2253]: W0706 23:46:44.462278 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jul 6 23:46:44.462374 kubelet[2253]: E0706 23:46:44.462335 2253 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.139:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:46:44.480085 kubelet[2253]: I0706 23:46:44.480038 2253 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:46:44.480986 kubelet[2253]: I0706 23:46:44.480958 2253 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:46:44.481088 kubelet[2253]: W0706 23:46:44.481070 2253 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 6 23:46:44.482279 kubelet[2253]: I0706 23:46:44.482254 2253 server.go:1274] "Started kubelet" Jul 6 23:46:44.482626 kubelet[2253]: I0706 23:46:44.482380 2253 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:46:44.482821 kubelet[2253]: I0706 23:46:44.482759 2253 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:46:44.483064 kubelet[2253]: I0706 23:46:44.483037 2253 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:46:44.484402 kubelet[2253]: I0706 23:46:44.484194 2253 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:46:44.484545 kubelet[2253]: I0706 23:46:44.484521 2253 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:46:44.485660 kubelet[2253]: I0706 23:46:44.485521 2253 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:46:44.485991 kubelet[2253]: E0706 23:46:44.485968 2253 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:44.486279 kubelet[2253]: I0706 23:46:44.486002 2253 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:46:44.486279 kubelet[2253]: I0706 23:46:44.486182 2253 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:46:44.486279 kubelet[2253]: I0706 23:46:44.486249 2253 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:46:44.486797 kubelet[2253]: W0706 23:46:44.486751 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jul 6 23:46:44.486797 kubelet[2253]: E0706 23:46:44.486805 2253 reflector.go:158] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.139:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:46:44.487176 kubelet[2253]: I0706 23:46:44.487152 2253 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:46:44.487253 kubelet[2253]: I0706 23:46:44.487234 2253 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:46:44.488496 kubelet[2253]: E0706 23:46:44.488460 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="200ms" Jul 6 23:46:44.489152 kubelet[2253]: I0706 23:46:44.488499 2253 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:46:44.491613 kubelet[2253]: E0706 23:46:44.490139 2253 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.139:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.139:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fce4bac7e338b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-06 23:46:44.482233227 +0000 UTC m=+1.124273001,LastTimestamp:2025-07-06 23:46:44.482233227 +0000 UTC m=+1.124273001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 6 23:46:44.500652 kubelet[2253]: I0706 23:46:44.500623 2253 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:46:44.500652 kubelet[2253]: I0706 23:46:44.500643 2253 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:46:44.500652 kubelet[2253]: I0706 23:46:44.500661 2253 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:44.522887 kubelet[2253]: I0706 23:46:44.522733 2253 policy_none.go:49] "None policy: Start" Jul 6 23:46:44.523791 kubelet[2253]: I0706 23:46:44.523758 2253 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:46:44.523791 kubelet[2253]: I0706 23:46:44.523788 2253 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:46:44.528859 kubelet[2253]: I0706 23:46:44.528809 2253 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:46:44.530287 kubelet[2253]: I0706 23:46:44.530256 2253 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:46:44.530287 kubelet[2253]: I0706 23:46:44.530291 2253 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:46:44.530561 kubelet[2253]: I0706 23:46:44.530315 2253 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:46:44.530561 kubelet[2253]: E0706 23:46:44.530371 2253 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:46:44.532465 kubelet[2253]: W0706 23:46:44.532385 2253 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.139:6443: connect: connection refused Jul 6 23:46:44.532465 kubelet[2253]: E0706 23:46:44.532462 2253 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.139:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.139:6443: connect: connection refused" logger="UnhandledError" Jul 6 23:46:44.536352 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 6 23:46:44.551509 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:46:44.555354 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 6 23:46:44.566722 kubelet[2253]: I0706 23:46:44.566682 2253 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:46:44.566940 kubelet[2253]: I0706 23:46:44.566917 2253 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:46:44.566995 kubelet[2253]: I0706 23:46:44.566937 2253 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:46:44.567547 kubelet[2253]: I0706 23:46:44.567527 2253 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:46:44.568722 kubelet[2253]: E0706 23:46:44.568701 2253 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 6 23:46:44.646257 systemd[1]: Created slice kubepods-burstable-poda93bc5b2317c17916f96b21c519f3d90.slice - libcontainer container kubepods-burstable-poda93bc5b2317c17916f96b21c519f3d90.slice. Jul 6 23:46:44.663384 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jul 6 23:46:44.668371 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jul 6 23:46:44.669338 kubelet[2253]: I0706 23:46:44.669302 2253 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:46:44.670275 kubelet[2253]: E0706 23:46:44.670240 2253 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jul 6 23:46:44.689305 kubelet[2253]: E0706 23:46:44.689255 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="400ms" Jul 6 23:46:44.787994 kubelet[2253]: I0706 23:46:44.787752 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:44.787994 kubelet[2253]: I0706 23:46:44.787807 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:46:44.787994 kubelet[2253]: I0706 23:46:44.787829 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a93bc5b2317c17916f96b21c519f3d90-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a93bc5b2317c17916f96b21c519f3d90\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:44.787994 kubelet[2253]: I0706 23:46:44.787847 2253 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:44.787994 kubelet[2253]: I0706 23:46:44.787864 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:44.788205 kubelet[2253]: I0706 23:46:44.787884 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:44.788205 kubelet[2253]: I0706 23:46:44.787900 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a93bc5b2317c17916f96b21c519f3d90-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a93bc5b2317c17916f96b21c519f3d90\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:44.788205 kubelet[2253]: I0706 23:46:44.787915 2253 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a93bc5b2317c17916f96b21c519f3d90-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a93bc5b2317c17916f96b21c519f3d90\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:44.788205 kubelet[2253]: I0706 23:46:44.787930 2253 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:44.871941 kubelet[2253]: I0706 23:46:44.871862 2253 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:46:44.872427 kubelet[2253]: E0706 23:46:44.872376 2253 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jul 6 23:46:44.961539 containerd[1497]: time="2025-07-06T23:46:44.961132147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a93bc5b2317c17916f96b21c519f3d90,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:44.966966 containerd[1497]: time="2025-07-06T23:46:44.966727707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:44.972607 containerd[1497]: time="2025-07-06T23:46:44.972490427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:45.000363 containerd[1497]: time="2025-07-06T23:46:45.000305587Z" level=info msg="connecting to shim 268aac5eaab52e14c73d38b97c802410c05a1e07a23f4cbc2b9ea9d8f48897c2" address="unix:///run/containerd/s/29ed29ad9e47db6b205f8e0ded0142d9504bb1a95e5f33b539f458c59c8829b8" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:45.019551 containerd[1497]: time="2025-07-06T23:46:45.019507707Z" level=info msg="connecting to shim a24de00d675c132cdc19eacf57f82a0b27d99a7b289213ff4fcf9c3a8a7fbfcd" 
address="unix:///run/containerd/s/29c2e605e7fc75cd2c2b0d00a86b3a3777fd1c62f8b8224d81fe3e230664820e" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:45.026944 containerd[1497]: time="2025-07-06T23:46:45.026827867Z" level=info msg="connecting to shim d32dee60e37f1a0dc8a3fe31d599c969e8a42a169be6d743f2a301ffe46aa775" address="unix:///run/containerd/s/694cb00e80aeef0bfc626b7cdb82405a1fe84934af1eaed21c472314a9453dc0" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:45.038762 systemd[1]: Started cri-containerd-268aac5eaab52e14c73d38b97c802410c05a1e07a23f4cbc2b9ea9d8f48897c2.scope - libcontainer container 268aac5eaab52e14c73d38b97c802410c05a1e07a23f4cbc2b9ea9d8f48897c2. Jul 6 23:46:45.056787 systemd[1]: Started cri-containerd-d32dee60e37f1a0dc8a3fe31d599c969e8a42a169be6d743f2a301ffe46aa775.scope - libcontainer container d32dee60e37f1a0dc8a3fe31d599c969e8a42a169be6d743f2a301ffe46aa775. Jul 6 23:46:45.061096 systemd[1]: Started cri-containerd-a24de00d675c132cdc19eacf57f82a0b27d99a7b289213ff4fcf9c3a8a7fbfcd.scope - libcontainer container a24de00d675c132cdc19eacf57f82a0b27d99a7b289213ff4fcf9c3a8a7fbfcd. 
Jul 6 23:46:45.089953 kubelet[2253]: E0706 23:46:45.089894 2253 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.139:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.139:6443: connect: connection refused" interval="800ms" Jul 6 23:46:45.096434 containerd[1497]: time="2025-07-06T23:46:45.096375427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:a93bc5b2317c17916f96b21c519f3d90,Namespace:kube-system,Attempt:0,} returns sandbox id \"268aac5eaab52e14c73d38b97c802410c05a1e07a23f4cbc2b9ea9d8f48897c2\"" Jul 6 23:46:45.100909 containerd[1497]: time="2025-07-06T23:46:45.100796427Z" level=info msg="CreateContainer within sandbox \"268aac5eaab52e14c73d38b97c802410c05a1e07a23f4cbc2b9ea9d8f48897c2\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 6 23:46:45.103765 containerd[1497]: time="2025-07-06T23:46:45.103717627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"a24de00d675c132cdc19eacf57f82a0b27d99a7b289213ff4fcf9c3a8a7fbfcd\"" Jul 6 23:46:45.106281 containerd[1497]: time="2025-07-06T23:46:45.105998187Z" level=info msg="CreateContainer within sandbox \"a24de00d675c132cdc19eacf57f82a0b27d99a7b289213ff4fcf9c3a8a7fbfcd\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 6 23:46:45.111444 containerd[1497]: time="2025-07-06T23:46:45.111404707Z" level=info msg="Container 905bbe7cbd503440d159a7377c04cdccde6ec7997ae819436c6e2f095a46218a: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:46:45.112795 containerd[1497]: time="2025-07-06T23:46:45.112762827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"d32dee60e37f1a0dc8a3fe31d599c969e8a42a169be6d743f2a301ffe46aa775\"" Jul 6 23:46:45.115936 containerd[1497]: time="2025-07-06T23:46:45.115908667Z" level=info msg="CreateContainer within sandbox \"d32dee60e37f1a0dc8a3fe31d599c969e8a42a169be6d743f2a301ffe46aa775\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 6 23:46:45.119977 containerd[1497]: time="2025-07-06T23:46:45.119925747Z" level=info msg="Container 03276c837690f816b4bd1a11111991fcece1c6e5bb29e83a7d390a89dc79e420: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:46:45.122632 containerd[1497]: time="2025-07-06T23:46:45.122590707Z" level=info msg="CreateContainer within sandbox \"268aac5eaab52e14c73d38b97c802410c05a1e07a23f4cbc2b9ea9d8f48897c2\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"905bbe7cbd503440d159a7377c04cdccde6ec7997ae819436c6e2f095a46218a\"" Jul 6 23:46:45.123264 containerd[1497]: time="2025-07-06T23:46:45.123232347Z" level=info msg="StartContainer for \"905bbe7cbd503440d159a7377c04cdccde6ec7997ae819436c6e2f095a46218a\"" Jul 6 23:46:45.124549 containerd[1497]: time="2025-07-06T23:46:45.124484467Z" level=info msg="connecting to shim 905bbe7cbd503440d159a7377c04cdccde6ec7997ae819436c6e2f095a46218a" address="unix:///run/containerd/s/29ed29ad9e47db6b205f8e0ded0142d9504bb1a95e5f33b539f458c59c8829b8" protocol=ttrpc version=3 Jul 6 23:46:45.127776 containerd[1497]: time="2025-07-06T23:46:45.127743347Z" level=info msg="Container 4edfb4de90c2542ed7d3f719a25af0e8e37504920ac8318032c835e2e7f9d253: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:46:45.130876 containerd[1497]: time="2025-07-06T23:46:45.130787867Z" level=info msg="CreateContainer within sandbox \"a24de00d675c132cdc19eacf57f82a0b27d99a7b289213ff4fcf9c3a8a7fbfcd\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"03276c837690f816b4bd1a11111991fcece1c6e5bb29e83a7d390a89dc79e420\"" Jul 6 23:46:45.131792 containerd[1497]: 
time="2025-07-06T23:46:45.131752467Z" level=info msg="StartContainer for \"03276c837690f816b4bd1a11111991fcece1c6e5bb29e83a7d390a89dc79e420\"" Jul 6 23:46:45.131792 containerd[1497]: time="2025-07-06T23:46:45.132834627Z" level=info msg="connecting to shim 03276c837690f816b4bd1a11111991fcece1c6e5bb29e83a7d390a89dc79e420" address="unix:///run/containerd/s/29c2e605e7fc75cd2c2b0d00a86b3a3777fd1c62f8b8224d81fe3e230664820e" protocol=ttrpc version=3 Jul 6 23:46:45.135735 containerd[1497]: time="2025-07-06T23:46:45.135701147Z" level=info msg="CreateContainer within sandbox \"d32dee60e37f1a0dc8a3fe31d599c969e8a42a169be6d743f2a301ffe46aa775\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4edfb4de90c2542ed7d3f719a25af0e8e37504920ac8318032c835e2e7f9d253\"" Jul 6 23:46:45.136368 containerd[1497]: time="2025-07-06T23:46:45.136347267Z" level=info msg="StartContainer for \"4edfb4de90c2542ed7d3f719a25af0e8e37504920ac8318032c835e2e7f9d253\"" Jul 6 23:46:45.137518 containerd[1497]: time="2025-07-06T23:46:45.137491027Z" level=info msg="connecting to shim 4edfb4de90c2542ed7d3f719a25af0e8e37504920ac8318032c835e2e7f9d253" address="unix:///run/containerd/s/694cb00e80aeef0bfc626b7cdb82405a1fe84934af1eaed21c472314a9453dc0" protocol=ttrpc version=3 Jul 6 23:46:45.144790 systemd[1]: Started cri-containerd-905bbe7cbd503440d159a7377c04cdccde6ec7997ae819436c6e2f095a46218a.scope - libcontainer container 905bbe7cbd503440d159a7377c04cdccde6ec7997ae819436c6e2f095a46218a. Jul 6 23:46:45.150511 systemd[1]: Started cri-containerd-03276c837690f816b4bd1a11111991fcece1c6e5bb29e83a7d390a89dc79e420.scope - libcontainer container 03276c837690f816b4bd1a11111991fcece1c6e5bb29e83a7d390a89dc79e420. Jul 6 23:46:45.156025 systemd[1]: Started cri-containerd-4edfb4de90c2542ed7d3f719a25af0e8e37504920ac8318032c835e2e7f9d253.scope - libcontainer container 4edfb4de90c2542ed7d3f719a25af0e8e37504920ac8318032c835e2e7f9d253. 
Jul 6 23:46:45.198633 containerd[1497]: time="2025-07-06T23:46:45.198597107Z" level=info msg="StartContainer for \"905bbe7cbd503440d159a7377c04cdccde6ec7997ae819436c6e2f095a46218a\" returns successfully" Jul 6 23:46:45.206817 containerd[1497]: time="2025-07-06T23:46:45.206780387Z" level=info msg="StartContainer for \"03276c837690f816b4bd1a11111991fcece1c6e5bb29e83a7d390a89dc79e420\" returns successfully" Jul 6 23:46:45.255036 containerd[1497]: time="2025-07-06T23:46:45.254865307Z" level=info msg="StartContainer for \"4edfb4de90c2542ed7d3f719a25af0e8e37504920ac8318032c835e2e7f9d253\" returns successfully" Jul 6 23:46:45.277812 kubelet[2253]: I0706 23:46:45.277735 2253 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:46:45.278155 kubelet[2253]: E0706 23:46:45.278108 2253 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.139:6443/api/v1/nodes\": dial tcp 10.0.0.139:6443: connect: connection refused" node="localhost" Jul 6 23:46:46.080146 kubelet[2253]: I0706 23:46:46.080116 2253 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:46:47.035235 kubelet[2253]: E0706 23:46:47.035080 2253 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 6 23:46:47.095596 kubelet[2253]: I0706 23:46:47.095558 2253 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:46:47.096121 kubelet[2253]: E0706 23:46:47.095969 2253 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 6 23:46:47.457184 kubelet[2253]: I0706 23:46:47.457147 2253 apiserver.go:52] "Watching apiserver" Jul 6 23:46:47.487017 kubelet[2253]: I0706 23:46:47.486963 2253 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:46:47.552753 kubelet[2253]: E0706 
23:46:47.552700 2253 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:49.318823 systemd[1]: Reload requested from client PID 2526 ('systemctl') (unit session-7.scope)... Jul 6 23:46:49.319199 systemd[1]: Reloading... Jul 6 23:46:49.396624 zram_generator::config[2569]: No configuration found. Jul 6 23:46:49.476871 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 6 23:46:49.576966 systemd[1]: Reloading finished in 257 ms. Jul 6 23:46:49.604644 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:49.628636 systemd[1]: kubelet.service: Deactivated successfully. Jul 6 23:46:49.628878 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:49.628945 systemd[1]: kubelet.service: Consumed 1.504s CPU time, 128.1M memory peak. Jul 6 23:46:49.630904 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 6 23:46:49.757456 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 6 23:46:49.764553 (kubelet)[2611]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 6 23:46:49.801012 kubelet[2611]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:49.801012 kubelet[2611]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
Jul 6 23:46:49.801012 kubelet[2611]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 6 23:46:49.801012 kubelet[2611]: I0706 23:46:49.800911 2611 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 6 23:46:49.808912 kubelet[2611]: I0706 23:46:49.808868 2611 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jul 6 23:46:49.809065 kubelet[2611]: I0706 23:46:49.809055 2611 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 6 23:46:49.809394 kubelet[2611]: I0706 23:46:49.809376 2611 server.go:934] "Client rotation is on, will bootstrap in background" Jul 6 23:46:49.812549 kubelet[2611]: I0706 23:46:49.811900 2611 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 6 23:46:49.814097 kubelet[2611]: I0706 23:46:49.814054 2611 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 6 23:46:49.817739 kubelet[2611]: I0706 23:46:49.817719 2611 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 6 23:46:49.820202 kubelet[2611]: I0706 23:46:49.820176 2611 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 6 23:46:49.820326 kubelet[2611]: I0706 23:46:49.820313 2611 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jul 6 23:46:49.820458 kubelet[2611]: I0706 23:46:49.820434 2611 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 6 23:46:49.820656 kubelet[2611]: I0706 23:46:49.820461 2611 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOpti
ons":null,"CgroupVersion":2} Jul 6 23:46:49.820740 kubelet[2611]: I0706 23:46:49.820665 2611 topology_manager.go:138] "Creating topology manager with none policy" Jul 6 23:46:49.820740 kubelet[2611]: I0706 23:46:49.820676 2611 container_manager_linux.go:300] "Creating device plugin manager" Jul 6 23:46:49.820740 kubelet[2611]: I0706 23:46:49.820711 2611 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:49.820820 kubelet[2611]: I0706 23:46:49.820807 2611 kubelet.go:408] "Attempting to sync node with API server" Jul 6 23:46:49.820853 kubelet[2611]: I0706 23:46:49.820830 2611 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 6 23:46:49.820853 kubelet[2611]: I0706 23:46:49.820849 2611 kubelet.go:314] "Adding apiserver pod source" Jul 6 23:46:49.820891 kubelet[2611]: I0706 23:46:49.820861 2611 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 6 23:46:49.824677 kubelet[2611]: I0706 23:46:49.824653 2611 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 6 23:46:49.825815 kubelet[2611]: I0706 23:46:49.825791 2611 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 6 23:46:49.826312 kubelet[2611]: I0706 23:46:49.826287 2611 server.go:1274] "Started kubelet" Jul 6 23:46:49.829655 kubelet[2611]: I0706 23:46:49.827924 2611 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 6 23:46:49.829655 kubelet[2611]: I0706 23:46:49.828795 2611 server.go:449] "Adding debug handlers to kubelet server" Jul 6 23:46:49.829655 kubelet[2611]: I0706 23:46:49.828852 2611 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 6 23:46:49.830716 kubelet[2611]: I0706 23:46:49.830541 2611 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 6 23:46:49.831007 kubelet[2611]: I0706 23:46:49.830989 2611 server.go:236] "Starting to serve 
the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 6 23:46:49.831779 kubelet[2611]: E0706 23:46:49.831737 2611 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 6 23:46:49.831870 kubelet[2611]: I0706 23:46:49.831794 2611 volume_manager.go:289] "Starting Kubelet Volume Manager" Jul 6 23:46:49.832029 kubelet[2611]: I0706 23:46:49.832002 2611 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jul 6 23:46:49.832141 kubelet[2611]: I0706 23:46:49.832123 2611 reconciler.go:26] "Reconciler: start to sync state" Jul 6 23:46:49.832247 kubelet[2611]: I0706 23:46:49.832225 2611 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 6 23:46:49.833734 kubelet[2611]: I0706 23:46:49.833695 2611 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 6 23:46:49.840054 kubelet[2611]: E0706 23:46:49.840023 2611 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 6 23:46:49.842607 kubelet[2611]: I0706 23:46:49.840620 2611 factory.go:221] Registration of the containerd container factory successfully Jul 6 23:46:49.843445 kubelet[2611]: I0706 23:46:49.843424 2611 factory.go:221] Registration of the systemd container factory successfully Jul 6 23:46:49.845760 kubelet[2611]: I0706 23:46:49.845708 2611 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:46:49.848776 kubelet[2611]: I0706 23:46:49.848748 2611 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 6 23:46:49.848891 kubelet[2611]: I0706 23:46:49.848879 2611 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:46:49.848963 kubelet[2611]: I0706 23:46:49.848953 2611 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:46:49.849066 kubelet[2611]: E0706 23:46:49.849045 2611 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:46:49.896051 kubelet[2611]: I0706 23:46:49.896007 2611 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 6 23:46:49.896051 kubelet[2611]: I0706 23:46:49.896037 2611 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 6 23:46:49.896051 kubelet[2611]: I0706 23:46:49.896062 2611 state_mem.go:36] "Initialized new in-memory state store" Jul 6 23:46:49.896258 kubelet[2611]: I0706 23:46:49.896251 2611 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 6 23:46:49.896287 kubelet[2611]: I0706 23:46:49.896261 2611 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 6 23:46:49.896287 kubelet[2611]: I0706 23:46:49.896279 2611 policy_none.go:49] "None policy: Start" Jul 6 23:46:49.898311 kubelet[2611]: I0706 23:46:49.898283 2611 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 6 23:46:49.898311 kubelet[2611]: I0706 23:46:49.898316 2611 state_mem.go:35] "Initializing new in-memory state store" Jul 6 23:46:49.899651 kubelet[2611]: I0706 23:46:49.898499 2611 state_mem.go:75] "Updated machine memory state" Jul 6 23:46:49.912174 kubelet[2611]: I0706 23:46:49.912134 2611 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:46:49.912366 kubelet[2611]: I0706 23:46:49.912329 2611 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:46:49.912409 kubelet[2611]: I0706 23:46:49.912358 2611 container_log_manager.go:189] "Initializing container 
log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:46:49.916048 kubelet[2611]: I0706 23:46:49.916017 2611 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:46:50.014626 kubelet[2611]: I0706 23:46:50.014582 2611 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jul 6 23:46:50.022483 kubelet[2611]: I0706 23:46:50.021390 2611 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jul 6 23:46:50.022483 kubelet[2611]: I0706 23:46:50.021497 2611 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jul 6 23:46:50.132634 kubelet[2611]: I0706 23:46:50.132561 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a93bc5b2317c17916f96b21c519f3d90-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"a93bc5b2317c17916f96b21c519f3d90\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:50.132634 kubelet[2611]: I0706 23:46:50.132634 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a93bc5b2317c17916f96b21c519f3d90-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"a93bc5b2317c17916f96b21c519f3d90\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:50.132797 kubelet[2611]: I0706 23:46:50.132655 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:50.132797 kubelet[2611]: I0706 23:46:50.132676 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:50.132797 kubelet[2611]: I0706 23:46:50.132692 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jul 6 23:46:50.132797 kubelet[2611]: I0706 23:46:50.132708 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a93bc5b2317c17916f96b21c519f3d90-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"a93bc5b2317c17916f96b21c519f3d90\") " pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:50.132797 kubelet[2611]: I0706 23:46:50.132724 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:50.132937 kubelet[2611]: I0706 23:46:50.132739 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:50.132969 kubelet[2611]: I0706 23:46:50.132951 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jul 6 23:46:50.329700 sudo[2650]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 6 23:46:50.330390 sudo[2650]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 6 23:46:50.779857 sudo[2650]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:50.821246 kubelet[2611]: I0706 23:46:50.821198 2611 apiserver.go:52] "Watching apiserver" Jul 6 23:46:50.832899 kubelet[2611]: I0706 23:46:50.832852 2611 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:46:50.876596 kubelet[2611]: E0706 23:46:50.875836 2611 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 6 23:46:50.892509 kubelet[2611]: I0706 23:46:50.892412 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.891559467 podStartE2EDuration="1.891559467s" podCreationTimestamp="2025-07-06 23:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:50.891328747 +0000 UTC m=+1.122055081" watchObservedRunningTime="2025-07-06 23:46:50.891559467 +0000 UTC m=+1.122285841" Jul 6 23:46:50.917355 kubelet[2611]: I0706 23:46:50.917176 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.917157067 podStartE2EDuration="1.917157067s" podCreationTimestamp="2025-07-06 23:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-07-06 23:46:50.900211867 +0000 UTC m=+1.130938281" watchObservedRunningTime="2025-07-06 23:46:50.917157067 +0000 UTC m=+1.147883441" Jul 6 23:46:50.928005 kubelet[2611]: I0706 23:46:50.927940 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.927924547 podStartE2EDuration="1.927924547s" podCreationTimestamp="2025-07-06 23:46:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:50.917472467 +0000 UTC m=+1.148198841" watchObservedRunningTime="2025-07-06 23:46:50.927924547 +0000 UTC m=+1.158650921" Jul 6 23:46:52.874637 sudo[1707]: pam_unix(sudo:session): session closed for user root Jul 6 23:46:52.876227 sshd[1706]: Connection closed by 10.0.0.1 port 52680 Jul 6 23:46:52.877144 sshd-session[1704]: pam_unix(sshd:session): session closed for user core Jul 6 23:46:52.880452 systemd[1]: sshd@6-10.0.0.139:22-10.0.0.1:52680.service: Deactivated successfully. Jul 6 23:46:52.882429 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:46:52.882646 systemd[1]: session-7.scope: Consumed 7.304s CPU time, 264.6M memory peak. Jul 6 23:46:52.884947 systemd-logind[1478]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:46:52.886320 systemd-logind[1478]: Removed session 7. Jul 6 23:46:54.757457 kubelet[2611]: I0706 23:46:54.757405 2611 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 6 23:46:54.757951 containerd[1497]: time="2025-07-06T23:46:54.757848438Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 6 23:46:54.758778 kubelet[2611]: I0706 23:46:54.758119 2611 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 6 23:46:55.649439 systemd[1]: Created slice kubepods-besteffort-pod55f39dc0_7852_4669_aeb9_4491e9660c2b.slice - libcontainer container kubepods-besteffort-pod55f39dc0_7852_4669_aeb9_4491e9660c2b.slice. Jul 6 23:46:55.669383 systemd[1]: Created slice kubepods-burstable-pod7c6e390b_ddbf_4568_a42d_13eabdc242e8.slice - libcontainer container kubepods-burstable-pod7c6e390b_ddbf_4568_a42d_13eabdc242e8.slice. Jul 6 23:46:55.676899 kubelet[2611]: I0706 23:46:55.676786 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xhrpl\" (UniqueName: \"kubernetes.io/projected/7c6e390b-ddbf-4568-a42d-13eabdc242e8-kube-api-access-xhrpl\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.676899 kubelet[2611]: I0706 23:46:55.676840 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/55f39dc0-7852-4669-aeb9-4491e9660c2b-kube-proxy\") pod \"kube-proxy-zhjkm\" (UID: \"55f39dc0-7852-4669-aeb9-4491e9660c2b\") " pod="kube-system/kube-proxy-zhjkm" Jul 6 23:46:55.676899 kubelet[2611]: I0706 23:46:55.676859 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-host-proc-sys-net\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.676899 kubelet[2611]: I0706 23:46:55.676877 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-etc-cni-netd\") pod \"cilium-7jgm9\" 
(UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.676899 kubelet[2611]: I0706 23:46:55.676896 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c6e390b-ddbf-4568-a42d-13eabdc242e8-clustermesh-secrets\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.676899 kubelet[2611]: I0706 23:46:55.676915 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-bpf-maps\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.677154 kubelet[2611]: I0706 23:46:55.676935 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-cgroup\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.677154 kubelet[2611]: I0706 23:46:55.676950 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-lib-modules\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.677154 kubelet[2611]: I0706 23:46:55.676975 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-host-proc-sys-kernel\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.677154 kubelet[2611]: I0706 
23:46:55.676993 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c6e390b-ddbf-4568-a42d-13eabdc242e8-hubble-tls\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.677154 kubelet[2611]: I0706 23:46:55.677008 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/55f39dc0-7852-4669-aeb9-4491e9660c2b-xtables-lock\") pod \"kube-proxy-zhjkm\" (UID: \"55f39dc0-7852-4669-aeb9-4491e9660c2b\") " pod="kube-system/kube-proxy-zhjkm" Jul 6 23:46:55.677291 kubelet[2611]: I0706 23:46:55.677024 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-675kk\" (UniqueName: \"kubernetes.io/projected/55f39dc0-7852-4669-aeb9-4491e9660c2b-kube-api-access-675kk\") pod \"kube-proxy-zhjkm\" (UID: \"55f39dc0-7852-4669-aeb9-4491e9660c2b\") " pod="kube-system/kube-proxy-zhjkm" Jul 6 23:46:55.677291 kubelet[2611]: I0706 23:46:55.677041 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-xtables-lock\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.677291 kubelet[2611]: I0706 23:46:55.677065 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-config-path\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.677291 kubelet[2611]: I0706 23:46:55.677081 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/55f39dc0-7852-4669-aeb9-4491e9660c2b-lib-modules\") pod \"kube-proxy-zhjkm\" (UID: \"55f39dc0-7852-4669-aeb9-4491e9660c2b\") " pod="kube-system/kube-proxy-zhjkm" Jul 6 23:46:55.677291 kubelet[2611]: I0706 23:46:55.677097 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-hostproc\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.677291 kubelet[2611]: I0706 23:46:55.677113 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cni-path\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.677483 kubelet[2611]: I0706 23:46:55.677130 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-run\") pod \"cilium-7jgm9\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " pod="kube-system/cilium-7jgm9" Jul 6 23:46:55.882901 systemd[1]: Created slice kubepods-besteffort-poddc42bb6a_438c_41d0_b222_b744e87b8330.slice - libcontainer container kubepods-besteffort-poddc42bb6a_438c_41d0_b222_b744e87b8330.slice. 
Jul 6 23:46:55.969378 containerd[1497]: time="2025-07-06T23:46:55.969276074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhjkm,Uid:55f39dc0-7852-4669-aeb9-4491e9660c2b,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:55.974707 containerd[1497]: time="2025-07-06T23:46:55.974642581Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jgm9,Uid:7c6e390b-ddbf-4568-a42d-13eabdc242e8,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:55.980952 kubelet[2611]: I0706 23:46:55.980896 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv7ws\" (UniqueName: \"kubernetes.io/projected/dc42bb6a-438c-41d0-b222-b744e87b8330-kube-api-access-bv7ws\") pod \"cilium-operator-5d85765b45-bcnbr\" (UID: \"dc42bb6a-438c-41d0-b222-b744e87b8330\") " pod="kube-system/cilium-operator-5d85765b45-bcnbr" Jul 6 23:46:55.980952 kubelet[2611]: I0706 23:46:55.980947 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc42bb6a-438c-41d0-b222-b744e87b8330-cilium-config-path\") pod \"cilium-operator-5d85765b45-bcnbr\" (UID: \"dc42bb6a-438c-41d0-b222-b744e87b8330\") " pod="kube-system/cilium-operator-5d85765b45-bcnbr" Jul 6 23:46:56.005600 containerd[1497]: time="2025-07-06T23:46:56.004880530Z" level=info msg="connecting to shim 923d6b9b6d31abda558909d68daa2b6f13a58eb0f6d6f492352b7c271c9c76a4" address="unix:///run/containerd/s/141eecc2f2579dc7e7da4639d5a75ff1497b1f1c802ab443a4b01651437ca8b9" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:56.007157 containerd[1497]: time="2025-07-06T23:46:56.007117381Z" level=info msg="connecting to shim aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be" address="unix:///run/containerd/s/c870d899adbdaa24aa08e4bc43e077f6cf1e1cb5d47e3ae92048dbdc9a2324d5" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:56.031777 systemd[1]: Started 
cri-containerd-923d6b9b6d31abda558909d68daa2b6f13a58eb0f6d6f492352b7c271c9c76a4.scope - libcontainer container 923d6b9b6d31abda558909d68daa2b6f13a58eb0f6d6f492352b7c271c9c76a4. Jul 6 23:46:56.035667 systemd[1]: Started cri-containerd-aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be.scope - libcontainer container aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be. Jul 6 23:46:56.067002 containerd[1497]: time="2025-07-06T23:46:56.066895540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-zhjkm,Uid:55f39dc0-7852-4669-aeb9-4491e9660c2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"923d6b9b6d31abda558909d68daa2b6f13a58eb0f6d6f492352b7c271c9c76a4\"" Jul 6 23:46:56.068024 containerd[1497]: time="2025-07-06T23:46:56.067988385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7jgm9,Uid:7c6e390b-ddbf-4568-a42d-13eabdc242e8,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\"" Jul 6 23:46:56.072757 containerd[1497]: time="2025-07-06T23:46:56.072718247Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:46:56.074343 containerd[1497]: time="2025-07-06T23:46:56.074013213Z" level=info msg="CreateContainer within sandbox \"923d6b9b6d31abda558909d68daa2b6f13a58eb0f6d6f492352b7c271c9c76a4\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:46:56.084290 containerd[1497]: time="2025-07-06T23:46:56.084240060Z" level=info msg="Container 00a73f872f733a40b90d0626ec9cf686b8e2ed7cddd4c6f36133ef02e6fae8d3: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:46:56.092419 containerd[1497]: time="2025-07-06T23:46:56.092365058Z" level=info msg="CreateContainer within sandbox \"923d6b9b6d31abda558909d68daa2b6f13a58eb0f6d6f492352b7c271c9c76a4\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id 
\"00a73f872f733a40b90d0626ec9cf686b8e2ed7cddd4c6f36133ef02e6fae8d3\"" Jul 6 23:46:56.093196 containerd[1497]: time="2025-07-06T23:46:56.093144582Z" level=info msg="StartContainer for \"00a73f872f733a40b90d0626ec9cf686b8e2ed7cddd4c6f36133ef02e6fae8d3\"" Jul 6 23:46:56.094637 containerd[1497]: time="2025-07-06T23:46:56.094600109Z" level=info msg="connecting to shim 00a73f872f733a40b90d0626ec9cf686b8e2ed7cddd4c6f36133ef02e6fae8d3" address="unix:///run/containerd/s/141eecc2f2579dc7e7da4639d5a75ff1497b1f1c802ab443a4b01651437ca8b9" protocol=ttrpc version=3 Jul 6 23:46:56.115789 systemd[1]: Started cri-containerd-00a73f872f733a40b90d0626ec9cf686b8e2ed7cddd4c6f36133ef02e6fae8d3.scope - libcontainer container 00a73f872f733a40b90d0626ec9cf686b8e2ed7cddd4c6f36133ef02e6fae8d3. Jul 6 23:46:56.163872 containerd[1497]: time="2025-07-06T23:46:56.163823912Z" level=info msg="StartContainer for \"00a73f872f733a40b90d0626ec9cf686b8e2ed7cddd4c6f36133ef02e6fae8d3\" returns successfully" Jul 6 23:46:56.186771 containerd[1497]: time="2025-07-06T23:46:56.186660138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bcnbr,Uid:dc42bb6a-438c-41d0-b222-b744e87b8330,Namespace:kube-system,Attempt:0,}" Jul 6 23:46:56.211426 containerd[1497]: time="2025-07-06T23:46:56.211360614Z" level=info msg="connecting to shim ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be" address="unix:///run/containerd/s/dfbdae55aaeb508aaa324a2add3656cdf608add7e40a3c9e58400cb1a202129e" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:46:56.246789 systemd[1]: Started cri-containerd-ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be.scope - libcontainer container ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be. 
Jul 6 23:46:56.297594 containerd[1497]: time="2025-07-06T23:46:56.297505375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-bcnbr,Uid:dc42bb6a-438c-41d0-b222-b744e87b8330,Namespace:kube-system,Attempt:0,} returns sandbox id \"ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be\"" Jul 6 23:46:56.896166 kubelet[2611]: I0706 23:46:56.895457 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-zhjkm" podStartSLOduration=1.895439445 podStartE2EDuration="1.895439445s" podCreationTimestamp="2025-07-06 23:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:46:56.895133284 +0000 UTC m=+7.125859658" watchObservedRunningTime="2025-07-06 23:46:56.895439445 +0000 UTC m=+7.126165819" Jul 6 23:47:04.704587 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1363846089.mount: Deactivated successfully. Jul 6 23:47:05.913031 update_engine[1487]: I20250706 23:47:05.912969 1487 update_attempter.cc:509] Updating boot flags... 
Jul 6 23:47:06.050622 containerd[1497]: time="2025-07-06T23:47:06.049737372Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:06.050622 containerd[1497]: time="2025-07-06T23:47:06.050453014Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=157646710" Jul 6 23:47:06.051527 containerd[1497]: time="2025-07-06T23:47:06.051490536Z" level=info msg="ImageCreate event name:\"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:47:06.055598 containerd[1497]: time="2025-07-06T23:47:06.055508306Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"157636062\" in 9.982523898s" Jul 6 23:47:06.055598 containerd[1497]: time="2025-07-06T23:47:06.055562346Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Jul 6 23:47:06.058635 containerd[1497]: time="2025-07-06T23:47:06.058547713Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:47:06.064492 containerd[1497]: time="2025-07-06T23:47:06.064437928Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:47:06.113152 containerd[1497]: time="2025-07-06T23:47:06.113069447Z" level=info msg="Container 3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:06.122001 containerd[1497]: time="2025-07-06T23:47:06.121927228Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\"" Jul 6 23:47:06.129489 containerd[1497]: time="2025-07-06T23:47:06.129349367Z" level=info msg="StartContainer for \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\"" Jul 6 23:47:06.143671 containerd[1497]: time="2025-07-06T23:47:06.143624362Z" level=info msg="connecting to shim 3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7" address="unix:///run/containerd/s/c870d899adbdaa24aa08e4bc43e077f6cf1e1cb5d47e3ae92048dbdc9a2324d5" protocol=ttrpc version=3 Jul 6 23:47:06.189748 systemd[1]: Started cri-containerd-3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7.scope - libcontainer container 3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7. Jul 6 23:47:06.239482 containerd[1497]: time="2025-07-06T23:47:06.237940512Z" level=info msg="StartContainer for \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\" returns successfully" Jul 6 23:47:06.284264 systemd[1]: cri-containerd-3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7.scope: Deactivated successfully. 
Jul 6 23:47:06.311635 containerd[1497]: time="2025-07-06T23:47:06.311588613Z" level=info msg="received exit event container_id:\"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\" id:\"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\" pid:3054 exited_at:{seconds:1751845626 nanos:307702843}" Jul 6 23:47:06.311895 containerd[1497]: time="2025-07-06T23:47:06.311672133Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\" id:\"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\" pid:3054 exited_at:{seconds:1751845626 nanos:307702843}" Jul 6 23:47:06.920495 containerd[1497]: time="2025-07-06T23:47:06.920453582Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:47:06.943737 containerd[1497]: time="2025-07-06T23:47:06.943683479Z" level=info msg="Container 7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:06.956814 containerd[1497]: time="2025-07-06T23:47:06.956760951Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\"" Jul 6 23:47:06.960284 containerd[1497]: time="2025-07-06T23:47:06.960236160Z" level=info msg="StartContainer for \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\"" Jul 6 23:47:06.961217 containerd[1497]: time="2025-07-06T23:47:06.961188482Z" level=info msg="connecting to shim 7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899" address="unix:///run/containerd/s/c870d899adbdaa24aa08e4bc43e077f6cf1e1cb5d47e3ae92048dbdc9a2324d5" protocol=ttrpc version=3 Jul 6 
23:47:06.985802 systemd[1]: Started cri-containerd-7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899.scope - libcontainer container 7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899. Jul 6 23:47:07.032800 containerd[1497]: time="2025-07-06T23:47:07.032726212Z" level=info msg="StartContainer for \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\" returns successfully" Jul 6 23:47:07.046055 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:47:07.046294 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:47:07.046739 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:47:07.048693 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:47:07.050928 systemd[1]: cri-containerd-7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899.scope: Deactivated successfully. Jul 6 23:47:07.054537 containerd[1497]: time="2025-07-06T23:47:07.054448262Z" level=info msg="TaskExit event in podsandbox handler container_id:\"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\" id:\"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\" pid:3100 exited_at:{seconds:1751845627 nanos:54049741}" Jul 6 23:47:07.061084 containerd[1497]: time="2025-07-06T23:47:07.061024957Z" level=info msg="received exit event container_id:\"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\" id:\"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\" pid:3100 exited_at:{seconds:1751845627 nanos:54049741}" Jul 6 23:47:07.074783 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:47:07.105509 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7-rootfs.mount: Deactivated successfully. 
Jul 6 23:47:07.924019 containerd[1497]: time="2025-07-06T23:47:07.923949697Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:47:07.967640 containerd[1497]: time="2025-07-06T23:47:07.966705035Z" level=info msg="Container 436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:47:07.970256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3119956786.mount: Deactivated successfully. Jul 6 23:47:07.977146 containerd[1497]: time="2025-07-06T23:47:07.977064379Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\"" Jul 6 23:47:07.979801 containerd[1497]: time="2025-07-06T23:47:07.979759465Z" level=info msg="StartContainer for \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\"" Jul 6 23:47:07.982540 containerd[1497]: time="2025-07-06T23:47:07.982454911Z" level=info msg="connecting to shim 436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727" address="unix:///run/containerd/s/c870d899adbdaa24aa08e4bc43e077f6cf1e1cb5d47e3ae92048dbdc9a2324d5" protocol=ttrpc version=3 Jul 6 23:47:08.007815 systemd[1]: Started cri-containerd-436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727.scope - libcontainer container 436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727. Jul 6 23:47:08.064864 systemd[1]: cri-containerd-436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727.scope: Deactivated successfully. 
Jul 6 23:47:08.066431 containerd[1497]: time="2025-07-06T23:47:08.066334534Z" level=info msg="StartContainer for \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\" returns successfully"
Jul 6 23:47:08.068021 containerd[1497]: time="2025-07-06T23:47:08.067726057Z" level=info msg="received exit event container_id:\"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\" id:\"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\" pid:3155 exited_at:{seconds:1751845628 nanos:66134854}"
Jul 6 23:47:08.068021 containerd[1497]: time="2025-07-06T23:47:08.067814177Z" level=info msg="TaskExit event in podsandbox handler container_id:\"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\" id:\"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\" pid:3155 exited_at:{seconds:1751845628 nanos:66134854}"
Jul 6 23:47:08.094349 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727-rootfs.mount: Deactivated successfully.
Jul 6 23:47:08.308890 containerd[1497]: time="2025-07-06T23:47:08.308771936Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:08.310086 containerd[1497]: time="2025-07-06T23:47:08.309859658Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=17135306"
Jul 6 23:47:08.310941 containerd[1497]: time="2025-07-06T23:47:08.310899140Z" level=info msg="ImageCreate event name:\"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:47:08.312252 containerd[1497]: time="2025-07-06T23:47:08.312216183Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"17128551\" in 2.253605909s"
Jul 6 23:47:08.312252 containerd[1497]: time="2025-07-06T23:47:08.312251663Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\""
Jul 6 23:47:08.315451 containerd[1497]: time="2025-07-06T23:47:08.315404430Z" level=info msg="CreateContainer within sandbox \"ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 6 23:47:08.333785 containerd[1497]: time="2025-07-06T23:47:08.329160819Z" level=info msg="Container 584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:08.339325 containerd[1497]: time="2025-07-06T23:47:08.339280521Z" level=info msg="CreateContainer within sandbox \"ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\""
Jul 6 23:47:08.339869 containerd[1497]: time="2025-07-06T23:47:08.339839282Z" level=info msg="StartContainer for \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\""
Jul 6 23:47:08.340866 containerd[1497]: time="2025-07-06T23:47:08.340835405Z" level=info msg="connecting to shim 584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d" address="unix:///run/containerd/s/dfbdae55aaeb508aaa324a2add3656cdf608add7e40a3c9e58400cb1a202129e" protocol=ttrpc version=3
Jul 6 23:47:08.382295 systemd[1]: Started cri-containerd-584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d.scope - libcontainer container 584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d.
Jul 6 23:47:08.418557 containerd[1497]: time="2025-07-06T23:47:08.418522972Z" level=info msg="StartContainer for \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" returns successfully"
Jul 6 23:47:08.937265 containerd[1497]: time="2025-07-06T23:47:08.937220567Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 6 23:47:08.945461 kubelet[2611]: I0706 23:47:08.945204 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-bcnbr" podStartSLOduration=1.93070634 podStartE2EDuration="13.945182064s" podCreationTimestamp="2025-07-06 23:46:55 +0000 UTC" firstStartedPulling="2025-07-06 23:46:56.298628981 +0000 UTC m=+6.529355355" lastFinishedPulling="2025-07-06 23:47:08.313104705 +0000 UTC m=+18.543831079" observedRunningTime="2025-07-06 23:47:08.944537623 +0000 UTC m=+19.175264077" watchObservedRunningTime="2025-07-06 23:47:08.945182064 +0000 UTC m=+19.175908438"
Jul 6 23:47:08.957876 containerd[1497]: time="2025-07-06T23:47:08.957818252Z" level=info msg="Container da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:08.970789 containerd[1497]: time="2025-07-06T23:47:08.970724959Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\""
Jul 6 23:47:08.973601 containerd[1497]: time="2025-07-06T23:47:08.972668803Z" level=info msg="StartContainer for \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\""
Jul 6 23:47:08.973724 containerd[1497]: time="2025-07-06T23:47:08.973613285Z" level=info msg="connecting to shim da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d" address="unix:///run/containerd/s/c870d899adbdaa24aa08e4bc43e077f6cf1e1cb5d47e3ae92048dbdc9a2324d5" protocol=ttrpc version=3
Jul 6 23:47:09.012796 systemd[1]: Started cri-containerd-da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d.scope - libcontainer container da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d.
Jul 6 23:47:09.083977 systemd[1]: cri-containerd-da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d.scope: Deactivated successfully.
Jul 6 23:47:09.086315 containerd[1497]: time="2025-07-06T23:47:09.086273476Z" level=info msg="received exit event container_id:\"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\" id:\"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\" pid:3239 exited_at:{seconds:1751845629 nanos:85863836}"
Jul 6 23:47:09.086618 containerd[1497]: time="2025-07-06T23:47:09.086314556Z" level=info msg="TaskExit event in podsandbox handler container_id:\"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\" id:\"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\" pid:3239 exited_at:{seconds:1751845629 nanos:85863836}"
Jul 6 23:47:09.087637 containerd[1497]: time="2025-07-06T23:47:09.087530159Z" level=info msg="StartContainer for \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\" returns successfully"
Jul 6 23:47:09.943362 containerd[1497]: time="2025-07-06T23:47:09.943279404Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:47:09.971778 containerd[1497]: time="2025-07-06T23:47:09.971738622Z" level=info msg="Container 24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:09.971947 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2745318653.mount: Deactivated successfully.
Jul 6 23:47:09.980479 containerd[1497]: time="2025-07-06T23:47:09.980428479Z" level=info msg="CreateContainer within sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\""
Jul 6 23:47:09.980865 containerd[1497]: time="2025-07-06T23:47:09.980840080Z" level=info msg="StartContainer for \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\""
Jul 6 23:47:09.981763 containerd[1497]: time="2025-07-06T23:47:09.981737042Z" level=info msg="connecting to shim 24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023" address="unix:///run/containerd/s/c870d899adbdaa24aa08e4bc43e077f6cf1e1cb5d47e3ae92048dbdc9a2324d5" protocol=ttrpc version=3
Jul 6 23:47:10.000753 systemd[1]: Started cri-containerd-24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023.scope - libcontainer container 24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023.
Jul 6 23:47:10.039378 containerd[1497]: time="2025-07-06T23:47:10.039332873Z" level=info msg="StartContainer for \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" returns successfully"
Jul 6 23:47:10.142864 containerd[1497]: time="2025-07-06T23:47:10.142802909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" id:\"3d455176fa8e18ead4df4ec6eeade6fa2800aa3db70e7efab16fe013c831e78d\" pid:3306 exited_at:{seconds:1751845630 nanos:142467068}"
Jul 6 23:47:10.186329 kubelet[2611]: I0706 23:47:10.186037 2611 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Jul 6 23:47:10.235812 systemd[1]: Created slice kubepods-burstable-pod33280068_0e03_4d92_9d5a_87613a8a2bca.slice - libcontainer container kubepods-burstable-pod33280068_0e03_4d92_9d5a_87613a8a2bca.slice.
Jul 6 23:47:10.245254 systemd[1]: Created slice kubepods-burstable-pod1f6f5a1c_fb09_47cb_adfb_5f286032e4da.slice - libcontainer container kubepods-burstable-pod1f6f5a1c_fb09_47cb_adfb_5f286032e4da.slice.
Jul 6 23:47:10.296587 kubelet[2611]: I0706 23:47:10.296536 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/1f6f5a1c-fb09-47cb-adfb-5f286032e4da-config-volume\") pod \"coredns-7c65d6cfc9-f65wh\" (UID: \"1f6f5a1c-fb09-47cb-adfb-5f286032e4da\") " pod="kube-system/coredns-7c65d6cfc9-f65wh"
Jul 6 23:47:10.296587 kubelet[2611]: I0706 23:47:10.296593 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cwwq7\" (UniqueName: \"kubernetes.io/projected/1f6f5a1c-fb09-47cb-adfb-5f286032e4da-kube-api-access-cwwq7\") pod \"coredns-7c65d6cfc9-f65wh\" (UID: \"1f6f5a1c-fb09-47cb-adfb-5f286032e4da\") " pod="kube-system/coredns-7c65d6cfc9-f65wh"
Jul 6 23:47:10.296759 kubelet[2611]: I0706 23:47:10.296622 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/33280068-0e03-4d92-9d5a-87613a8a2bca-config-volume\") pod \"coredns-7c65d6cfc9-lrv42\" (UID: \"33280068-0e03-4d92-9d5a-87613a8a2bca\") " pod="kube-system/coredns-7c65d6cfc9-lrv42"
Jul 6 23:47:10.296759 kubelet[2611]: I0706 23:47:10.296640 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jdfj\" (UniqueName: \"kubernetes.io/projected/33280068-0e03-4d92-9d5a-87613a8a2bca-kube-api-access-6jdfj\") pod \"coredns-7c65d6cfc9-lrv42\" (UID: \"33280068-0e03-4d92-9d5a-87613a8a2bca\") " pod="kube-system/coredns-7c65d6cfc9-lrv42"
Jul 6 23:47:10.543258 containerd[1497]: time="2025-07-06T23:47:10.543131706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lrv42,Uid:33280068-0e03-4d92-9d5a-87613a8a2bca,Namespace:kube-system,Attempt:0,}"
Jul 6 23:47:10.553057 containerd[1497]: time="2025-07-06T23:47:10.553013884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f65wh,Uid:1f6f5a1c-fb09-47cb-adfb-5f286032e4da,Namespace:kube-system,Attempt:0,}"
Jul 6 23:47:10.965169 kubelet[2611]: I0706 23:47:10.965103 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-7jgm9" podStartSLOduration=5.978555473 podStartE2EDuration="15.964559102s" podCreationTimestamp="2025-07-06 23:46:55 +0000 UTC" firstStartedPulling="2025-07-06 23:46:56.071956643 +0000 UTC m=+6.302682977" lastFinishedPulling="2025-07-06 23:47:06.057960272 +0000 UTC m=+16.288686606" observedRunningTime="2025-07-06 23:47:10.963879501 +0000 UTC m=+21.194605915" watchObservedRunningTime="2025-07-06 23:47:10.964559102 +0000 UTC m=+21.195285476"
Jul 6 23:47:12.472913 systemd-networkd[1430]: cilium_host: Link UP
Jul 6 23:47:12.475423 systemd-networkd[1430]: cilium_net: Link UP
Jul 6 23:47:12.475741 systemd-networkd[1430]: cilium_net: Gained carrier
Jul 6 23:47:12.475861 systemd-networkd[1430]: cilium_host: Gained carrier
Jul 6 23:47:12.591258 systemd-networkd[1430]: cilium_vxlan: Link UP
Jul 6 23:47:12.591266 systemd-networkd[1430]: cilium_vxlan: Gained carrier
Jul 6 23:47:12.742883 systemd-networkd[1430]: cilium_host: Gained IPv6LL
Jul 6 23:47:12.910763 systemd-networkd[1430]: cilium_net: Gained IPv6LL
Jul 6 23:47:12.915596 kernel: NET: Registered PF_ALG protocol family
Jul 6 23:47:13.603439 systemd-networkd[1430]: lxc_health: Link UP
Jul 6 23:47:13.604134 systemd-networkd[1430]: lxc_health: Gained carrier
Jul 6 23:47:13.725751 systemd-networkd[1430]: cilium_vxlan: Gained IPv6LL
Jul 6 23:47:14.153204 systemd-networkd[1430]: lxced27cefc2bfc: Link UP
Jul 6 23:47:14.164649 kernel: eth0: renamed from tmp4178f
Jul 6 23:47:14.166718 systemd-networkd[1430]: lxccbbb78c0bf73: Link UP
Jul 6 23:47:14.177602 kernel: eth0: renamed from tmp021b4
Jul 6 23:47:14.178960 systemd-networkd[1430]: lxced27cefc2bfc: Gained carrier
Jul 6 23:47:14.179098 systemd-networkd[1430]: lxccbbb78c0bf73: Gained carrier
Jul 6 23:47:15.133756 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Jul 6 23:47:15.645983 systemd-networkd[1430]: lxccbbb78c0bf73: Gained IPv6LL
Jul 6 23:47:15.901835 systemd-networkd[1430]: lxced27cefc2bfc: Gained IPv6LL
Jul 6 23:47:17.948739 containerd[1497]: time="2025-07-06T23:47:17.948610295Z" level=info msg="connecting to shim 4178f0b83149c2f46921d4beef3b5f467f2aa24ea7f6b628cf1cecd9344f3f6b" address="unix:///run/containerd/s/a659138b2ff5e40a1b0675baf2fd6dd781a614d2919db414103d2297eac67049" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:47:17.949264 containerd[1497]: time="2025-07-06T23:47:17.949168976Z" level=info msg="connecting to shim 021b4fdb590abca6b200c294618a0f45bafa3edbe9e9b3785f6dcb91198a7762" address="unix:///run/containerd/s/ef71b557b0539310048a88b2270998a627ab3ca20f8e64042d205f44f0b56dea" namespace=k8s.io protocol=ttrpc version=3
Jul 6 23:47:17.974740 systemd[1]: Started cri-containerd-4178f0b83149c2f46921d4beef3b5f467f2aa24ea7f6b628cf1cecd9344f3f6b.scope - libcontainer container 4178f0b83149c2f46921d4beef3b5f467f2aa24ea7f6b628cf1cecd9344f3f6b.
Jul 6 23:47:17.978062 systemd[1]: Started cri-containerd-021b4fdb590abca6b200c294618a0f45bafa3edbe9e9b3785f6dcb91198a7762.scope - libcontainer container 021b4fdb590abca6b200c294618a0f45bafa3edbe9e9b3785f6dcb91198a7762.
Jul 6 23:47:17.990164 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:47:17.991082 systemd-resolved[1348]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:47:18.012912 containerd[1497]: time="2025-07-06T23:47:18.012870251Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-f65wh,Uid:1f6f5a1c-fb09-47cb-adfb-5f286032e4da,Namespace:kube-system,Attempt:0,} returns sandbox id \"021b4fdb590abca6b200c294618a0f45bafa3edbe9e9b3785f6dcb91198a7762\""
Jul 6 23:47:18.014342 containerd[1497]: time="2025-07-06T23:47:18.014314453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-lrv42,Uid:33280068-0e03-4d92-9d5a-87613a8a2bca,Namespace:kube-system,Attempt:0,} returns sandbox id \"4178f0b83149c2f46921d4beef3b5f467f2aa24ea7f6b628cf1cecd9344f3f6b\""
Jul 6 23:47:18.016566 containerd[1497]: time="2025-07-06T23:47:18.016020295Z" level=info msg="CreateContainer within sandbox \"021b4fdb590abca6b200c294618a0f45bafa3edbe9e9b3785f6dcb91198a7762\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:47:18.025876 containerd[1497]: time="2025-07-06T23:47:18.025823666Z" level=info msg="CreateContainer within sandbox \"4178f0b83149c2f46921d4beef3b5f467f2aa24ea7f6b628cf1cecd9344f3f6b\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 6 23:47:18.029529 containerd[1497]: time="2025-07-06T23:47:18.028785869Z" level=info msg="Container 401044625f444a8c08b27f56bcf6e4eae5502638c781c3fe8cead2b7f8bf044c: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:18.033441 containerd[1497]: time="2025-07-06T23:47:18.033400315Z" level=info msg="Container 1bb543f1e08805c646242f8a54238bbe04d04a5885386d19f0ab178b67f320dd: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:47:18.040563 containerd[1497]: time="2025-07-06T23:47:18.040529923Z" level=info msg="CreateContainer within sandbox \"021b4fdb590abca6b200c294618a0f45bafa3edbe9e9b3785f6dcb91198a7762\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"401044625f444a8c08b27f56bcf6e4eae5502638c781c3fe8cead2b7f8bf044c\""
Jul 6 23:47:18.041220 containerd[1497]: time="2025-07-06T23:47:18.041137603Z" level=info msg="StartContainer for \"401044625f444a8c08b27f56bcf6e4eae5502638c781c3fe8cead2b7f8bf044c\""
Jul 6 23:47:18.044726 containerd[1497]: time="2025-07-06T23:47:18.044678927Z" level=info msg="connecting to shim 401044625f444a8c08b27f56bcf6e4eae5502638c781c3fe8cead2b7f8bf044c" address="unix:///run/containerd/s/ef71b557b0539310048a88b2270998a627ab3ca20f8e64042d205f44f0b56dea" protocol=ttrpc version=3
Jul 6 23:47:18.049435 containerd[1497]: time="2025-07-06T23:47:18.049389693Z" level=info msg="CreateContainer within sandbox \"4178f0b83149c2f46921d4beef3b5f467f2aa24ea7f6b628cf1cecd9344f3f6b\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"1bb543f1e08805c646242f8a54238bbe04d04a5885386d19f0ab178b67f320dd\""
Jul 6 23:47:18.050015 containerd[1497]: time="2025-07-06T23:47:18.049985573Z" level=info msg="StartContainer for \"1bb543f1e08805c646242f8a54238bbe04d04a5885386d19f0ab178b67f320dd\""
Jul 6 23:47:18.059015 containerd[1497]: time="2025-07-06T23:47:18.058975623Z" level=info msg="connecting to shim 1bb543f1e08805c646242f8a54238bbe04d04a5885386d19f0ab178b67f320dd" address="unix:///run/containerd/s/a659138b2ff5e40a1b0675baf2fd6dd781a614d2919db414103d2297eac67049" protocol=ttrpc version=3
Jul 6 23:47:18.066766 systemd[1]: Started cri-containerd-401044625f444a8c08b27f56bcf6e4eae5502638c781c3fe8cead2b7f8bf044c.scope - libcontainer container 401044625f444a8c08b27f56bcf6e4eae5502638c781c3fe8cead2b7f8bf044c.
Jul 6 23:47:18.083786 systemd[1]: Started cri-containerd-1bb543f1e08805c646242f8a54238bbe04d04a5885386d19f0ab178b67f320dd.scope - libcontainer container 1bb543f1e08805c646242f8a54238bbe04d04a5885386d19f0ab178b67f320dd.
Jul 6 23:47:18.106770 containerd[1497]: time="2025-07-06T23:47:18.106696717Z" level=info msg="StartContainer for \"401044625f444a8c08b27f56bcf6e4eae5502638c781c3fe8cead2b7f8bf044c\" returns successfully"
Jul 6 23:47:18.148440 containerd[1497]: time="2025-07-06T23:47:18.147210763Z" level=info msg="StartContainer for \"1bb543f1e08805c646242f8a54238bbe04d04a5885386d19f0ab178b67f320dd\" returns successfully"
Jul 6 23:47:18.924636 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount791524546.mount: Deactivated successfully.
Jul 6 23:47:18.978180 kubelet[2611]: I0706 23:47:18.977944 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-lrv42" podStartSLOduration=23.97792742 podStartE2EDuration="23.97792742s" podCreationTimestamp="2025-07-06 23:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:47:18.9775155 +0000 UTC m=+29.208241874" watchObservedRunningTime="2025-07-06 23:47:18.97792742 +0000 UTC m=+29.208653794"
Jul 6 23:47:19.709071 systemd[1]: Started sshd@7-10.0.0.139:22-10.0.0.1:45510.service - OpenSSH per-connection server daemon (10.0.0.1:45510).
Jul 6 23:47:19.762478 sshd[3956]: Accepted publickey for core from 10.0.0.1 port 45510 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:19.764051 sshd-session[3956]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:19.769097 systemd-logind[1478]: New session 8 of user core.
Jul 6 23:47:19.787790 systemd[1]: Started session-8.scope - Session 8 of User core.
Jul 6 23:47:19.930913 sshd[3958]: Connection closed by 10.0.0.1 port 45510
Jul 6 23:47:19.931255 sshd-session[3956]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:19.935281 systemd-logind[1478]: Session 8 logged out. Waiting for processes to exit.
Jul 6 23:47:19.935530 systemd[1]: sshd@7-10.0.0.139:22-10.0.0.1:45510.service: Deactivated successfully.
Jul 6 23:47:19.937245 systemd[1]: session-8.scope: Deactivated successfully.
Jul 6 23:47:19.939052 systemd-logind[1478]: Removed session 8.
Jul 6 23:47:24.947291 systemd[1]: Started sshd@8-10.0.0.139:22-10.0.0.1:40624.service - OpenSSH per-connection server daemon (10.0.0.1:40624).
Jul 6 23:47:25.002061 sshd[3981]: Accepted publickey for core from 10.0.0.1 port 40624 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:25.003480 sshd-session[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:25.008176 systemd-logind[1478]: New session 9 of user core.
Jul 6 23:47:25.014740 systemd[1]: Started session-9.scope - Session 9 of User core.
Jul 6 23:47:25.144684 sshd[3983]: Connection closed by 10.0.0.1 port 40624
Jul 6 23:47:25.143727 sshd-session[3981]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:25.149153 systemd[1]: sshd@8-10.0.0.139:22-10.0.0.1:40624.service: Deactivated successfully.
Jul 6 23:47:25.151371 systemd[1]: session-9.scope: Deactivated successfully.
Jul 6 23:47:25.153366 systemd-logind[1478]: Session 9 logged out. Waiting for processes to exit.
Jul 6 23:47:25.154810 systemd-logind[1478]: Removed session 9.
Jul 6 23:47:30.159444 systemd[1]: Started sshd@9-10.0.0.139:22-10.0.0.1:40636.service - OpenSSH per-connection server daemon (10.0.0.1:40636).
Jul 6 23:47:30.225465 sshd[3999]: Accepted publickey for core from 10.0.0.1 port 40636 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:30.224566 sshd-session[3999]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:30.230507 systemd-logind[1478]: New session 10 of user core.
Jul 6 23:47:30.239805 systemd[1]: Started session-10.scope - Session 10 of User core.
Jul 6 23:47:30.362725 sshd[4001]: Connection closed by 10.0.0.1 port 40636
Jul 6 23:47:30.363538 sshd-session[3999]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:30.374082 systemd[1]: sshd@9-10.0.0.139:22-10.0.0.1:40636.service: Deactivated successfully.
Jul 6 23:47:30.376045 systemd[1]: session-10.scope: Deactivated successfully.
Jul 6 23:47:30.377070 systemd-logind[1478]: Session 10 logged out. Waiting for processes to exit.
Jul 6 23:47:30.380677 systemd[1]: Started sshd@10-10.0.0.139:22-10.0.0.1:40640.service - OpenSSH per-connection server daemon (10.0.0.1:40640).
Jul 6 23:47:30.381647 systemd-logind[1478]: Removed session 10.
Jul 6 23:47:30.447453 sshd[4016]: Accepted publickey for core from 10.0.0.1 port 40640 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:30.446409 sshd-session[4016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:30.459630 systemd-logind[1478]: New session 11 of user core.
Jul 6 23:47:30.465753 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 6 23:47:30.621022 sshd[4018]: Connection closed by 10.0.0.1 port 40640
Jul 6 23:47:30.620319 sshd-session[4016]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:30.632281 systemd[1]: sshd@10-10.0.0.139:22-10.0.0.1:40640.service: Deactivated successfully.
Jul 6 23:47:30.635173 systemd[1]: session-11.scope: Deactivated successfully.
Jul 6 23:47:30.637714 systemd-logind[1478]: Session 11 logged out. Waiting for processes to exit.
Jul 6 23:47:30.640283 systemd-logind[1478]: Removed session 11.
Jul 6 23:47:30.643408 systemd[1]: Started sshd@11-10.0.0.139:22-10.0.0.1:40654.service - OpenSSH per-connection server daemon (10.0.0.1:40654).
Jul 6 23:47:30.704308 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 40654 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:30.705935 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:30.711261 systemd-logind[1478]: New session 12 of user core.
Jul 6 23:47:30.721753 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 6 23:47:30.834811 sshd[4032]: Connection closed by 10.0.0.1 port 40654
Jul 6 23:47:30.835166 sshd-session[4030]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:30.838988 systemd[1]: sshd@11-10.0.0.139:22-10.0.0.1:40654.service: Deactivated successfully.
Jul 6 23:47:30.842115 systemd[1]: session-12.scope: Deactivated successfully.
Jul 6 23:47:30.842854 systemd-logind[1478]: Session 12 logged out. Waiting for processes to exit.
Jul 6 23:47:30.843989 systemd-logind[1478]: Removed session 12.
Jul 6 23:47:35.849703 systemd[1]: Started sshd@12-10.0.0.139:22-10.0.0.1:51364.service - OpenSSH per-connection server daemon (10.0.0.1:51364).
Jul 6 23:47:35.899850 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 51364 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:35.901219 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:35.906314 systemd-logind[1478]: New session 13 of user core.
Jul 6 23:47:35.920757 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 6 23:47:36.049443 sshd[4047]: Connection closed by 10.0.0.1 port 51364
Jul 6 23:47:36.049777 sshd-session[4045]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:36.053982 systemd[1]: sshd@12-10.0.0.139:22-10.0.0.1:51364.service: Deactivated successfully.
Jul 6 23:47:36.055711 systemd[1]: session-13.scope: Deactivated successfully.
Jul 6 23:47:36.056398 systemd-logind[1478]: Session 13 logged out. Waiting for processes to exit.
Jul 6 23:47:36.057718 systemd-logind[1478]: Removed session 13.
Jul 6 23:47:41.067064 systemd[1]: Started sshd@13-10.0.0.139:22-10.0.0.1:51376.service - OpenSSH per-connection server daemon (10.0.0.1:51376).
Jul 6 23:47:41.143807 sshd[4062]: Accepted publickey for core from 10.0.0.1 port 51376 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:41.144994 sshd-session[4062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:41.149339 systemd-logind[1478]: New session 14 of user core.
Jul 6 23:47:41.157757 systemd[1]: Started session-14.scope - Session 14 of User core.
Jul 6 23:47:41.271872 sshd[4064]: Connection closed by 10.0.0.1 port 51376
Jul 6 23:47:41.272809 sshd-session[4062]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:41.286867 systemd[1]: sshd@13-10.0.0.139:22-10.0.0.1:51376.service: Deactivated successfully.
Jul 6 23:47:41.290217 systemd[1]: session-14.scope: Deactivated successfully.
Jul 6 23:47:41.294694 systemd-logind[1478]: Session 14 logged out. Waiting for processes to exit.
Jul 6 23:47:41.299021 systemd[1]: Started sshd@14-10.0.0.139:22-10.0.0.1:51382.service - OpenSSH per-connection server daemon (10.0.0.1:51382).
Jul 6 23:47:41.299689 systemd-logind[1478]: Removed session 14.
Jul 6 23:47:41.346890 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 51382 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:41.348340 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:41.354906 systemd-logind[1478]: New session 15 of user core.
Jul 6 23:47:41.362736 systemd[1]: Started session-15.scope - Session 15 of User core.
Jul 6 23:47:41.588880 sshd[4079]: Connection closed by 10.0.0.1 port 51382
Jul 6 23:47:41.591534 sshd-session[4077]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:41.605445 systemd[1]: sshd@14-10.0.0.139:22-10.0.0.1:51382.service: Deactivated successfully.
Jul 6 23:47:41.607809 systemd[1]: session-15.scope: Deactivated successfully.
Jul 6 23:47:41.612212 systemd-logind[1478]: Session 15 logged out. Waiting for processes to exit.
Jul 6 23:47:41.614013 systemd[1]: Started sshd@15-10.0.0.139:22-10.0.0.1:51396.service - OpenSSH per-connection server daemon (10.0.0.1:51396).
Jul 6 23:47:41.617222 systemd-logind[1478]: Removed session 15.
Jul 6 23:47:41.672332 sshd[4090]: Accepted publickey for core from 10.0.0.1 port 51396 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:41.673855 sshd-session[4090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:41.678658 systemd-logind[1478]: New session 16 of user core.
Jul 6 23:47:41.685747 systemd[1]: Started session-16.scope - Session 16 of User core.
Jul 6 23:47:43.048028 sshd[4092]: Connection closed by 10.0.0.1 port 51396
Jul 6 23:47:43.048826 sshd-session[4090]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:43.060120 systemd[1]: sshd@15-10.0.0.139:22-10.0.0.1:51396.service: Deactivated successfully.
Jul 6 23:47:43.063677 systemd[1]: session-16.scope: Deactivated successfully.
Jul 6 23:47:43.064782 systemd-logind[1478]: Session 16 logged out. Waiting for processes to exit.
Jul 6 23:47:43.067519 systemd-logind[1478]: Removed session 16.
Jul 6 23:47:43.071026 systemd[1]: Started sshd@16-10.0.0.139:22-10.0.0.1:36108.service - OpenSSH per-connection server daemon (10.0.0.1:36108).
Jul 6 23:47:43.131900 sshd[4111]: Accepted publickey for core from 10.0.0.1 port 36108 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:43.133301 sshd-session[4111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:43.137479 systemd-logind[1478]: New session 17 of user core.
Jul 6 23:47:43.150712 systemd[1]: Started session-17.scope - Session 17 of User core.
Jul 6 23:47:43.377015 sshd[4113]: Connection closed by 10.0.0.1 port 36108
Jul 6 23:47:43.377807 sshd-session[4111]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:43.392723 systemd[1]: sshd@16-10.0.0.139:22-10.0.0.1:36108.service: Deactivated successfully.
Jul 6 23:47:43.395285 systemd[1]: session-17.scope: Deactivated successfully.
Jul 6 23:47:43.396488 systemd-logind[1478]: Session 17 logged out. Waiting for processes to exit.
Jul 6 23:47:43.399168 systemd-logind[1478]: Removed session 17.
Jul 6 23:47:43.402354 systemd[1]: Started sshd@17-10.0.0.139:22-10.0.0.1:36122.service - OpenSSH per-connection server daemon (10.0.0.1:36122).
Jul 6 23:47:43.462961 sshd[4125]: Accepted publickey for core from 10.0.0.1 port 36122 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc
Jul 6 23:47:43.464473 sshd-session[4125]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:47:43.469704 systemd-logind[1478]: New session 18 of user core.
Jul 6 23:47:43.478741 systemd[1]: Started session-18.scope - Session 18 of User core.
Jul 6 23:47:43.594427 sshd[4127]: Connection closed by 10.0.0.1 port 36122
Jul 6 23:47:43.594948 sshd-session[4125]: pam_unix(sshd:session): session closed for user core
Jul 6 23:47:43.598721 systemd[1]: sshd@17-10.0.0.139:22-10.0.0.1:36122.service: Deactivated successfully.
Jul 6 23:47:43.601518 systemd[1]: session-18.scope: Deactivated successfully.
Jul 6 23:47:43.604390 systemd-logind[1478]: Session 18 logged out. Waiting for processes to exit.
Jul 6 23:47:43.605972 systemd-logind[1478]: Removed session 18.
Jul 6 23:47:48.608391 systemd[1]: Started sshd@18-10.0.0.139:22-10.0.0.1:36134.service - OpenSSH per-connection server daemon (10.0.0.1:36134).
Jul 6 23:47:48.661986 sshd[4144]: Accepted publickey for core from 10.0.0.1 port 36134 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:47:48.664141 sshd-session[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:48.671160 systemd-logind[1478]: New session 19 of user core. Jul 6 23:47:48.681824 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 6 23:47:48.802477 sshd[4146]: Connection closed by 10.0.0.1 port 36134 Jul 6 23:47:48.803104 sshd-session[4144]: pam_unix(sshd:session): session closed for user core Jul 6 23:47:48.806841 systemd[1]: sshd@18-10.0.0.139:22-10.0.0.1:36134.service: Deactivated successfully. Jul 6 23:47:48.808608 systemd[1]: session-19.scope: Deactivated successfully. Jul 6 23:47:48.810004 systemd-logind[1478]: Session 19 logged out. Waiting for processes to exit. Jul 6 23:47:48.812905 systemd-logind[1478]: Removed session 19. Jul 6 23:47:53.817791 systemd[1]: Started sshd@19-10.0.0.139:22-10.0.0.1:57192.service - OpenSSH per-connection server daemon (10.0.0.1:57192). Jul 6 23:47:53.876616 sshd[4161]: Accepted publickey for core from 10.0.0.1 port 57192 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:47:53.881724 sshd-session[4161]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:53.885700 systemd-logind[1478]: New session 20 of user core. Jul 6 23:47:53.895725 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 6 23:47:54.010766 sshd[4163]: Connection closed by 10.0.0.1 port 57192 Jul 6 23:47:54.011110 sshd-session[4161]: pam_unix(sshd:session): session closed for user core Jul 6 23:47:54.016775 systemd[1]: sshd@19-10.0.0.139:22-10.0.0.1:57192.service: Deactivated successfully. Jul 6 23:47:54.018545 systemd[1]: session-20.scope: Deactivated successfully. Jul 6 23:47:54.021799 systemd-logind[1478]: Session 20 logged out. Waiting for processes to exit. 
Jul 6 23:47:54.023426 systemd-logind[1478]: Removed session 20. Jul 6 23:47:59.027139 systemd[1]: Started sshd@20-10.0.0.139:22-10.0.0.1:57196.service - OpenSSH per-connection server daemon (10.0.0.1:57196). Jul 6 23:47:59.075129 sshd[4180]: Accepted publickey for core from 10.0.0.1 port 57196 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:47:59.076444 sshd-session[4180]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:59.080790 systemd-logind[1478]: New session 21 of user core. Jul 6 23:47:59.091755 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 6 23:47:59.200908 sshd[4182]: Connection closed by 10.0.0.1 port 57196 Jul 6 23:47:59.201352 sshd-session[4180]: pam_unix(sshd:session): session closed for user core Jul 6 23:47:59.209038 systemd[1]: sshd@20-10.0.0.139:22-10.0.0.1:57196.service: Deactivated successfully. Jul 6 23:47:59.211090 systemd[1]: session-21.scope: Deactivated successfully. Jul 6 23:47:59.213124 systemd-logind[1478]: Session 21 logged out. Waiting for processes to exit. Jul 6 23:47:59.215542 systemd[1]: Started sshd@21-10.0.0.139:22-10.0.0.1:57202.service - OpenSSH per-connection server daemon (10.0.0.1:57202). Jul 6 23:47:59.216752 systemd-logind[1478]: Removed session 21. Jul 6 23:47:59.266174 sshd[4196]: Accepted publickey for core from 10.0.0.1 port 57202 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:47:59.267483 sshd-session[4196]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:47:59.272524 systemd-logind[1478]: New session 22 of user core. Jul 6 23:47:59.278723 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 6 23:48:00.814680 kubelet[2611]: I0706 23:48:00.814266 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-f65wh" podStartSLOduration=65.813774472 podStartE2EDuration="1m5.813774472s" podCreationTimestamp="2025-07-06 23:46:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:47:19.009650535 +0000 UTC m=+29.240376949" watchObservedRunningTime="2025-07-06 23:48:00.813774472 +0000 UTC m=+71.044500806" Jul 6 23:48:00.831924 containerd[1497]: time="2025-07-06T23:48:00.831868121Z" level=info msg="StopContainer for \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" with timeout 30 (s)" Jul 6 23:48:00.833904 containerd[1497]: time="2025-07-06T23:48:00.833869376Z" level=info msg="Stop container \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" with signal terminated" Jul 6 23:48:00.844876 systemd[1]: cri-containerd-584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d.scope: Deactivated successfully. 
Jul 6 23:48:00.847259 containerd[1497]: time="2025-07-06T23:48:00.847221311Z" level=info msg="received exit event container_id:\"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" id:\"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" pid:3204 exited_at:{seconds:1751845680 nanos:846904229}" Jul 6 23:48:00.847567 containerd[1497]: time="2025-07-06T23:48:00.847407073Z" level=info msg="TaskExit event in podsandbox handler container_id:\"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" id:\"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" pid:3204 exited_at:{seconds:1751845680 nanos:846904229}" Jul 6 23:48:00.856458 containerd[1497]: time="2025-07-06T23:48:00.856416457Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 6 23:48:00.861990 containerd[1497]: time="2025-07-06T23:48:00.861947737Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" id:\"56f7bd9553edcc46ebaf56b563798790d0c779fa41dee4eee0af6a0850099eb2\" pid:4226 exited_at:{seconds:1751845680 nanos:861559614}" Jul 6 23:48:00.863816 containerd[1497]: time="2025-07-06T23:48:00.863773390Z" level=info msg="StopContainer for \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" with timeout 2 (s)" Jul 6 23:48:00.864122 containerd[1497]: time="2025-07-06T23:48:00.864101672Z" level=info msg="Stop container \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" with signal terminated" Jul 6 23:48:00.870519 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d-rootfs.mount: Deactivated successfully. 
Jul 6 23:48:00.873129 systemd-networkd[1430]: lxc_health: Link DOWN Jul 6 23:48:00.873135 systemd-networkd[1430]: lxc_health: Lost carrier Jul 6 23:48:00.884598 containerd[1497]: time="2025-07-06T23:48:00.884545898Z" level=info msg="StopContainer for \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" returns successfully" Jul 6 23:48:00.886991 containerd[1497]: time="2025-07-06T23:48:00.886950395Z" level=info msg="StopPodSandbox for \"ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be\"" Jul 6 23:48:00.887780 systemd[1]: cri-containerd-24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023.scope: Deactivated successfully. Jul 6 23:48:00.888225 systemd[1]: cri-containerd-24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023.scope: Consumed 6.856s CPU time, 123M memory peak, 160K read from disk, 12.9M written to disk. Jul 6 23:48:00.888460 containerd[1497]: time="2025-07-06T23:48:00.888374886Z" level=info msg="received exit event container_id:\"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" id:\"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" pid:3277 exited_at:{seconds:1751845680 nanos:888144924}" Jul 6 23:48:00.888657 containerd[1497]: time="2025-07-06T23:48:00.888627247Z" level=info msg="TaskExit event in podsandbox handler container_id:\"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" id:\"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" pid:3277 exited_at:{seconds:1751845680 nanos:888144924}" Jul 6 23:48:00.896631 containerd[1497]: time="2025-07-06T23:48:00.896562784Z" level=info msg="Container to stop \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:48:00.903047 systemd[1]: cri-containerd-ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be.scope: Deactivated successfully. 
Jul 6 23:48:00.905581 containerd[1497]: time="2025-07-06T23:48:00.904453881Z" level=info msg="TaskExit event in podsandbox handler container_id:\"ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be\" id:\"ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be\" pid:2850 exit_status:137 exited_at:{seconds:1751845680 nanos:904117638}" Jul 6 23:48:00.909318 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023-rootfs.mount: Deactivated successfully. Jul 6 23:48:00.929045 containerd[1497]: time="2025-07-06T23:48:00.928934896Z" level=info msg="StopContainer for \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" returns successfully" Jul 6 23:48:00.929404 containerd[1497]: time="2025-07-06T23:48:00.929377979Z" level=info msg="StopPodSandbox for \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\"" Jul 6 23:48:00.929475 containerd[1497]: time="2025-07-06T23:48:00.929458060Z" level=info msg="Container to stop \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:48:00.929503 containerd[1497]: time="2025-07-06T23:48:00.929474740Z" level=info msg="Container to stop \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:48:00.929503 containerd[1497]: time="2025-07-06T23:48:00.929484940Z" level=info msg="Container to stop \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:48:00.929503 containerd[1497]: time="2025-07-06T23:48:00.929493140Z" level=info msg="Container to stop \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:48:00.929503 containerd[1497]: 
time="2025-07-06T23:48:00.929500260Z" level=info msg="Container to stop \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 6 23:48:00.937225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be-rootfs.mount: Deactivated successfully. Jul 6 23:48:00.938477 systemd[1]: cri-containerd-aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be.scope: Deactivated successfully. Jul 6 23:48:00.945217 containerd[1497]: time="2025-07-06T23:48:00.945142172Z" level=info msg="shim disconnected" id=ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be namespace=k8s.io Jul 6 23:48:00.958594 containerd[1497]: time="2025-07-06T23:48:00.945172052Z" level=warning msg="cleaning up after shim disconnected" id=ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be namespace=k8s.io Jul 6 23:48:00.958594 containerd[1497]: time="2025-07-06T23:48:00.958583828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:48:00.961833 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be-rootfs.mount: Deactivated successfully. 
Jul 6 23:48:00.965042 containerd[1497]: time="2025-07-06T23:48:00.965012914Z" level=info msg="shim disconnected" id=aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be namespace=k8s.io Jul 6 23:48:00.965268 containerd[1497]: time="2025-07-06T23:48:00.965231956Z" level=warning msg="cleaning up after shim disconnected" id=aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be namespace=k8s.io Jul 6 23:48:00.965351 containerd[1497]: time="2025-07-06T23:48:00.965336796Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:48:00.980524 containerd[1497]: time="2025-07-06T23:48:00.980479505Z" level=info msg="received exit event sandbox_id:\"ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be\" exit_status:137 exited_at:{seconds:1751845680 nanos:904117638}" Jul 6 23:48:00.981044 containerd[1497]: time="2025-07-06T23:48:00.980521585Z" level=info msg="TaskExit event in podsandbox handler container_id:\"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" id:\"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" pid:2769 exit_status:137 exited_at:{seconds:1751845680 nanos:939111529}" Jul 6 23:48:00.981202 containerd[1497]: time="2025-07-06T23:48:00.981182190Z" level=info msg="received exit event sandbox_id:\"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" exit_status:137 exited_at:{seconds:1751845680 nanos:939111529}" Jul 6 23:48:00.982019 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be-shm.mount: Deactivated successfully. 
Jul 6 23:48:00.983552 containerd[1497]: time="2025-07-06T23:48:00.983521286Z" level=info msg="TearDown network for sandbox \"ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be\" successfully" Jul 6 23:48:00.983552 containerd[1497]: time="2025-07-06T23:48:00.983549487Z" level=info msg="StopPodSandbox for \"ba33e080559445465e95b62e9dcd63f4f9b70d58738b9aa91aa198f02c7186be\" returns successfully" Jul 6 23:48:00.984090 containerd[1497]: time="2025-07-06T23:48:00.984045850Z" level=info msg="TearDown network for sandbox \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" successfully" Jul 6 23:48:00.984090 containerd[1497]: time="2025-07-06T23:48:00.984077170Z" level=info msg="StopPodSandbox for \"aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be\" returns successfully" Jul 6 23:48:01.025523 kubelet[2611]: I0706 23:48:01.025480 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-lib-modules\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025679 kubelet[2611]: I0706 23:48:01.025552 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-config-path\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025679 kubelet[2611]: I0706 23:48:01.025584 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv7ws\" (UniqueName: \"kubernetes.io/projected/dc42bb6a-438c-41d0-b222-b744e87b8330-kube-api-access-bv7ws\") pod \"dc42bb6a-438c-41d0-b222-b744e87b8330\" (UID: \"dc42bb6a-438c-41d0-b222-b744e87b8330\") " Jul 6 23:48:01.025679 kubelet[2611]: I0706 23:48:01.025600 2611 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-host-proc-sys-kernel\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025679 kubelet[2611]: I0706 23:48:01.025617 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-bpf-maps\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025679 kubelet[2611]: I0706 23:48:01.025643 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c6e390b-ddbf-4568-a42d-13eabdc242e8-clustermesh-secrets\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025679 kubelet[2611]: I0706 23:48:01.025659 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-xtables-lock\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025815 kubelet[2611]: I0706 23:48:01.025678 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c6e390b-ddbf-4568-a42d-13eabdc242e8-hubble-tls\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025815 kubelet[2611]: I0706 23:48:01.025745 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-etc-cni-netd\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") 
" Jul 6 23:48:01.025815 kubelet[2611]: I0706 23:48:01.025779 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-cgroup\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025815 kubelet[2611]: I0706 23:48:01.025794 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-hostproc\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025815 kubelet[2611]: I0706 23:48:01.025811 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-run\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025912 kubelet[2611]: I0706 23:48:01.025826 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-host-proc-sys-net\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025912 kubelet[2611]: I0706 23:48:01.025843 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xhrpl\" (UniqueName: \"kubernetes.io/projected/7c6e390b-ddbf-4568-a42d-13eabdc242e8-kube-api-access-xhrpl\") pod \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025912 kubelet[2611]: I0706 23:48:01.025860 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cni-path\") pod 
\"7c6e390b-ddbf-4568-a42d-13eabdc242e8\" (UID: \"7c6e390b-ddbf-4568-a42d-13eabdc242e8\") " Jul 6 23:48:01.025912 kubelet[2611]: I0706 23:48:01.025876 2611 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc42bb6a-438c-41d0-b222-b744e87b8330-cilium-config-path\") pod \"dc42bb6a-438c-41d0-b222-b744e87b8330\" (UID: \"dc42bb6a-438c-41d0-b222-b744e87b8330\") " Jul 6 23:48:01.028828 kubelet[2611]: I0706 23:48:01.028780 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.028903 kubelet[2611]: I0706 23:48:01.028851 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.029430 kubelet[2611]: I0706 23:48:01.029284 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc42bb6a-438c-41d0-b222-b744e87b8330-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc42bb6a-438c-41d0-b222-b744e87b8330" (UID: "dc42bb6a-438c-41d0-b222-b744e87b8330"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:48:01.030306 kubelet[2611]: I0706 23:48:01.030251 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.031100 kubelet[2611]: I0706 23:48:01.031018 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-hostproc" (OuterVolumeSpecName: "hostproc") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.031100 kubelet[2611]: I0706 23:48:01.031051 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.031536 kubelet[2611]: I0706 23:48:01.031500 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc42bb6a-438c-41d0-b222-b744e87b8330-kube-api-access-bv7ws" (OuterVolumeSpecName: "kube-api-access-bv7ws") pod "dc42bb6a-438c-41d0-b222-b744e87b8330" (UID: "dc42bb6a-438c-41d0-b222-b744e87b8330"). InnerVolumeSpecName "kube-api-access-bv7ws". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:48:01.031613 kubelet[2611]: I0706 23:48:01.031547 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.031613 kubelet[2611]: I0706 23:48:01.031567 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.031672 kubelet[2611]: I0706 23:48:01.031613 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.031857 kubelet[2611]: I0706 23:48:01.031824 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 6 23:48:01.031902 kubelet[2611]: I0706 23:48:01.031872 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.031932 kubelet[2611]: I0706 23:48:01.031905 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cni-path" (OuterVolumeSpecName: "cni-path") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 6 23:48:01.032022 kubelet[2611]: I0706 23:48:01.032001 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c6e390b-ddbf-4568-a42d-13eabdc242e8-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 6 23:48:01.032866 kubelet[2611]: I0706 23:48:01.032836 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6e390b-ddbf-4568-a42d-13eabdc242e8-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:48:01.033489 kubelet[2611]: I0706 23:48:01.033456 2611 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c6e390b-ddbf-4568-a42d-13eabdc242e8-kube-api-access-xhrpl" (OuterVolumeSpecName: "kube-api-access-xhrpl") pod "7c6e390b-ddbf-4568-a42d-13eabdc242e8" (UID: "7c6e390b-ddbf-4568-a42d-13eabdc242e8"). InnerVolumeSpecName "kube-api-access-xhrpl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 6 23:48:01.045733 kubelet[2611]: I0706 23:48:01.045635 2611 scope.go:117] "RemoveContainer" containerID="584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d" Jul 6 23:48:01.050439 containerd[1497]: time="2025-07-06T23:48:01.050384515Z" level=info msg="RemoveContainer for \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\"" Jul 6 23:48:01.053098 systemd[1]: Removed slice kubepods-besteffort-poddc42bb6a_438c_41d0_b222_b744e87b8330.slice - libcontainer container kubepods-besteffort-poddc42bb6a_438c_41d0_b222_b744e87b8330.slice. Jul 6 23:48:01.058601 systemd[1]: Removed slice kubepods-burstable-pod7c6e390b_ddbf_4568_a42d_13eabdc242e8.slice - libcontainer container kubepods-burstable-pod7c6e390b_ddbf_4568_a42d_13eabdc242e8.slice. Jul 6 23:48:01.058697 systemd[1]: kubepods-burstable-pod7c6e390b_ddbf_4568_a42d_13eabdc242e8.slice: Consumed 7.007s CPU time, 123.3M memory peak, 172K read from disk, 12.9M written to disk. 
Jul 6 23:48:01.071510 containerd[1497]: time="2025-07-06T23:48:01.071231260Z" level=info msg="RemoveContainer for \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" returns successfully" Jul 6 23:48:01.075020 kubelet[2611]: I0706 23:48:01.074991 2611 scope.go:117] "RemoveContainer" containerID="584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d" Jul 6 23:48:01.075390 containerd[1497]: time="2025-07-06T23:48:01.075331249Z" level=error msg="ContainerStatus for \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\": not found" Jul 6 23:48:01.075586 kubelet[2611]: E0706 23:48:01.075556 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\": not found" containerID="584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d" Jul 6 23:48:01.075711 kubelet[2611]: I0706 23:48:01.075593 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d"} err="failed to get container status \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\": rpc error: code = NotFound desc = an error occurred when try to find container \"584921fbef9b2b7211cc87802413c8fcc4abbd3c0bf0b25a0938aa882e35c75d\": not found" Jul 6 23:48:01.075757 kubelet[2611]: I0706 23:48:01.075713 2611 scope.go:117] "RemoveContainer" containerID="24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023" Jul 6 23:48:01.077472 containerd[1497]: time="2025-07-06T23:48:01.077448063Z" level=info msg="RemoveContainer for \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\"" Jul 6 23:48:01.082843 
containerd[1497]: time="2025-07-06T23:48:01.082791141Z" level=info msg="RemoveContainer for \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" returns successfully" Jul 6 23:48:01.083165 kubelet[2611]: I0706 23:48:01.083135 2611 scope.go:117] "RemoveContainer" containerID="da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d" Jul 6 23:48:01.085828 containerd[1497]: time="2025-07-06T23:48:01.085785401Z" level=info msg="RemoveContainer for \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\"" Jul 6 23:48:01.089472 containerd[1497]: time="2025-07-06T23:48:01.089434107Z" level=info msg="RemoveContainer for \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\" returns successfully" Jul 6 23:48:01.089735 kubelet[2611]: I0706 23:48:01.089705 2611 scope.go:117] "RemoveContainer" containerID="436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727" Jul 6 23:48:01.092444 containerd[1497]: time="2025-07-06T23:48:01.092413368Z" level=info msg="RemoveContainer for \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\"" Jul 6 23:48:01.098188 containerd[1497]: time="2025-07-06T23:48:01.098142127Z" level=info msg="RemoveContainer for \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\" returns successfully" Jul 6 23:48:01.098557 kubelet[2611]: I0706 23:48:01.098522 2611 scope.go:117] "RemoveContainer" containerID="7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899" Jul 6 23:48:01.100620 containerd[1497]: time="2025-07-06T23:48:01.100463704Z" level=info msg="RemoveContainer for \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\"" Jul 6 23:48:01.103534 containerd[1497]: time="2025-07-06T23:48:01.103495285Z" level=info msg="RemoveContainer for \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\" returns successfully" Jul 6 23:48:01.103757 kubelet[2611]: I0706 23:48:01.103731 2611 scope.go:117] "RemoveContainer" 
containerID="3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7" Jul 6 23:48:01.105317 containerd[1497]: time="2025-07-06T23:48:01.105274337Z" level=info msg="RemoveContainer for \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\"" Jul 6 23:48:01.108634 containerd[1497]: time="2025-07-06T23:48:01.108604200Z" level=info msg="RemoveContainer for \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\" returns successfully" Jul 6 23:48:01.108767 kubelet[2611]: I0706 23:48:01.108742 2611 scope.go:117] "RemoveContainer" containerID="24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023" Jul 6 23:48:01.109232 containerd[1497]: time="2025-07-06T23:48:01.109186684Z" level=error msg="ContainerStatus for \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\": not found" Jul 6 23:48:01.109405 kubelet[2611]: E0706 23:48:01.109319 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\": not found" containerID="24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023" Jul 6 23:48:01.109405 kubelet[2611]: I0706 23:48:01.109348 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023"} err="failed to get container status \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\": rpc error: code = NotFound desc = an error occurred when try to find container \"24a5edaf68a0c9308c132a3517c06ab85fcebaaf47c179a151fb12422f575023\": not found" Jul 6 23:48:01.109405 kubelet[2611]: I0706 23:48:01.109369 2611 scope.go:117] "RemoveContainer" 
containerID="da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d" Jul 6 23:48:01.110362 kubelet[2611]: E0706 23:48:01.109626 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\": not found" containerID="da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d" Jul 6 23:48:01.110362 kubelet[2611]: I0706 23:48:01.109644 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d"} err="failed to get container status \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\": rpc error: code = NotFound desc = an error occurred when try to find container \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\": not found" Jul 6 23:48:01.110362 kubelet[2611]: I0706 23:48:01.109681 2611 scope.go:117] "RemoveContainer" containerID="436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727" Jul 6 23:48:01.110473 containerd[1497]: time="2025-07-06T23:48:01.109516967Z" level=error msg="ContainerStatus for \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"da1a78699cdcdc64d133b864fa8824c2d9399979530c73bea54720f95fe3492d\": not found" Jul 6 23:48:01.111634 containerd[1497]: time="2025-07-06T23:48:01.111510260Z" level=error msg="ContainerStatus for \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\": not found" Jul 6 23:48:01.111696 kubelet[2611]: E0706 23:48:01.111666 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\": not found" containerID="436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727" Jul 6 23:48:01.111696 kubelet[2611]: I0706 23:48:01.111687 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727"} err="failed to get container status \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\": rpc error: code = NotFound desc = an error occurred when try to find container \"436bbb776f1f29116f30b8390119db62cae986a8c6d520cddce8f0fca5b12727\": not found" Jul 6 23:48:01.111696 kubelet[2611]: I0706 23:48:01.111702 2611 scope.go:117] "RemoveContainer" containerID="7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899" Jul 6 23:48:01.113970 containerd[1497]: time="2025-07-06T23:48:01.113041471Z" level=error msg="ContainerStatus for \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\": not found" Jul 6 23:48:01.114098 kubelet[2611]: E0706 23:48:01.113755 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\": not found" containerID="7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899" Jul 6 23:48:01.114098 kubelet[2611]: I0706 23:48:01.113785 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899"} err="failed to get container status \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\": rpc error: code = NotFound desc = an error occurred when try 
to find container \"7a87fc22fc47ef3c1847c07e832e4030b349df78e6b889d128f618abae382899\": not found" Jul 6 23:48:01.114098 kubelet[2611]: I0706 23:48:01.113800 2611 scope.go:117] "RemoveContainer" containerID="3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7" Jul 6 23:48:01.114237 containerd[1497]: time="2025-07-06T23:48:01.114193079Z" level=error msg="ContainerStatus for \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\": not found" Jul 6 23:48:01.117318 kubelet[2611]: E0706 23:48:01.117291 2611 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\": not found" containerID="3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7" Jul 6 23:48:01.117388 kubelet[2611]: I0706 23:48:01.117323 2611 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7"} err="failed to get container status \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\": rpc error: code = NotFound desc = an error occurred when try to find container \"3b391ede02c8b9fc6e037deb19e099fb4386feda7146f597ec29729803f6e7e7\": not found" Jul 6 23:48:01.126972 kubelet[2611]: I0706 23:48:01.126947 2611 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.126972 kubelet[2611]: I0706 23:48:01.126975 2611 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xhrpl\" (UniqueName: 
\"kubernetes.io/projected/7c6e390b-ddbf-4568-a42d-13eabdc242e8-kube-api-access-xhrpl\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127051 kubelet[2611]: I0706 23:48:01.126987 2611 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127051 kubelet[2611]: I0706 23:48:01.126996 2611 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc42bb6a-438c-41d0-b222-b744e87b8330-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127051 kubelet[2611]: I0706 23:48:01.127005 2611 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127051 kubelet[2611]: I0706 23:48:01.127012 2611 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127051 kubelet[2611]: I0706 23:48:01.127020 2611 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv7ws\" (UniqueName: \"kubernetes.io/projected/dc42bb6a-438c-41d0-b222-b744e87b8330-kube-api-access-bv7ws\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127051 kubelet[2611]: I0706 23:48:01.127028 2611 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127051 kubelet[2611]: I0706 23:48:01.127037 2611 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-bpf-maps\") on node 
\"localhost\" DevicePath \"\"" Jul 6 23:48:01.127051 kubelet[2611]: I0706 23:48:01.127045 2611 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/7c6e390b-ddbf-4568-a42d-13eabdc242e8-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127217 kubelet[2611]: I0706 23:48:01.127052 2611 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127217 kubelet[2611]: I0706 23:48:01.127059 2611 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/7c6e390b-ddbf-4568-a42d-13eabdc242e8-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127217 kubelet[2611]: I0706 23:48:01.127070 2611 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127217 kubelet[2611]: I0706 23:48:01.127078 2611 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127217 kubelet[2611]: I0706 23:48:01.127086 2611 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.127217 kubelet[2611]: I0706 23:48:01.127093 2611 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/7c6e390b-ddbf-4568-a42d-13eabdc242e8-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 6 23:48:01.852533 kubelet[2611]: I0706 23:48:01.852475 2611 kubelet_volumes.go:163] "Cleaned up orphaned 
pod volumes dir" podUID="7c6e390b-ddbf-4568-a42d-13eabdc242e8" path="/var/lib/kubelet/pods/7c6e390b-ddbf-4568-a42d-13eabdc242e8/volumes" Jul 6 23:48:01.853037 kubelet[2611]: I0706 23:48:01.853014 2611 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc42bb6a-438c-41d0-b222-b744e87b8330" path="/var/lib/kubelet/pods/dc42bb6a-438c-41d0-b222-b744e87b8330/volumes" Jul 6 23:48:01.869365 systemd[1]: var-lib-kubelet-pods-dc42bb6a\x2d438c\x2d41d0\x2db222\x2db744e87b8330-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbv7ws.mount: Deactivated successfully. Jul 6 23:48:01.869465 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa0a92e092ed4c6dadb7b57af3e01fd1540044aec7859ccdbb30e28f790006be-shm.mount: Deactivated successfully. Jul 6 23:48:01.869517 systemd[1]: var-lib-kubelet-pods-7c6e390b\x2dddbf\x2d4568\x2da42d\x2d13eabdc242e8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxhrpl.mount: Deactivated successfully. Jul 6 23:48:01.869582 systemd[1]: var-lib-kubelet-pods-7c6e390b\x2dddbf\x2d4568\x2da42d\x2d13eabdc242e8-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 6 23:48:01.869646 systemd[1]: var-lib-kubelet-pods-7c6e390b\x2dddbf\x2d4568\x2da42d\x2d13eabdc242e8-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 6 23:48:02.779599 sshd[4198]: Connection closed by 10.0.0.1 port 57202 Jul 6 23:48:02.780074 sshd-session[4196]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:02.789682 systemd[1]: sshd@21-10.0.0.139:22-10.0.0.1:57202.service: Deactivated successfully. Jul 6 23:48:02.791358 systemd[1]: session-22.scope: Deactivated successfully. Jul 6 23:48:02.792669 systemd-logind[1478]: Session 22 logged out. Waiting for processes to exit. Jul 6 23:48:02.794306 systemd[1]: Started sshd@22-10.0.0.139:22-10.0.0.1:52844.service - OpenSSH per-connection server daemon (10.0.0.1:52844). 
Jul 6 23:48:02.795552 systemd-logind[1478]: Removed session 22. Jul 6 23:48:02.859748 sshd[4353]: Accepted publickey for core from 10.0.0.1 port 52844 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:48:02.861310 sshd-session[4353]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:02.866430 systemd-logind[1478]: New session 23 of user core. Jul 6 23:48:02.874722 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 6 23:48:03.949897 sshd[4355]: Connection closed by 10.0.0.1 port 52844 Jul 6 23:48:03.950220 sshd-session[4353]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:03.965002 systemd[1]: sshd@22-10.0.0.139:22-10.0.0.1:52844.service: Deactivated successfully. Jul 6 23:48:03.967940 systemd[1]: session-23.scope: Deactivated successfully. Jul 6 23:48:03.968595 systemd-logind[1478]: Session 23 logged out. Waiting for processes to exit. Jul 6 23:48:03.972045 systemd[1]: Started sshd@23-10.0.0.139:22-10.0.0.1:52846.service - OpenSSH per-connection server daemon (10.0.0.1:52846). Jul 6 23:48:03.972692 systemd-logind[1478]: Removed session 23. 
Jul 6 23:48:04.005622 kubelet[2611]: E0706 23:48:04.005377 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c6e390b-ddbf-4568-a42d-13eabdc242e8" containerName="apply-sysctl-overwrites" Jul 6 23:48:04.005622 kubelet[2611]: E0706 23:48:04.005404 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c6e390b-ddbf-4568-a42d-13eabdc242e8" containerName="mount-bpf-fs" Jul 6 23:48:04.005622 kubelet[2611]: E0706 23:48:04.005411 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c6e390b-ddbf-4568-a42d-13eabdc242e8" containerName="clean-cilium-state" Jul 6 23:48:04.005622 kubelet[2611]: E0706 23:48:04.005418 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c6e390b-ddbf-4568-a42d-13eabdc242e8" containerName="mount-cgroup" Jul 6 23:48:04.005622 kubelet[2611]: E0706 23:48:04.005423 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc42bb6a-438c-41d0-b222-b744e87b8330" containerName="cilium-operator" Jul 6 23:48:04.005622 kubelet[2611]: E0706 23:48:04.005428 2611 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="7c6e390b-ddbf-4568-a42d-13eabdc242e8" containerName="cilium-agent" Jul 6 23:48:04.005622 kubelet[2611]: I0706 23:48:04.005448 2611 memory_manager.go:354] "RemoveStaleState removing state" podUID="7c6e390b-ddbf-4568-a42d-13eabdc242e8" containerName="cilium-agent" Jul 6 23:48:04.005622 kubelet[2611]: I0706 23:48:04.005456 2611 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc42bb6a-438c-41d0-b222-b744e87b8330" containerName="cilium-operator" Jul 6 23:48:04.020113 systemd[1]: Created slice kubepods-burstable-podead9359d_9ea8_4a05_b3a7_20e4332e20fa.slice - libcontainer container kubepods-burstable-podead9359d_9ea8_4a05_b3a7_20e4332e20fa.slice. 
Jul 6 23:48:04.044855 kubelet[2611]: I0706 23:48:04.044811 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-host-proc-sys-net\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.044855 kubelet[2611]: I0706 23:48:04.044856 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-cilium-run\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045017 kubelet[2611]: I0706 23:48:04.044879 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-lib-modules\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045017 kubelet[2611]: I0706 23:48:04.044897 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-cilium-cgroup\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045017 kubelet[2611]: I0706 23:48:04.044914 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-cni-path\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045017 kubelet[2611]: I0706 23:48:04.044931 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-clustermesh-secrets\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045017 kubelet[2611]: I0706 23:48:04.044947 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-hubble-tls\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045017 kubelet[2611]: I0706 23:48:04.044963 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-etc-cni-netd\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045141 kubelet[2611]: I0706 23:48:04.044979 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-cilium-config-path\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045141 kubelet[2611]: I0706 23:48:04.044995 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-cilium-ipsec-secrets\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045141 kubelet[2611]: I0706 23:48:04.045012 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-hostproc\") pod 
\"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045141 kubelet[2611]: I0706 23:48:04.045028 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-xtables-lock\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045141 kubelet[2611]: I0706 23:48:04.045044 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-host-proc-sys-kernel\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045237 kubelet[2611]: I0706 23:48:04.045087 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v49xb\" (UniqueName: \"kubernetes.io/projected/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-kube-api-access-v49xb\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.045237 kubelet[2611]: I0706 23:48:04.045141 2611 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/ead9359d-9ea8-4a05-b3a7-20e4332e20fa-bpf-maps\") pod \"cilium-4zfhx\" (UID: \"ead9359d-9ea8-4a05-b3a7-20e4332e20fa\") " pod="kube-system/cilium-4zfhx" Jul 6 23:48:04.051603 sshd[4367]: Accepted publickey for core from 10.0.0.1 port 52846 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:48:04.053294 sshd-session[4367]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:04.058523 systemd-logind[1478]: New session 24 of user core. 
Jul 6 23:48:04.070774 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 6 23:48:04.120553 sshd[4369]: Connection closed by 10.0.0.1 port 52846 Jul 6 23:48:04.121274 sshd-session[4367]: pam_unix(sshd:session): session closed for user core Jul 6 23:48:04.136785 systemd[1]: sshd@23-10.0.0.139:22-10.0.0.1:52846.service: Deactivated successfully. Jul 6 23:48:04.138658 systemd[1]: session-24.scope: Deactivated successfully. Jul 6 23:48:04.139473 systemd-logind[1478]: Session 24 logged out. Waiting for processes to exit. Jul 6 23:48:04.142498 systemd[1]: Started sshd@24-10.0.0.139:22-10.0.0.1:52848.service - OpenSSH per-connection server daemon (10.0.0.1:52848). Jul 6 23:48:04.143049 systemd-logind[1478]: Removed session 24. Jul 6 23:48:04.190221 sshd[4376]: Accepted publickey for core from 10.0.0.1 port 52848 ssh2: RSA SHA256:xPKA+TblypRwFFpP4Ulh9pljC5Xv/qD+dvpZZ1GZosc Jul 6 23:48:04.191832 sshd-session[4376]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 6 23:48:04.195750 systemd-logind[1478]: New session 25 of user core. Jul 6 23:48:04.207739 systemd[1]: Started session-25.scope - Session 25 of User core. Jul 6 23:48:04.329231 containerd[1497]: time="2025-07-06T23:48:04.329189498Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4zfhx,Uid:ead9359d-9ea8-4a05-b3a7-20e4332e20fa,Namespace:kube-system,Attempt:0,}" Jul 6 23:48:04.344160 containerd[1497]: time="2025-07-06T23:48:04.344055753Z" level=info msg="connecting to shim 89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c" address="unix:///run/containerd/s/2e5fc878d7d5dbc0c3d9c6effcde7393a4c314d3d19950d73746b5fa4457b511" namespace=k8s.io protocol=ttrpc version=3 Jul 6 23:48:04.365767 systemd[1]: Started cri-containerd-89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c.scope - libcontainer container 89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c. 
Jul 6 23:48:04.415012 containerd[1497]: time="2025-07-06T23:48:04.414966807Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-4zfhx,Uid:ead9359d-9ea8-4a05-b3a7-20e4332e20fa,Namespace:kube-system,Attempt:0,} returns sandbox id \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\"" Jul 6 23:48:04.417962 containerd[1497]: time="2025-07-06T23:48:04.417496663Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:48:04.424497 containerd[1497]: time="2025-07-06T23:48:04.424467548Z" level=info msg="Container 300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:48:04.429965 containerd[1497]: time="2025-07-06T23:48:04.429919463Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74\"" Jul 6 23:48:04.430403 containerd[1497]: time="2025-07-06T23:48:04.430376626Z" level=info msg="StartContainer for \"300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74\"" Jul 6 23:48:04.432310 containerd[1497]: time="2025-07-06T23:48:04.432270678Z" level=info msg="connecting to shim 300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74" address="unix:///run/containerd/s/2e5fc878d7d5dbc0c3d9c6effcde7393a4c314d3d19950d73746b5fa4457b511" protocol=ttrpc version=3 Jul 6 23:48:04.460763 systemd[1]: Started cri-containerd-300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74.scope - libcontainer container 300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74. 
Jul 6 23:48:04.486502 containerd[1497]: time="2025-07-06T23:48:04.486461025Z" level=info msg="StartContainer for \"300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74\" returns successfully" Jul 6 23:48:04.514709 systemd[1]: cri-containerd-300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74.scope: Deactivated successfully. Jul 6 23:48:04.515606 containerd[1497]: time="2025-07-06T23:48:04.515544851Z" level=info msg="received exit event container_id:\"300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74\" id:\"300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74\" pid:4448 exited_at:{seconds:1751845684 nanos:513749720}" Jul 6 23:48:04.515784 containerd[1497]: time="2025-07-06T23:48:04.515712572Z" level=info msg="TaskExit event in podsandbox handler container_id:\"300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74\" id:\"300e7abd26784e3f37255c463de839a21b2eeb83ec3b7170e637ae7f8d3e8c74\" pid:4448 exited_at:{seconds:1751845684 nanos:513749720}" Jul 6 23:48:04.933686 kubelet[2611]: E0706 23:48:04.933632 2611 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:48:05.076271 containerd[1497]: time="2025-07-06T23:48:05.076209467Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:48:05.082741 containerd[1497]: time="2025-07-06T23:48:05.082688828Z" level=info msg="Container bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:48:05.087546 containerd[1497]: time="2025-07-06T23:48:05.087509098Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for 
&ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551\"" Jul 6 23:48:05.087951 containerd[1497]: time="2025-07-06T23:48:05.087933780Z" level=info msg="StartContainer for \"bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551\"" Jul 6 23:48:05.089026 containerd[1497]: time="2025-07-06T23:48:05.088984987Z" level=info msg="connecting to shim bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551" address="unix:///run/containerd/s/2e5fc878d7d5dbc0c3d9c6effcde7393a4c314d3d19950d73746b5fa4457b511" protocol=ttrpc version=3 Jul 6 23:48:05.108736 systemd[1]: Started cri-containerd-bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551.scope - libcontainer container bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551. Jul 6 23:48:05.132959 containerd[1497]: time="2025-07-06T23:48:05.132925941Z" level=info msg="StartContainer for \"bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551\" returns successfully" Jul 6 23:48:05.143198 systemd[1]: cri-containerd-bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551.scope: Deactivated successfully. 
Jul 6 23:48:05.143766 containerd[1497]: time="2025-07-06T23:48:05.143735408Z" level=info msg="received exit event container_id:\"bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551\" id:\"bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551\" pid:4499 exited_at:{seconds:1751845685 nanos:143508286}" Jul 6 23:48:05.144037 containerd[1497]: time="2025-07-06T23:48:05.143738168Z" level=info msg="TaskExit event in podsandbox handler container_id:\"bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551\" id:\"bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551\" pid:4499 exited_at:{seconds:1751845685 nanos:143508286}" Jul 6 23:48:05.168028 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bc1949be15197583a92774d254b0fd746e03a0cdce628f8fb1523f9369356551-rootfs.mount: Deactivated successfully. Jul 6 23:48:06.069290 containerd[1497]: time="2025-07-06T23:48:06.069248483Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:48:06.085365 containerd[1497]: time="2025-07-06T23:48:06.085319940Z" level=info msg="Container b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:48:06.087764 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1657582553.mount: Deactivated successfully. 
Jul 6 23:48:06.102628 containerd[1497]: time="2025-07-06T23:48:06.102545684Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed\"" Jul 6 23:48:06.107608 containerd[1497]: time="2025-07-06T23:48:06.106743070Z" level=info msg="StartContainer for \"b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed\"" Jul 6 23:48:06.108137 containerd[1497]: time="2025-07-06T23:48:06.108094758Z" level=info msg="connecting to shim b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed" address="unix:///run/containerd/s/2e5fc878d7d5dbc0c3d9c6effcde7393a4c314d3d19950d73746b5fa4457b511" protocol=ttrpc version=3 Jul 6 23:48:06.132795 systemd[1]: Started cri-containerd-b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed.scope - libcontainer container b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed. Jul 6 23:48:06.166339 systemd[1]: cri-containerd-b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed.scope: Deactivated successfully. 
Jul 6 23:48:06.167072 containerd[1497]: time="2025-07-06T23:48:06.167040235Z" level=info msg="received exit event container_id:\"b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed\" id:\"b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed\" pid:4543 exited_at:{seconds:1751845686 nanos:166877234}" Jul 6 23:48:06.167328 containerd[1497]: time="2025-07-06T23:48:06.167270397Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed\" id:\"b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed\" pid:4543 exited_at:{seconds:1751845686 nanos:166877234}" Jul 6 23:48:06.169056 containerd[1497]: time="2025-07-06T23:48:06.169019567Z" level=info msg="StartContainer for \"b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed\" returns successfully" Jul 6 23:48:06.184499 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1193a3cfca5efd6cb4a1409351a1ea86dd65442ee72b650f8a41305a261bfed-rootfs.mount: Deactivated successfully. 
Jul 6 23:48:07.074384 containerd[1497]: time="2025-07-06T23:48:07.074332964Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:48:07.084509 containerd[1497]: time="2025-07-06T23:48:07.083845420Z" level=info msg="Container 56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e: CDI devices from CRI Config.CDIDevices: []" Jul 6 23:48:07.100358 containerd[1497]: time="2025-07-06T23:48:07.100320157Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e\"" Jul 6 23:48:07.101142 containerd[1497]: time="2025-07-06T23:48:07.101108961Z" level=info msg="StartContainer for \"56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e\"" Jul 6 23:48:07.102018 containerd[1497]: time="2025-07-06T23:48:07.101993767Z" level=info msg="connecting to shim 56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e" address="unix:///run/containerd/s/2e5fc878d7d5dbc0c3d9c6effcde7393a4c314d3d19950d73746b5fa4457b511" protocol=ttrpc version=3 Jul 6 23:48:07.119736 systemd[1]: Started cri-containerd-56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e.scope - libcontainer container 56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e. Jul 6 23:48:07.141779 systemd[1]: cri-containerd-56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e.scope: Deactivated successfully. 
Jul 6 23:48:07.142548 containerd[1497]: time="2025-07-06T23:48:07.142516646Z" level=info msg="TaskExit event in podsandbox handler container_id:\"56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e\" id:\"56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e\" pid:4582 exited_at:{seconds:1751845687 nanos:142309764}"
Jul 6 23:48:07.142801 containerd[1497]: time="2025-07-06T23:48:07.142756527Z" level=info msg="received exit event container_id:\"56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e\" id:\"56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e\" pid:4582 exited_at:{seconds:1751845687 nanos:142309764}"
Jul 6 23:48:07.149048 containerd[1497]: time="2025-07-06T23:48:07.148989924Z" level=info msg="StartContainer for \"56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e\" returns successfully"
Jul 6 23:48:07.160226 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-56d9e44ad576db80bb4c178fe9ab10aaf00888754666e4c383b7cc268b8f4c3e-rootfs.mount: Deactivated successfully.
Jul 6 23:48:08.082346 containerd[1497]: time="2025-07-06T23:48:08.082296014Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 6 23:48:08.095288 containerd[1497]: time="2025-07-06T23:48:08.095236249Z" level=info msg="Container 597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f: CDI devices from CRI Config.CDIDevices: []"
Jul 6 23:48:08.102116 containerd[1497]: time="2025-07-06T23:48:08.102075048Z" level=info msg="CreateContainer within sandbox \"89f4327b219fd28b554da0931636bd79710d61f932fd0451e15e3f3b33f5294c\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f\""
Jul 6 23:48:08.103746 containerd[1497]: time="2025-07-06T23:48:08.102823212Z" level=info msg="StartContainer for \"597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f\""
Jul 6 23:48:08.104015 containerd[1497]: time="2025-07-06T23:48:08.103983379Z" level=info msg="connecting to shim 597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f" address="unix:///run/containerd/s/2e5fc878d7d5dbc0c3d9c6effcde7393a4c314d3d19950d73746b5fa4457b511" protocol=ttrpc version=3
Jul 6 23:48:08.126727 systemd[1]: Started cri-containerd-597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f.scope - libcontainer container 597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f.
Jul 6 23:48:08.155897 containerd[1497]: time="2025-07-06T23:48:08.155857396Z" level=info msg="StartContainer for \"597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f\" returns successfully"
Jul 6 23:48:08.223545 containerd[1497]: time="2025-07-06T23:48:08.222118537Z" level=info msg="TaskExit event in podsandbox handler container_id:\"597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f\" id:\"efd5845322d63130cf9019f20d676fc98db2ad0f7318ac2008c933127f8ab43a\" pid:4650 exited_at:{seconds:1751845688 nanos:221819095}"
Jul 6 23:48:08.440645 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aes-ce))
Jul 6 23:48:09.122809 kubelet[2611]: I0706 23:48:09.122749 2611 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-4zfhx" podStartSLOduration=6.122732487 podStartE2EDuration="6.122732487s" podCreationTimestamp="2025-07-06 23:48:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:48:09.122214404 +0000 UTC m=+79.352940818" watchObservedRunningTime="2025-07-06 23:48:09.122732487 +0000 UTC m=+79.353458861"
Jul 6 23:48:10.589714 containerd[1497]: time="2025-07-06T23:48:10.589675394Z" level=info msg="TaskExit event in podsandbox handler container_id:\"597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f\" id:\"20d4624d1818bc14305761dc2a367d6249851974ddcbe3e2b0c462cb8d40fd93\" pid:4930 exit_status:1 exited_at:{seconds:1751845690 nanos:589393912}"
Jul 6 23:48:11.325411 systemd-networkd[1430]: lxc_health: Link UP
Jul 6 23:48:11.327158 systemd-networkd[1430]: lxc_health: Gained carrier
Jul 6 23:48:12.721960 containerd[1497]: time="2025-07-06T23:48:12.721916760Z" level=info msg="TaskExit event in podsandbox handler container_id:\"597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f\" id:\"38f5c0d359d8a8fe8cbf3b1073865dda9a2074b6f98d68ce8aa03c8d745b99b3\" pid:5185 exited_at:{seconds:1751845692 nanos:721505118}"
Jul 6 23:48:12.728968 kubelet[2611]: E0706 23:48:12.728873 2611 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:57186->127.0.0.1:41997: write tcp 127.0.0.1:57186->127.0.0.1:41997: write: broken pipe
Jul 6 23:48:13.309738 systemd-networkd[1430]: lxc_health: Gained IPv6LL
Jul 6 23:48:14.862761 containerd[1497]: time="2025-07-06T23:48:14.861561986Z" level=info msg="TaskExit event in podsandbox handler container_id:\"597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f\" id:\"fd9a938591eb6ee67a4badd22464e411db8934b049430b5464b302d4d04fd5fc\" pid:5212 exited_at:{seconds:1751845694 nanos:861133424}"
Jul 6 23:48:16.984682 containerd[1497]: time="2025-07-06T23:48:16.984639235Z" level=info msg="TaskExit event in podsandbox handler container_id:\"597be5510f3c538a08582838d6d8f6a10b16c9f6275cb161ff41c5bb8bc4486f\" id:\"002e8cc4ca30c07b75d9cdd300a7126baf3dc3ea81b856d1920eaadf37dbbdb5\" pid:5241 exited_at:{seconds:1751845696 nanos:984009992}"
Jul 6 23:48:16.988947 sshd[4382]: Connection closed by 10.0.0.1 port 52848
Jul 6 23:48:16.989065 sshd-session[4376]: pam_unix(sshd:session): session closed for user core
Jul 6 23:48:16.993190 systemd[1]: sshd@24-10.0.0.139:22-10.0.0.1:52848.service: Deactivated successfully.
Jul 6 23:48:16.995104 systemd[1]: session-25.scope: Deactivated successfully.
Jul 6 23:48:16.996030 systemd-logind[1478]: Session 25 logged out. Waiting for processes to exit.
Jul 6 23:48:16.997299 systemd-logind[1478]: Removed session 25.