Jul 9 10:03:24.915533 kernel: Linux version 6.6.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Wed Jul 9 08:34:41 -00 2025
Jul 9 10:03:24.915564 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6e1010069223556136809e96f0b3cf40412e680ac74b0bea8760b419233d2964
Jul 9 10:03:24.915575 kernel: BIOS-provided physical RAM map:
Jul 9 10:03:24.915582 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jul 9 10:03:24.915589 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jul 9 10:03:24.915596 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jul 9 10:03:24.915603 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009cfdbfff] usable
Jul 9 10:03:24.915610 kernel: BIOS-e820: [mem 0x000000009cfdc000-0x000000009cffffff] reserved
Jul 9 10:03:24.915617 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Jul 9 10:03:24.915626 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Jul 9 10:03:24.915633 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 9 10:03:24.915640 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jul 9 10:03:24.915649 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 9 10:03:24.915656 kernel: NX (Execute Disable) protection: active
Jul 9 10:03:24.915664 kernel: APIC: Static calls initialized
Jul 9 10:03:24.915676 kernel: SMBIOS 2.8 present.
Jul 9 10:03:24.915683 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.2-debian-1.16.2-1 04/01/2014
Jul 9 10:03:24.915691 kernel: Hypervisor detected: KVM
Jul 9 10:03:24.915698 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 9 10:03:24.915705 kernel: kvm-clock: using sched offset of 4736784852 cycles
Jul 9 10:03:24.915713 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 9 10:03:24.915720 kernel: tsc: Detected 2794.750 MHz processor
Jul 9 10:03:24.915728 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 9 10:03:24.915736 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 9 10:03:24.915743 kernel: last_pfn = 0x9cfdc max_arch_pfn = 0x400000000
Jul 9 10:03:24.915753 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jul 9 10:03:24.915761 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 9 10:03:24.915768 kernel: Using GB pages for direct mapping
Jul 9 10:03:24.915776 kernel: ACPI: Early table checksum verification disabled
Jul 9 10:03:24.915783 kernel: ACPI: RSDP 0x00000000000F59D0 000014 (v00 BOCHS )
Jul 9 10:03:24.915791 kernel: ACPI: RSDT 0x000000009CFE241A 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:03:24.915798 kernel: ACPI: FACP 0x000000009CFE21FA 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:03:24.915806 kernel: ACPI: DSDT 0x000000009CFE0040 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:03:24.915813 kernel: ACPI: FACS 0x000000009CFE0000 000040
Jul 9 10:03:24.915823 kernel: ACPI: APIC 0x000000009CFE22EE 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:03:24.915830 kernel: ACPI: HPET 0x000000009CFE237E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:03:24.915837 kernel: ACPI: MCFG 0x000000009CFE23B6 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:03:24.915845 kernel: ACPI: WAET 0x000000009CFE23F2 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 9 10:03:24.915852 kernel: ACPI: Reserving FACP table memory at [mem 0x9cfe21fa-0x9cfe22ed]
Jul 9 10:03:24.915860 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cfe0040-0x9cfe21f9]
Jul 9 10:03:24.915871 kernel: ACPI: Reserving FACS table memory at [mem 0x9cfe0000-0x9cfe003f]
Jul 9 10:03:24.915888 kernel: ACPI: Reserving APIC table memory at [mem 0x9cfe22ee-0x9cfe237d]
Jul 9 10:03:24.915895 kernel: ACPI: Reserving HPET table memory at [mem 0x9cfe237e-0x9cfe23b5]
Jul 9 10:03:24.915903 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cfe23b6-0x9cfe23f1]
Jul 9 10:03:24.915911 kernel: ACPI: Reserving WAET table memory at [mem 0x9cfe23f2-0x9cfe2419]
Jul 9 10:03:24.915921 kernel: No NUMA configuration found
Jul 9 10:03:24.915931 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cfdbfff]
Jul 9 10:03:24.915942 kernel: NODE_DATA(0) allocated [mem 0x9cfd6000-0x9cfdbfff]
Jul 9 10:03:24.915956 kernel: Zone ranges:
Jul 9 10:03:24.915964 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 9 10:03:24.915971 kernel: DMA32 [mem 0x0000000001000000-0x000000009cfdbfff]
Jul 9 10:03:24.915979 kernel: Normal empty
Jul 9 10:03:24.915987 kernel: Movable zone start for each node
Jul 9 10:03:24.915994 kernel: Early memory node ranges
Jul 9 10:03:24.916002 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Jul 9 10:03:24.916009 kernel: node 0: [mem 0x0000000000100000-0x000000009cfdbfff]
Jul 9 10:03:24.916017 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cfdbfff]
Jul 9 10:03:24.916027 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 9 10:03:24.916038 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 9 10:03:24.916046 kernel: On node 0, zone DMA32: 12324 pages in unavailable ranges
Jul 9 10:03:24.916053 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 9 10:03:24.916061 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 9 10:03:24.916069 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 9 10:03:24.916076 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 9 10:03:24.916084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 9 10:03:24.916092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 9 10:03:24.916099 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 9 10:03:24.916110 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 9 10:03:24.916117 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 9 10:03:24.916125 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 9 10:03:24.916132 kernel: TSC deadline timer available
Jul 9 10:03:24.916140 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Jul 9 10:03:24.916172 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 9 10:03:24.916180 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 9 10:03:24.916190 kernel: kvm-guest: setup PV sched yield
Jul 9 10:03:24.916198 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Jul 9 10:03:24.916209 kernel: Booting paravirtualized kernel on KVM
Jul 9 10:03:24.916217 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 9 10:03:24.916225 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 9 10:03:24.916233 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288
Jul 9 10:03:24.916241 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152
Jul 9 10:03:24.916248 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 9 10:03:24.916256 kernel: kvm-guest: PV spinlocks enabled
Jul 9 10:03:24.916263 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 9 10:03:24.916272 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6e1010069223556136809e96f0b3cf40412e680ac74b0bea8760b419233d2964
Jul 9 10:03:24.916283 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 9 10:03:24.916291 kernel: random: crng init done
Jul 9 10:03:24.916298 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 9 10:03:24.916306 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 9 10:03:24.916314 kernel: Fallback order for Node 0: 0
Jul 9 10:03:24.916321 kernel: Built 1 zonelists, mobility grouping on. Total pages: 632732
Jul 9 10:03:24.916329 kernel: Policy zone: DMA32
Jul 9 10:03:24.916337 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 9 10:03:24.916347 kernel: Memory: 2432544K/2571752K available (14336K kernel code, 2295K rwdata, 22872K rodata, 43492K init, 1584K bss, 138948K reserved, 0K cma-reserved)
Jul 9 10:03:24.916355 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 9 10:03:24.916363 kernel: ftrace: allocating 37940 entries in 149 pages
Jul 9 10:03:24.916370 kernel: ftrace: allocated 149 pages with 4 groups
Jul 9 10:03:24.916378 kernel: Dynamic Preempt: voluntary
Jul 9 10:03:24.916386 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 9 10:03:24.916399 kernel: rcu: RCU event tracing is enabled.
Jul 9 10:03:24.916407 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 9 10:03:24.916414 kernel: Trampoline variant of Tasks RCU enabled.
Jul 9 10:03:24.916424 kernel: Rude variant of Tasks RCU enabled.
Jul 9 10:03:24.916432 kernel: Tracing variant of Tasks RCU enabled.
Jul 9 10:03:24.916440 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 9 10:03:24.916450 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 9 10:03:24.916457 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 9 10:03:24.916465 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 9 10:03:24.916473 kernel: Console: colour VGA+ 80x25
Jul 9 10:03:24.916480 kernel: printk: console [ttyS0] enabled
Jul 9 10:03:24.916488 kernel: ACPI: Core revision 20230628
Jul 9 10:03:24.916498 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 9 10:03:24.916506 kernel: APIC: Switch to symmetric I/O mode setup
Jul 9 10:03:24.916513 kernel: x2apic enabled
Jul 9 10:03:24.916521 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 9 10:03:24.916529 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 9 10:03:24.916537 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 9 10:03:24.916545 kernel: kvm-guest: setup PV IPIs
Jul 9 10:03:24.916563 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 9 10:03:24.916571 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jul 9 10:03:24.916579 kernel: Calibrating delay loop (skipped) preset value.. 5589.50 BogoMIPS (lpj=2794750)
Jul 9 10:03:24.916587 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 9 10:03:24.916595 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 9 10:03:24.916605 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 9 10:03:24.916613 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 9 10:03:24.916621 kernel: Spectre V2 : Mitigation: Retpolines
Jul 9 10:03:24.916629 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 9 10:03:24.916637 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 9 10:03:24.916647 kernel: RETBleed: Mitigation: untrained return thunk
Jul 9 10:03:24.916658 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 9 10:03:24.916666 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 9 10:03:24.916674 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 9 10:03:24.916682 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 9 10:03:24.916691 kernel: x86/bugs: return thunk changed
Jul 9 10:03:24.916698 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 9 10:03:24.916707 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 9 10:03:24.916717 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 9 10:03:24.916725 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 9 10:03:24.916733 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 9 10:03:24.916741 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 9 10:03:24.916749 kernel: Freeing SMP alternatives memory: 32K
Jul 9 10:03:24.916757 kernel: pid_max: default: 32768 minimum: 301
Jul 9 10:03:24.916765 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jul 9 10:03:24.916773 kernel: landlock: Up and running.
Jul 9 10:03:24.916781 kernel: SELinux: Initializing.
Jul 9 10:03:24.916791 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 10:03:24.916799 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 9 10:03:24.916807 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 9 10:03:24.916815 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 10:03:24.916823 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 10:03:24.916832 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 9 10:03:24.916840 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 9 10:03:24.916850 kernel: ... version: 0
Jul 9 10:03:24.916858 kernel: ... bit width: 48
Jul 9 10:03:24.916868 kernel: ... generic registers: 6
Jul 9 10:03:24.916876 kernel: ... value mask: 0000ffffffffffff
Jul 9 10:03:24.916892 kernel: ... max period: 00007fffffffffff
Jul 9 10:03:24.916899 kernel: ... fixed-purpose events: 0
Jul 9 10:03:24.916907 kernel: ... event mask: 000000000000003f
Jul 9 10:03:24.916915 kernel: signal: max sigframe size: 1776
Jul 9 10:03:24.916924 kernel: rcu: Hierarchical SRCU implementation.
Jul 9 10:03:24.916935 kernel: rcu: Max phase no-delay instances is 400.
Jul 9 10:03:24.916945 kernel: smp: Bringing up secondary CPUs ...
Jul 9 10:03:24.916958 kernel: smpboot: x86: Booting SMP configuration:
Jul 9 10:03:24.916966 kernel: .... node #0, CPUs: #1 #2 #3
Jul 9 10:03:24.916974 kernel: smp: Brought up 1 node, 4 CPUs
Jul 9 10:03:24.916982 kernel: smpboot: Max logical packages: 1
Jul 9 10:03:24.916990 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS)
Jul 9 10:03:24.916997 kernel: devtmpfs: initialized
Jul 9 10:03:24.917005 kernel: x86/mm: Memory block size: 128MB
Jul 9 10:03:24.917013 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 9 10:03:24.917021 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 9 10:03:24.917032 kernel: pinctrl core: initialized pinctrl subsystem
Jul 9 10:03:24.917040 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 9 10:03:24.917048 kernel: audit: initializing netlink subsys (disabled)
Jul 9 10:03:24.917056 kernel: audit: type=2000 audit(1752055404.172:1): state=initialized audit_enabled=0 res=1
Jul 9 10:03:24.917063 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 9 10:03:24.917071 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 9 10:03:24.917079 kernel: cpuidle: using governor menu
Jul 9 10:03:24.917087 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 9 10:03:24.917095 kernel: dca service started, version 1.12.1
Jul 9 10:03:24.917105 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Jul 9 10:03:24.917113 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Jul 9 10:03:24.917121 kernel: PCI: Using configuration type 1 for base access
Jul 9 10:03:24.917129 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 9 10:03:24.917137 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 9 10:03:24.917241 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 9 10:03:24.917253 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 9 10:03:24.917261 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 9 10:03:24.917269 kernel: ACPI: Added _OSI(Module Device)
Jul 9 10:03:24.917280 kernel: ACPI: Added _OSI(Processor Device)
Jul 9 10:03:24.917288 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 9 10:03:24.917296 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 9 10:03:24.917304 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jul 9 10:03:24.917312 kernel: ACPI: Interpreter enabled
Jul 9 10:03:24.917320 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 9 10:03:24.917328 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 9 10:03:24.917336 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 9 10:03:24.917344 kernel: PCI: Using E820 reservations for host bridge windows
Jul 9 10:03:24.917354 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 9 10:03:24.917362 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 9 10:03:24.917581 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 9 10:03:24.917721 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 9 10:03:24.917852 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 9 10:03:24.917863 kernel: PCI host bridge to bus 0000:00
Jul 9 10:03:24.918033 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 9 10:03:24.918179 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 9 10:03:24.918301 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 9 10:03:24.918418 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window]
Jul 9 10:03:24.918537 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jul 9 10:03:24.918656 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x8ffffffff window]
Jul 9 10:03:24.918774 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 9 10:03:24.918955 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Jul 9 10:03:24.919115 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Jul 9 10:03:24.919279 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfd000000-0xfdffffff pref]
Jul 9 10:03:24.919410 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xfebd0000-0xfebd0fff]
Jul 9 10:03:24.919551 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfebc0000-0xfebcffff pref]
Jul 9 10:03:24.919684 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 9 10:03:24.919835 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Jul 9 10:03:24.919995 kernel: pci 0000:00:02.0: reg 0x10: [io 0xc0c0-0xc0df]
Jul 9 10:03:24.920130 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xfebd1000-0xfebd1fff]
Jul 9 10:03:24.920283 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfe000000-0xfe003fff 64bit pref]
Jul 9 10:03:24.920449 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Jul 9 10:03:24.920581 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc000-0xc07f]
Jul 9 10:03:24.920712 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebd2000-0xfebd2fff]
Jul 9 10:03:24.920841 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe004000-0xfe007fff 64bit pref]
Jul 9 10:03:24.921015 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 9 10:03:24.921167 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc0e0-0xc0ff]
Jul 9 10:03:24.921304 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebd3000-0xfebd3fff]
Jul 9 10:03:24.921434 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe008000-0xfe00bfff 64bit pref]
Jul 9 10:03:24.921658 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfeb80000-0xfebbffff pref]
Jul 9 10:03:24.921832 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Jul 9 10:03:24.921986 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 9 10:03:24.922140 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Jul 9 10:03:24.922292 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc100-0xc11f]
Jul 9 10:03:24.922423 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfebd4000-0xfebd4fff]
Jul 9 10:03:24.922573 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Jul 9 10:03:24.922703 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Jul 9 10:03:24.922714 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 9 10:03:24.922722 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 9 10:03:24.922735 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 9 10:03:24.922743 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 9 10:03:24.922756 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 9 10:03:24.922770 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 9 10:03:24.922783 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 9 10:03:24.922796 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 9 10:03:24.922810 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 9 10:03:24.922824 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 9 10:03:24.922838 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 9 10:03:24.922854 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 9 10:03:24.922870 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 9 10:03:24.922894 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 9 10:03:24.922907 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 9 10:03:24.922922 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 9 10:03:24.922943 kernel: iommu: Default domain type: Translated
Jul 9 10:03:24.922954 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 9 10:03:24.922963 kernel: PCI: Using ACPI for IRQ routing
Jul 9 10:03:24.922971 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 9 10:03:24.922982 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jul 9 10:03:24.922990 kernel: e820: reserve RAM buffer [mem 0x9cfdc000-0x9fffffff]
Jul 9 10:03:24.923142 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 9 10:03:24.923353 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 9 10:03:24.923483 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 9 10:03:24.923495 kernel: vgaarb: loaded
Jul 9 10:03:24.923506 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 9 10:03:24.923517 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 9 10:03:24.923536 kernel: clocksource: Switched to clocksource kvm-clock
Jul 9 10:03:24.923546 kernel: VFS: Disk quotas dquot_6.6.0
Jul 9 10:03:24.923557 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 9 10:03:24.923568 kernel: pnp: PnP ACPI init
Jul 9 10:03:24.923745 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved
Jul 9 10:03:24.923759 kernel: pnp: PnP ACPI: found 6 devices
Jul 9 10:03:24.923768 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 9 10:03:24.923776 kernel: NET: Registered PF_INET protocol family
Jul 9 10:03:24.923788 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 9 10:03:24.923796 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 9 10:03:24.923804 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 9 10:03:24.923814 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 9 10:03:24.923822 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 9 10:03:24.923832 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 9 10:03:24.923842 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 10:03:24.923852 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 9 10:03:24.923861 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 9 10:03:24.923872 kernel: NET: Registered PF_XDP protocol family
Jul 9 10:03:24.924017 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 9 10:03:24.924139 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 9 10:03:24.924278 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 9 10:03:24.924445 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window]
Jul 9 10:03:24.924588 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Jul 9 10:03:24.924712 kernel: pci_bus 0000:00: resource 9 [mem 0x100000000-0x8ffffffff window]
Jul 9 10:03:24.924723 kernel: PCI: CLS 0 bytes, default 64
Jul 9 10:03:24.924737 kernel: Initialise system trusted keyrings
Jul 9 10:03:24.924745 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 9 10:03:24.924753 kernel: Key type asymmetric registered
Jul 9 10:03:24.924761 kernel: Asymmetric key parser 'x509' registered
Jul 9 10:03:24.924769 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Jul 9 10:03:24.924777 kernel: io scheduler mq-deadline registered
Jul 9 10:03:24.924785 kernel: io scheduler kyber registered
Jul 9 10:03:24.924792 kernel: io scheduler bfq registered
Jul 9 10:03:24.924800 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 9 10:03:24.924809 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 9 10:03:24.924820 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 9 10:03:24.924827 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 9 10:03:24.924835 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 9 10:03:24.924843 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 9 10:03:24.924851 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 9 10:03:24.924859 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 9 10:03:24.924867 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 9 10:03:24.924876 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 9 10:03:24.925050 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 9 10:03:24.925228 kernel: rtc_cmos 00:04: registered as rtc0
Jul 9 10:03:24.925441 kernel: rtc_cmos 00:04: setting system clock to 2025-07-09T10:03:24 UTC (1752055404)
Jul 9 10:03:24.925617 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Jul 9 10:03:24.925631 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 9 10:03:24.925639 kernel: NET: Registered PF_INET6 protocol family
Jul 9 10:03:24.925649 kernel: Segment Routing with IPv6
Jul 9 10:03:24.925657 kernel: In-situ OAM (IOAM) with IPv6
Jul 9 10:03:24.925673 kernel: NET: Registered PF_PACKET protocol family
Jul 9 10:03:24.925684 kernel: Key type dns_resolver registered
Jul 9 10:03:24.925694 kernel: IPI shorthand broadcast: enabled
Jul 9 10:03:24.925702 kernel: sched_clock: Marking stable (706002604, 102348995)->(837169132, -28817533)
Jul 9 10:03:24.925710 kernel: registered taskstats version 1
Jul 9 10:03:24.925718 kernel: Loading compiled-in X.509 certificates
Jul 9 10:03:24.925726 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.96-flatcar: ca337f535813abed3aeaf44049b8c63030c8a35d'
Jul 9 10:03:24.925734 kernel: Key type .fscrypt registered
Jul 9 10:03:24.925741 kernel: Key type fscrypt-provisioning registered
Jul 9 10:03:24.925753 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 9 10:03:24.925761 kernel: ima: Allocated hash algorithm: sha1
Jul 9 10:03:24.925769 kernel: ima: No architecture policies found
Jul 9 10:03:24.925777 kernel: clk: Disabling unused clocks
Jul 9 10:03:24.925785 kernel: Freeing unused kernel image (initmem) memory: 43492K
Jul 9 10:03:24.925793 kernel: Write protecting the kernel read-only data: 38912k
Jul 9 10:03:24.925801 kernel: Freeing unused kernel image (rodata/data gap) memory: 1704K
Jul 9 10:03:24.925809 kernel: Run /init as init process
Jul 9 10:03:24.925817 kernel: with arguments:
Jul 9 10:03:24.925827 kernel: /init
Jul 9 10:03:24.925834 kernel: with environment:
Jul 9 10:03:24.925842 kernel: HOME=/
Jul 9 10:03:24.925850 kernel: TERM=linux
Jul 9 10:03:24.925858 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 9 10:03:24.925866 systemd[1]: Successfully made /usr/ read-only.
Jul 9 10:03:24.925888 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 10:03:24.925898 systemd[1]: Detected virtualization kvm.
Jul 9 10:03:24.925909 systemd[1]: Detected architecture x86-64.
Jul 9 10:03:24.925917 systemd[1]: Running in initrd.
Jul 9 10:03:24.925927 systemd[1]: No hostname configured, using default hostname.
Jul 9 10:03:24.925939 systemd[1]: Hostname set to .
Jul 9 10:03:24.925950 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 10:03:24.925960 systemd[1]: Queued start job for default target initrd.target.
Jul 9 10:03:24.925969 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 10:03:24.925978 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 10:03:24.925990 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 9 10:03:24.926010 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 10:03:24.926021 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 9 10:03:24.926031 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 9 10:03:24.926041 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 9 10:03:24.926052 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 9 10:03:24.926061 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 10:03:24.926070 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 10:03:24.926079 systemd[1]: Reached target paths.target - Path Units.
Jul 9 10:03:24.926088 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 10:03:24.926096 systemd[1]: Reached target swap.target - Swaps.
Jul 9 10:03:24.926105 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 10:03:24.926114 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 10:03:24.926125 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 10:03:24.926134 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 9 10:03:24.926142 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 9 10:03:24.926170 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 10:03:24.926179 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 10:03:24.926187 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 10:03:24.926196 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 10:03:24.926205 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 9 10:03:24.926214 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 10:03:24.926226 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 9 10:03:24.926234 systemd[1]: Starting systemd-fsck-usr.service...
Jul 9 10:03:24.926243 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 10:03:24.926252 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 10:03:24.926261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 10:03:24.926269 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 9 10:03:24.926278 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 10:03:24.926290 systemd[1]: Finished systemd-fsck-usr.service.
Jul 9 10:03:24.926323 systemd-journald[194]: Collecting audit messages is disabled.
Jul 9 10:03:24.926346 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 10:03:24.926356 systemd-journald[194]: Journal started
Jul 9 10:03:24.926383 systemd-journald[194]: Runtime Journal (/run/log/journal/c838fa3c283e420c86ad9f08ddafcb51) is 6M, max 48.4M, 42.3M free.
Jul 9 10:03:24.923273 systemd-modules-load[195]: Inserted module 'overlay'
Jul 9 10:03:24.958794 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 10:03:24.958825 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 9 10:03:24.958838 kernel: Bridge firewalling registered
Jul 9 10:03:24.950612 systemd-modules-load[195]: Inserted module 'br_netfilter'
Jul 9 10:03:24.957792 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 10:03:24.960011 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 10:03:24.961821 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 10:03:24.972356 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 10:03:24.973484 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 10:03:24.975867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 10:03:24.978627 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 10:03:24.989487 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 10:03:24.990832 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 10:03:24.997665 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 10:03:25.000551 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 10:03:25.008942 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 10:03:25.017334 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 9 10:03:25.033652 dracut-cmdline[232]: dracut-dracut-053
Jul 9 10:03:25.037033 dracut-cmdline[232]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=6e1010069223556136809e96f0b3cf40412e680ac74b0bea8760b419233d2964
Jul 9 10:03:25.049351 systemd-resolved[225]: Positive Trust Anchors:
Jul 9 10:03:25.049368 systemd-resolved[225]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 10:03:25.049401 systemd-resolved[225]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 10:03:25.052278 systemd-resolved[225]: Defaulting to hostname 'linux'.
Jul 9 10:03:25.053589 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 10:03:25.061515 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 10:03:25.142199 kernel: SCSI subsystem initialized
Jul 9 10:03:25.151175 kernel: Loading iSCSI transport class v2.0-870.
Jul 9 10:03:25.164184 kernel: iscsi: registered transport (tcp)
Jul 9 10:03:25.191213 kernel: iscsi: registered transport (qla4xxx)
Jul 9 10:03:25.191275 kernel: QLogic iSCSI HBA Driver
Jul 9 10:03:25.264232 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 9 10:03:25.283330 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 9 10:03:25.311648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 9 10:03:25.311712 kernel: device-mapper: uevent: version 1.0.3
Jul 9 10:03:25.312649 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 9 10:03:25.357191 kernel: raid6: avx2x4 gen() 28885 MB/s
Jul 9 10:03:25.374202 kernel: raid6: avx2x2 gen() 27420 MB/s
Jul 9 10:03:25.391266 kernel: raid6: avx2x1 gen() 24604 MB/s
Jul 9 10:03:25.391321 kernel: raid6: using algorithm avx2x4 gen() 28885 MB/s
Jul 9 10:03:25.409240 kernel: raid6: .... xor() 6966 MB/s, rmw enabled
Jul 9 10:03:25.409298 kernel: raid6: using avx2x2 recovery algorithm
Jul 9 10:03:25.431384 kernel: xor: automatically using best checksumming function avx
Jul 9 10:03:25.584217 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 9 10:03:25.599730 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 10:03:25.609293 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 10:03:25.624715 systemd-udevd[414]: Using default interface naming scheme 'v255'.
Jul 9 10:03:25.630435 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 10:03:25.643351 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 9 10:03:25.662623 dracut-pre-trigger[425]: rd.md=0: removing MD RAID activation
Jul 9 10:03:25.700367 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 10:03:25.712376 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 10:03:25.780869 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 10:03:25.789309 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 9 10:03:25.802555 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 9 10:03:25.805557 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 10:03:25.808120 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 10:03:25.810458 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 10:03:25.822177 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 9 10:03:25.819368 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 9 10:03:25.825674 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 9 10:03:25.832771 kernel: cryptd: max_cpu_qlen set to 1000
Jul 9 10:03:25.833461 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 10:03:25.845223 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 9 10:03:25.845257 kernel: GPT:9289727 != 19775487
Jul 9 10:03:25.845269 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 9 10:03:25.845280 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 9 10:03:25.845298 kernel: GPT:9289727 != 19775487
Jul 9 10:03:25.847417 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 9 10:03:25.847456 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 10:03:25.851176 kernel: AES CTR mode by8 optimization enabled
Jul 9 10:03:25.851214 kernel: libata version 3.00 loaded.
Jul 9 10:03:25.859315 kernel: ahci 0000:00:1f.2: version 3.0
Jul 9 10:03:25.859572 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 9 10:03:25.862758 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 9 10:03:25.863047 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 9 10:03:25.866324 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 10:03:25.870569 kernel: scsi host0: ahci
Jul 9 10:03:25.870795 kernel: scsi host1: ahci
Jul 9 10:03:25.866499 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 10:03:25.868885 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 10:03:25.879139 kernel: scsi host2: ahci
Jul 9 10:03:25.879397 kernel: scsi host3: ahci
Jul 9 10:03:25.879557 kernel: scsi host4: ahci
Jul 9 10:03:25.879725 kernel: scsi host5: ahci
Jul 9 10:03:25.869997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 10:03:25.886307 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4100 irq 34
Jul 9 10:03:25.886328 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4180 irq 34
Jul 9 10:03:25.886342 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4200 irq 34
Jul 9 10:03:25.886356 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4280 irq 34
Jul 9 10:03:25.886369 kernel: BTRFS: device fsid a0fcba15-d6b5-4866-9f07-4bde3c5c2769 devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (459)
Jul 9 10:03:25.886391 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4300 irq 34
Jul 9 10:03:25.886405 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd4000 port 0xfebd4380 irq 34
Jul 9 10:03:25.870461 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 10:03:25.874096 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 10:03:25.886658 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 10:03:25.950200 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by (udev-worker) (477)
Jul 9 10:03:25.984958 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 9 10:03:26.001746 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 9 10:03:26.004171 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 9 10:03:26.004647 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 10:03:26.016999 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 10:03:26.026064 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 9 10:03:26.040351 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 9 10:03:26.041632 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 9 10:03:26.054340 disk-uuid[558]: Primary Header is updated.
Jul 9 10:03:26.054340 disk-uuid[558]: Secondary Entries is updated.
Jul 9 10:03:26.054340 disk-uuid[558]: Secondary Header is updated.
Jul 9 10:03:26.059173 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 10:03:26.060103 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 10:03:26.198256 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 9 10:03:26.198306 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 9 10:03:26.199176 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 9 10:03:26.200187 kernel: ata3.00: applying bridge limits
Jul 9 10:03:26.201179 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 9 10:03:26.201194 kernel: ata3.00: configured for UDMA/100
Jul 9 10:03:26.248542 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 9 10:03:26.248618 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 9 10:03:26.254172 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 9 10:03:26.254193 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 9 10:03:26.301700 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 9 10:03:26.302009 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 9 10:03:26.316405 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 9 10:03:27.090934 disk-uuid[566]: The operation has completed successfully.
Jul 9 10:03:27.092456 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 9 10:03:27.124114 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 9 10:03:27.124258 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 9 10:03:27.170507 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 9 10:03:27.175874 sh[595]: Success
Jul 9 10:03:27.189205 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 9 10:03:27.225397 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 9 10:03:27.239784 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 9 10:03:27.243787 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 9 10:03:27.254938 kernel: BTRFS info (device dm-0): first mount of filesystem a0fcba15-d6b5-4866-9f07-4bde3c5c2769
Jul 9 10:03:27.254967 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 9 10:03:27.254978 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 9 10:03:27.255911 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 9 10:03:27.256605 kernel: BTRFS info (device dm-0): using free space tree
Jul 9 10:03:27.261686 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 9 10:03:27.263894 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 9 10:03:27.276288 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 9 10:03:27.279077 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 9 10:03:27.294191 kernel: BTRFS info (device vda6): first mount of filesystem e8fbe8ac-7a4a-4b49-99f6-05f536d09187
Jul 9 10:03:27.294229 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 10:03:27.295587 kernel: BTRFS info (device vda6): using free space tree
Jul 9 10:03:27.299171 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 9 10:03:27.305190 kernel: BTRFS info (device vda6): last unmount of filesystem e8fbe8ac-7a4a-4b49-99f6-05f536d09187
Jul 9 10:03:27.394823 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 10:03:27.408441 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 10:03:27.435371 systemd-networkd[771]: lo: Link UP
Jul 9 10:03:27.435383 systemd-networkd[771]: lo: Gained carrier
Jul 9 10:03:27.437223 systemd-networkd[771]: Enumeration completed
Jul 9 10:03:27.437408 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 10:03:27.437611 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 10:03:27.437615 systemd-networkd[771]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 10:03:27.439245 systemd-networkd[771]: eth0: Link UP
Jul 9 10:03:27.439249 systemd-networkd[771]: eth0: Gained carrier
Jul 9 10:03:27.439256 systemd-networkd[771]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 10:03:27.439654 systemd[1]: Reached target network.target - Network.
Jul 9 10:03:27.462199 systemd-networkd[771]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 9 10:03:27.471968 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 9 10:03:27.482336 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 9 10:03:27.630387 ignition[776]: Ignition 2.20.0
Jul 9 10:03:27.630404 ignition[776]: Stage: fetch-offline
Jul 9 10:03:27.630476 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Jul 9 10:03:27.630488 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:03:27.631478 ignition[776]: parsed url from cmdline: ""
Jul 9 10:03:27.631483 ignition[776]: no config URL provided
Jul 9 10:03:27.631488 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Jul 9 10:03:27.631499 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Jul 9 10:03:27.631529 ignition[776]: op(1): [started] loading QEMU firmware config module
Jul 9 10:03:27.631534 ignition[776]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 9 10:03:27.649307 ignition[776]: op(1): [finished] loading QEMU firmware config module
Jul 9 10:03:27.686679 ignition[776]: parsing config with SHA512: 6cb8b0c2dd15bc0cc6d0cdc0208f76c7a9684b9c4e73b9aa574612a27804be9ce8ff298a8dd74e24fa69bd523f051bdbd8aef894b8b87a8621984c63ba4f84f7
Jul 9 10:03:27.692700 unknown[776]: fetched base config from "system"
Jul 9 10:03:27.692715 unknown[776]: fetched user config from "qemu"
Jul 9 10:03:27.697654 ignition[776]: fetch-offline: fetch-offline passed
Jul 9 10:03:27.697816 ignition[776]: Ignition finished successfully
Jul 9 10:03:27.703844 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 10:03:27.705312 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 9 10:03:27.721294 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 9 10:03:27.739455 ignition[786]: Ignition 2.20.0
Jul 9 10:03:27.739466 ignition[786]: Stage: kargs
Jul 9 10:03:27.739618 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Jul 9 10:03:27.739630 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:03:27.740499 ignition[786]: kargs: kargs passed
Jul 9 10:03:27.740541 ignition[786]: Ignition finished successfully
Jul 9 10:03:27.744240 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 9 10:03:27.757295 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 9 10:03:27.770798 ignition[795]: Ignition 2.20.0
Jul 9 10:03:27.770816 ignition[795]: Stage: disks
Jul 9 10:03:27.771011 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Jul 9 10:03:27.771023 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:03:27.771991 ignition[795]: disks: disks passed
Jul 9 10:03:27.774544 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 9 10:03:27.772033 ignition[795]: Ignition finished successfully
Jul 9 10:03:27.775834 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 9 10:03:27.777667 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 9 10:03:27.779834 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 10:03:27.781836 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 10:03:27.782843 systemd[1]: Reached target basic.target - Basic System.
Jul 9 10:03:27.792393 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 9 10:03:27.806272 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 9 10:03:27.812641 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 9 10:03:27.824254 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 9 10:03:27.913172 kernel: EXT4-fs (vda9): mounted filesystem 29e28e54-dd5e-4a04-9ac3-da19004f73eb r/w with ordered data mode. Quota mode: none.
Jul 9 10:03:27.913714 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 9 10:03:27.914611 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 9 10:03:27.922256 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 10:03:27.924049 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 9 10:03:27.924706 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 9 10:03:27.924750 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 9 10:03:27.924773 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 10:03:27.933654 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by mount (814)
Jul 9 10:03:27.935630 kernel: BTRFS info (device vda6): first mount of filesystem e8fbe8ac-7a4a-4b49-99f6-05f536d09187
Jul 9 10:03:27.935655 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 10:03:27.935666 kernel: BTRFS info (device vda6): using free space tree
Jul 9 10:03:27.939172 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 9 10:03:27.949264 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 10:03:27.955285 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 9 10:03:27.956509 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 9 10:03:27.994253 initrd-setup-root[838]: cut: /sysroot/etc/passwd: No such file or directory
Jul 9 10:03:27.999989 initrd-setup-root[845]: cut: /sysroot/etc/group: No such file or directory
Jul 9 10:03:28.004099 initrd-setup-root[852]: cut: /sysroot/etc/shadow: No such file or directory
Jul 9 10:03:28.009012 initrd-setup-root[859]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 9 10:03:28.108628 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 9 10:03:28.120240 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 9 10:03:28.122784 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 9 10:03:28.131176 kernel: BTRFS info (device vda6): last unmount of filesystem e8fbe8ac-7a4a-4b49-99f6-05f536d09187
Jul 9 10:03:28.153016 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 9 10:03:28.156440 ignition[927]: INFO : Ignition 2.20.0
Jul 9 10:03:28.156440 ignition[927]: INFO : Stage: mount
Jul 9 10:03:28.158032 ignition[927]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 10:03:28.158032 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:03:28.158032 ignition[927]: INFO : mount: mount passed
Jul 9 10:03:28.158032 ignition[927]: INFO : Ignition finished successfully
Jul 9 10:03:28.160047 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 9 10:03:28.165323 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 9 10:03:28.254325 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 9 10:03:28.268311 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 9 10:03:28.276178 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (942)
Jul 9 10:03:28.276207 kernel: BTRFS info (device vda6): first mount of filesystem e8fbe8ac-7a4a-4b49-99f6-05f536d09187
Jul 9 10:03:28.278278 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 9 10:03:28.278292 kernel: BTRFS info (device vda6): using free space tree
Jul 9 10:03:28.282177 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 9 10:03:28.283342 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 9 10:03:28.312823 ignition[959]: INFO : Ignition 2.20.0
Jul 9 10:03:28.312823 ignition[959]: INFO : Stage: files
Jul 9 10:03:28.314692 ignition[959]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 10:03:28.314692 ignition[959]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:03:28.314692 ignition[959]: DEBUG : files: compiled without relabeling support, skipping
Jul 9 10:03:28.318201 ignition[959]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 9 10:03:28.318201 ignition[959]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 9 10:03:28.321042 ignition[959]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 9 10:03:28.321042 ignition[959]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 9 10:03:28.321042 ignition[959]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 9 10:03:28.320449 unknown[959]: wrote ssh authorized keys file for user: core
Jul 9 10:03:28.326027 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 9 10:03:28.326027 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1
Jul 9 10:03:28.371795 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jul 9 10:03:28.478438 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz"
Jul 9 10:03:28.478438 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 10:03:28.482289 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Jul 9 10:03:28.984208 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jul 9 10:03:29.230624 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Jul 9 10:03:29.230624 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 10:03:29.234423 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1
Jul 9 10:03:29.383449 systemd-networkd[771]: eth0: Gained IPv6LL
Jul 9 10:03:29.776717 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jul 9 10:03:30.387596 ignition[959]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw"
Jul 9 10:03:30.387596 ignition[959]: INFO : files: op(c): [started] processing unit "prepare-helm.service"
Jul 9 10:03:30.391035 ignition[959]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 10:03:30.391035 ignition[959]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jul 9 10:03:30.391035 ignition[959]: INFO : files: op(c): [finished] processing unit "prepare-helm.service"
Jul 9 10:03:30.391035 ignition[959]: INFO : files: op(e): [started] processing unit "coreos-metadata.service"
Jul 9 10:03:30.391035 ignition[959]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 10:03:30.391035 ignition[959]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 9 10:03:30.391035 ignition[959]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service"
Jul 9 10:03:30.391035 ignition[959]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service"
Jul 9 10:03:30.413423 ignition[959]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 10:03:30.428429 ignition[959]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 9 10:03:30.430045 ignition[959]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 9 10:03:30.430045 ignition[959]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service"
Jul 9 10:03:30.432674 ignition[959]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service"
Jul 9 10:03:30.434154 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 10:03:30.435954 ignition[959]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 9 10:03:30.437599 ignition[959]: INFO : files: files passed
Jul 9 10:03:30.438298 ignition[959]: INFO : Ignition finished successfully
Jul 9 10:03:30.442214 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 9 10:03:30.456297 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 9 10:03:30.459324 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 9 10:03:30.462874 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 9 10:03:30.463907 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 9 10:03:30.469889 initrd-setup-root-after-ignition[988]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 9 10:03:30.473738 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 10:03:30.473738 initrd-setup-root-after-ignition[990]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 10:03:30.476705 initrd-setup-root-after-ignition[994]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 9 10:03:30.480566 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 10:03:30.482022 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 9 10:03:30.495296 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 9 10:03:30.520581 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 9 10:03:30.520698 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 9 10:03:30.523084 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 9 10:03:30.525005 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 9 10:03:30.527019 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 9 10:03:30.527890 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 9 10:03:30.547618 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 10:03:30.555327 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 9 10:03:30.564475 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 9 10:03:30.566829 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 10:03:30.568116 systemd[1]: Stopped target timers.target - Timer Units.
Jul 9 10:03:30.571641 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 9 10:03:30.572654 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 9 10:03:30.575139 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 9 10:03:30.577182 systemd[1]: Stopped target basic.target - Basic System.
Jul 9 10:03:30.579080 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 9 10:03:30.581219 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 9 10:03:30.583466 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 9 10:03:30.585562 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 9 10:03:30.587562 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 9 10:03:30.589965 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 9 10:03:30.591956 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 9 10:03:30.594063 systemd[1]: Stopped target swap.target - Swaps.
Jul 9 10:03:30.595817 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 9 10:03:30.596914 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 9 10:03:30.599180 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 9 10:03:30.601309 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 10:03:30.603672 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 9 10:03:30.604697 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 10:03:30.607268 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 9 10:03:30.608316 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 9 10:03:30.610659 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 9 10:03:30.611748 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 9 10:03:30.614104 systemd[1]: Stopped target paths.target - Path Units.
Jul 9 10:03:30.615868 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 9 10:03:30.620206 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 10:03:30.622938 systemd[1]: Stopped target slices.target - Slice Units.
Jul 9 10:03:30.624794 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 9 10:03:30.626700 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 9 10:03:30.627650 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 9 10:03:30.629543 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 9 10:03:30.630417 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 9 10:03:30.632417 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 9 10:03:30.633553 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 9 10:03:30.635989 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 9 10:03:30.636962 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 9 10:03:30.650303 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 9 10:03:30.652879 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 9 10:03:30.654916 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 9 10:03:30.656166 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 10:03:30.659005 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 9 10:03:30.660270 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 9 10:03:30.665485 ignition[1014]: INFO : Ignition 2.20.0
Jul 9 10:03:30.665485 ignition[1014]: INFO : Stage: umount
Jul 9 10:03:30.665485 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 9 10:03:30.665485 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 9 10:03:30.665485 ignition[1014]: INFO : umount: umount passed
Jul 9 10:03:30.665485 ignition[1014]: INFO : Ignition finished successfully
Jul 9 10:03:30.672779 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 9 10:03:30.673822 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 9 10:03:30.676103 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 9 10:03:30.677140 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 9 10:03:30.682831 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 9 10:03:30.684289 systemd[1]: Stopped target network.target - Network.
Jul 9 10:03:30.686170 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 9 10:03:30.687065 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 9 10:03:30.688935 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 9 10:03:30.689802 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 9 10:03:30.692030 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 9 10:03:30.692100 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 9 10:03:30.694992 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 9 10:03:30.695062 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 9 10:03:30.698751 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 9 10:03:30.701397 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 9 10:03:30.708620 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 9 10:03:30.709754 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 9 10:03:30.714109 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jul 9 10:03:30.715938 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 9 10:03:30.716010 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 10:03:30.720882 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jul 9 10:03:30.722597 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 9 10:03:30.723757 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 9 10:03:30.727140 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jul 9 10:03:30.729138 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 9 10:03:30.730199 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 10:03:30.744250 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 9 10:03:30.744483 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 9 10:03:30.744540 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 9 10:03:30.746387 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 9 10:03:30.746437 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 9 10:03:30.750682 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 9 10:03:30.750740 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 9 10:03:30.752816 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 10:03:30.754405 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jul 9 10:03:30.762185 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 9 10:03:30.762340 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 9 10:03:30.777865 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 9 10:03:30.778095 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 10:03:30.780485 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 9 10:03:30.780548 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 9 10:03:30.782672 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 9 10:03:30.782741 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 10:03:30.784794 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 9 10:03:30.784864 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 9 10:03:30.787258 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 9 10:03:30.787322 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 9 10:03:30.789182 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 9 10:03:30.789249 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 9 10:03:30.811293 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 9 10:03:30.812427 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 9 10:03:30.812503 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 10:03:30.815031 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 9 10:03:30.815097 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 10:03:30.817351 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 9 10:03:30.817416 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 10:03:30.819997 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 9 10:03:30.820062 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 10:03:30.822959 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 9 10:03:30.823091 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 9 10:03:30.921484 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 9 10:03:30.921626 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 9 10:03:30.924058 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 9 10:03:30.925294 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 9 10:03:30.925362 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 9 10:03:30.935325 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 9 10:03:30.944980 systemd[1]: Switching root.
Jul 9 10:03:30.978659 systemd-journald[194]: Journal stopped
Jul 9 10:03:32.580135 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Jul 9 10:03:32.580274 kernel: SELinux: policy capability network_peer_controls=1
Jul 9 10:03:32.580299 kernel: SELinux: policy capability open_perms=1
Jul 9 10:03:32.580311 kernel: SELinux: policy capability extended_socket_class=1
Jul 9 10:03:32.580323 kernel: SELinux: policy capability always_check_network=0
Jul 9 10:03:32.580334 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 9 10:03:32.580354 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 9 10:03:32.580366 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 9 10:03:32.580378 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 9 10:03:32.580394 kernel: audit: type=1403 audit(1752055411.726:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 9 10:03:32.580407 systemd[1]: Successfully loaded SELinux policy in 42.998ms.
Jul 9 10:03:32.580432 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 13.996ms.
Jul 9 10:03:32.580446 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 9 10:03:32.580459 systemd[1]: Detected virtualization kvm.
Jul 9 10:03:32.580473 systemd[1]: Detected architecture x86-64.
Jul 9 10:03:32.580491 systemd[1]: Detected first boot.
Jul 9 10:03:32.580504 systemd[1]: Initializing machine ID from VM UUID.
Jul 9 10:03:32.580517 zram_generator::config[1059]: No configuration found.
Jul 9 10:03:32.580531 kernel: Guest personality initialized and is inactive
Jul 9 10:03:32.580544 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Jul 9 10:03:32.580557 kernel: Initialized host personality
Jul 9 10:03:32.580568 kernel: NET: Registered PF_VSOCK protocol family
Jul 9 10:03:32.580581 systemd[1]: Populated /etc with preset unit settings.
Jul 9 10:03:32.580600 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jul 9 10:03:32.580618 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 9 10:03:32.580630 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 9 10:03:32.580642 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 9 10:03:32.580655 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 9 10:03:32.580668 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 9 10:03:32.580688 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 9 10:03:32.580706 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 9 10:03:32.580719 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 9 10:03:32.580736 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 9 10:03:32.580749 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 9 10:03:32.580762 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 9 10:03:32.580775 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 9 10:03:32.580788 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 9 10:03:32.580800 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 9 10:03:32.580813 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 9 10:03:32.580826 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 9 10:03:32.580845 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 9 10:03:32.580858 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 9 10:03:32.580870 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 9 10:03:32.580882 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 9 10:03:32.580895 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 9 10:03:32.580907 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 9 10:03:32.580920 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 9 10:03:32.580933 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 9 10:03:32.580950 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 9 10:03:32.580963 systemd[1]: Reached target slices.target - Slice Units.
Jul 9 10:03:32.580975 systemd[1]: Reached target swap.target - Swaps.
Jul 9 10:03:32.580987 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 9 10:03:32.580999 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 9 10:03:32.581012 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jul 9 10:03:32.581024 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 9 10:03:32.581036 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 9 10:03:32.581049 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 9 10:03:32.581061 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 9 10:03:32.581079 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 9 10:03:32.581091 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 9 10:03:32.581103 systemd[1]: Mounting media.mount - External Media Directory...
Jul 9 10:03:32.581116 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 10:03:32.581128 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 9 10:03:32.581185 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 9 10:03:32.581210 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 9 10:03:32.581223 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 9 10:03:32.581243 systemd[1]: Reached target machines.target - Containers.
Jul 9 10:03:32.581256 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 9 10:03:32.581268 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 10:03:32.581281 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 9 10:03:32.581293 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 9 10:03:32.581306 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 10:03:32.581318 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 10:03:32.581331 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 10:03:32.581348 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 9 10:03:32.581361 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 10:03:32.581374 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 9 10:03:32.581386 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 9 10:03:32.581398 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 9 10:03:32.581411 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 9 10:03:32.581423 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 9 10:03:32.581436 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 10:03:32.581448 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 9 10:03:32.581465 kernel: fuse: init (API version 7.39)
Jul 9 10:03:32.581477 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 9 10:03:32.581490 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 9 10:03:32.581502 kernel: loop: module loaded
Jul 9 10:03:32.581514 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 9 10:03:32.581527 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jul 9 10:03:32.581540 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 9 10:03:32.581552 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 9 10:03:32.581565 systemd[1]: Stopped verity-setup.service.
Jul 9 10:03:32.581584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 10:03:32.581596 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 9 10:03:32.581608 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 9 10:03:32.581621 systemd[1]: Mounted media.mount - External Media Directory.
Jul 9 10:03:32.581639 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 9 10:03:32.581652 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 9 10:03:32.581665 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 9 10:03:32.581685 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 9 10:03:32.581698 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 9 10:03:32.581710 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 9 10:03:32.581722 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 9 10:03:32.581740 kernel: ACPI: bus type drm_connector registered
Jul 9 10:03:32.581752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 10:03:32.581765 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 10:03:32.581797 systemd-journald[1130]: Collecting audit messages is disabled.
Jul 9 10:03:32.581821 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 10:03:32.581833 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 10:03:32.581852 systemd-journald[1130]: Journal started
Jul 9 10:03:32.581874 systemd-journald[1130]: Runtime Journal (/run/log/journal/c838fa3c283e420c86ad9f08ddafcb51) is 6M, max 48.4M, 42.3M free.
Jul 9 10:03:32.317922 systemd[1]: Queued start job for default target multi-user.target.
Jul 9 10:03:32.332350 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 9 10:03:32.332846 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 9 10:03:32.584306 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 9 10:03:32.586194 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 10:03:32.586445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 10:03:32.587974 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 9 10:03:32.588232 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 9 10:03:32.589609 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 10:03:32.589832 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 10:03:32.591282 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 9 10:03:32.592681 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 9 10:03:32.594304 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 9 10:03:32.595828 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jul 9 10:03:32.610208 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 9 10:03:32.615254 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 9 10:03:32.617483 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 9 10:03:32.618635 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 9 10:03:32.618667 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 9 10:03:32.620731 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jul 9 10:03:32.623108 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 9 10:03:32.625947 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 9 10:03:32.627101 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 10:03:32.629413 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 9 10:03:32.631665 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 9 10:03:32.631962 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 10:03:32.635510 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 9 10:03:32.637014 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 10:03:32.642324 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 9 10:03:32.645379 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 9 10:03:32.656401 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 9 10:03:32.662685 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 9 10:03:32.664601 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 9 10:03:32.668732 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 9 10:03:32.671901 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 9 10:03:32.681513 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 9 10:03:32.701438 systemd-journald[1130]: Time spent on flushing to /var/log/journal/c838fa3c283e420c86ad9f08ddafcb51 is 19.564ms for 971 entries.
Jul 9 10:03:32.701438 systemd-journald[1130]: System Journal (/var/log/journal/c838fa3c283e420c86ad9f08ddafcb51) is 8M, max 195.6M, 187.6M free.
Jul 9 10:03:32.737302 systemd-journald[1130]: Received client request to flush runtime journal.
Jul 9 10:03:32.737363 kernel: loop0: detected capacity change from 0 to 224512
Jul 9 10:03:32.711356 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jul 9 10:03:32.724864 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 9 10:03:32.738636 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 9 10:03:32.740898 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 9 10:03:32.742608 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 9 10:03:32.742713 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jul 9 10:03:32.742727 systemd-tmpfiles[1180]: ACLs are not supported, ignoring.
Jul 9 10:03:32.749304 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 9 10:03:32.751678 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 9 10:03:32.762387 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 9 10:03:32.764098 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jul 9 10:03:32.770044 udevadm[1190]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 9 10:03:32.814653 kernel: loop1: detected capacity change from 0 to 138176
Jul 9 10:03:32.832137 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 9 10:03:32.847941 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 9 10:03:32.868197 kernel: loop2: detected capacity change from 0 to 147912
Jul 9 10:03:32.951302 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Jul 9 10:03:32.951322 systemd-tmpfiles[1203]: ACLs are not supported, ignoring.
Jul 9 10:03:32.957760 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 9 10:03:32.967182 kernel: loop3: detected capacity change from 0 to 224512
Jul 9 10:03:32.978429 kernel: loop4: detected capacity change from 0 to 138176
Jul 9 10:03:32.994193 kernel: loop5: detected capacity change from 0 to 147912
Jul 9 10:03:33.007632 (sd-merge)[1207]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 9 10:03:33.008349 (sd-merge)[1207]: Merged extensions into '/usr'.
Jul 9 10:03:33.086682 systemd[1]: Reload requested from client PID 1179 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 9 10:03:33.086702 systemd[1]: Reloading...
Jul 9 10:03:33.151897 zram_generator::config[1231]: No configuration found.
Jul 9 10:03:33.260948 ldconfig[1174]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 9 10:03:33.347064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 10:03:33.413621 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 9 10:03:33.414298 systemd[1]: Reloading finished in 327 ms.
Jul 9 10:03:33.433753 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 9 10:03:33.435367 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 9 10:03:33.453998 systemd[1]: Starting ensure-sysext.service...
Jul 9 10:03:33.456182 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 9 10:03:33.536077 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 9 10:03:33.536437 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 9 10:03:33.537563 systemd-tmpfiles[1273]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 9 10:03:33.537909 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Jul 9 10:03:33.538007 systemd-tmpfiles[1273]: ACLs are not supported, ignoring.
Jul 9 10:03:33.542779 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 10:03:33.542796 systemd-tmpfiles[1273]: Skipping /boot
Jul 9 10:03:33.543685 systemd[1]: Reload requested from client PID 1272 ('systemctl') (unit ensure-sysext.service)...
Jul 9 10:03:33.543701 systemd[1]: Reloading...
Jul 9 10:03:33.559545 systemd-tmpfiles[1273]: Detected autofs mount point /boot during canonicalization of boot.
Jul 9 10:03:33.559559 systemd-tmpfiles[1273]: Skipping /boot
Jul 9 10:03:33.606174 zram_generator::config[1305]: No configuration found.
Jul 9 10:03:33.742684 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 9 10:03:33.809421 systemd[1]: Reloading finished in 265 ms.
Jul 9 10:03:33.820911 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 9 10:03:33.843489 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 9 10:03:33.864624 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jul 9 10:03:33.867854 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 9 10:03:33.919339 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 9 10:03:33.923267 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 9 10:03:33.928930 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 9 10:03:33.934092 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 9 10:03:33.938661 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 10:03:33.938845 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 10:03:33.942216 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 10:03:33.945348 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 10:03:33.948760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 10:03:33.949485 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 10:03:33.950163 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 10:03:33.952360 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 9 10:03:33.954208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 10:03:33.960056 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 10:03:33.960581 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 10:03:33.960851 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 10:03:33.960986 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 10:03:33.961126 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 10:03:33.966575 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 10:03:33.966883 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 10:03:33.969562 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 9 10:03:33.974772 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 9 10:03:33.977979 systemd-udevd[1346]: Using default interface naming scheme 'v255'.
Jul 9 10:03:33.978589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 10:03:33.978834 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 10:03:33.980626 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 10:03:33.980888 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 10:03:33.989908 augenrules[1373]: No rules
Jul 9 10:03:33.990072 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 10:03:33.990603 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 9 10:03:33.996385 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 9 10:03:34.000382 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 9 10:03:34.004585 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 9 10:03:34.009315 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 9 10:03:34.010660 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 9 10:03:34.010772 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jul 9 10:03:34.012490 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jul 9 10:03:34.013612 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 9 10:03:34.015356 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 9 10:03:34.017436 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 9 10:03:34.017728 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jul 9 10:03:34.020965 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jul 9 10:03:34.023715 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 9 10:03:34.023942 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 9 10:03:34.025757 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jul 9 10:03:34.027561 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 9 10:03:34.027794 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 9 10:03:34.029790 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 9 10:03:34.030005 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 9 10:03:34.031832 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 9 10:03:34.033341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 9 10:03:34.047401 systemd[1]: Finished ensure-sysext.service.
Jul 9 10:03:34.055276 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jul 9 10:03:34.073351 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 9 10:03:34.073903 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 9 10:03:34.073982 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 9 10:03:34.083371 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Jul 9 10:03:34.084717 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 9 10:03:34.093480 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jul 9 10:03:34.165783 systemd-resolved[1345]: Positive Trust Anchors:
Jul 9 10:03:34.165800 systemd-resolved[1345]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 9 10:03:34.166242 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1387)
Jul 9 10:03:34.165832 systemd-resolved[1345]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 9 10:03:34.173394 systemd-resolved[1345]: Defaulting to hostname 'linux'.
Jul 9 10:03:34.177033 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 9 10:03:34.178573 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 9 10:03:34.181169 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
Jul 9 10:03:34.195170 kernel: ACPI: button: Power Button [PWRF]
Jul 9 10:03:34.202121 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Jul 9 10:03:34.208197 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Jul 9 10:03:34.208474 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jul 9 10:03:34.208749 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jul 9 10:03:34.227048 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 9 10:03:34.235316 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jul 9 10:03:34.240043 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Jul 9 10:03:34.241467 systemd[1]: Reached target time-set.target - System Time Set.
Jul 9 10:03:34.252252 systemd-networkd[1418]: lo: Link UP
Jul 9 10:03:34.252265 systemd-networkd[1418]: lo: Gained carrier
Jul 9 10:03:34.257403 systemd-networkd[1418]: Enumeration completed
Jul 9 10:03:34.257485 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 9 10:03:34.260314 systemd[1]: Reached target network.target - Network.
Jul 9 10:03:34.265932 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 10:03:34.265945 systemd-networkd[1418]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 9 10:03:34.268319 systemd-networkd[1418]: eth0: Link UP
Jul 9 10:03:34.268334 systemd-networkd[1418]: eth0: Gained carrier
Jul 9 10:03:34.268352 systemd-networkd[1418]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 9 10:03:34.317190 kernel: mousedev: PS/2 mouse device common for all mice
Jul 9 10:03:34.317952 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jul 9 10:03:34.324353 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jul 9 10:03:34.327272 systemd-networkd[1418]: eth0: DHCPv4 address 10.0.0.36/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 9 10:03:34.328301 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jul 9 10:03:34.328584 systemd-timesyncd[1421]: Network configuration changed, trying to establish connection.
Jul 9 10:03:34.329479 systemd-timesyncd[1421]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Jul 9 10:03:34.329521 systemd-timesyncd[1421]: Initial clock synchronization to Wed 2025-07-09 10:03:34.703650 UTC.
Jul 9 10:03:34.330915 kernel: kvm_amd: TSC scaling supported
Jul 9 10:03:34.330949 kernel: kvm_amd: Nested Virtualization enabled
Jul 9 10:03:34.330972 kernel: kvm_amd: Nested Paging enabled
Jul 9 10:03:34.330989 kernel: kvm_amd: LBR virtualization supported
Jul 9 10:03:34.332456 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Jul 9 10:03:34.332548 kernel: kvm_amd: Virtual GIF supported
Jul 9 10:03:34.355449 kernel: EDAC MC: Ver: 3.0.0
Jul 9 10:03:34.365752 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 9 10:03:34.371304 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Jul 9 10:03:34.384641 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jul 9 10:03:34.388568 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jul 9 10:03:34.402637 lvm[1442]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 9 10:03:34.443701 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jul 9 10:03:34.472731 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 9 10:03:34.474964 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 9 10:03:34.476098 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 9 10:03:34.477338 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jul 9 10:03:34.478604 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jul 9 10:03:34.480209 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jul 9 10:03:34.481498 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jul 9 10:03:34.482905 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jul 9 10:03:34.484293 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 9 10:03:34.484327 systemd[1]: Reached target paths.target - Path Units.
Jul 9 10:03:34.485218 systemd[1]: Reached target timers.target - Timer Units.
Jul 9 10:03:34.487107 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jul 9 10:03:34.490282 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jul 9 10:03:34.494437 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Jul 9 10:03:34.496088 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Jul 9 10:03:34.497531 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Jul 9 10:03:34.505545 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jul 9 10:03:34.507274 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Jul 9 10:03:34.510011 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jul 9 10:03:34.511781 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jul 9 10:03:34.513040 systemd[1]: Reached target sockets.target - Socket Units.
Jul 9 10:03:34.514156 systemd[1]: Reached target basic.target - Basic System.
Jul 9 10:03:34.515249 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jul 9 10:03:34.515277 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jul 9 10:03:34.516427 systemd[1]: Starting containerd.service - containerd container runtime...
Jul 9 10:03:34.518664 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jul 9 10:03:34.519391 lvm[1451]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jul 9 10:03:34.523381 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jul 9 10:03:34.526269 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jul 9 10:03:34.527295 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jul 9 10:03:34.531913 jq[1454]: false
Jul 9 10:03:34.532313 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jul 9 10:03:34.536267 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jul 9 10:03:34.538797 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jul 9 10:03:34.542487 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found loop3
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found loop4
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found loop5
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found sr0
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found vda
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found vda1
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found vda2
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found vda3
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found usr
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found vda4
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found vda6
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found vda7
Jul 9 10:03:34.547939 extend-filesystems[1455]: Found vda9
Jul 9 10:03:34.547939 extend-filesystems[1455]: Checking size of /dev/vda9
Jul 9 10:03:34.554471 dbus-daemon[1453]: [system] SELinux support is enabled
Jul 9 10:03:34.548458 systemd[1]: Starting systemd-logind.service - User Login Management...
Jul 9 10:03:34.552554 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jul 9 10:03:34.553104 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 9 10:03:34.554684 systemd[1]: Starting update-engine.service - Update Engine...
Jul 9 10:03:34.559369 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jul 9 10:03:34.561380 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jul 9 10:03:34.565604 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jul 9 10:03:34.568229 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 9 10:03:34.568478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jul 9 10:03:34.569537 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 9 10:03:34.569895 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jul 9 10:03:34.571178 jq[1466]: true
Jul 9 10:03:34.581578 extend-filesystems[1455]: Resized partition /dev/vda9
Jul 9 10:03:34.589670 systemd[1]: motdgen.service: Deactivated successfully.
Jul 9 10:03:34.589994 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jul 9 10:03:34.594976 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 9 10:03:34.596038 jq[1473]: true
Jul 9 10:03:34.595040 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jul 9 10:03:34.596836 update_engine[1464]: I20250709 10:03:34.596421 1464 main.cc:92] Flatcar Update Engine starting
Jul 9 10:03:34.597043 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024)
Jul 9 10:03:34.598192 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 9 10:03:34.598218 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jul 9 10:03:34.601204 update_engine[1464]: I20250709 10:03:34.600976 1464 update_check_scheduler.cc:74] Next update check in 6m59s
Jul 9 10:03:34.604306 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Jul 9 10:03:34.609480 systemd[1]: Started update-engine.service - Update Engine.
Jul 9 10:03:34.618254 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1389)
Jul 9 10:03:34.618453 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Jul 9 10:03:34.620884 (ntainerd)[1488]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jul 9 10:03:34.621902 tar[1472]: linux-amd64/LICENSE
Jul 9 10:03:34.623439 tar[1472]: linux-amd64/helm
Jul 9 10:03:34.630184 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Jul 9 10:03:34.650304 systemd-logind[1461]: Watching system buttons on /dev/input/event1 (Power Button)
Jul 9 10:03:34.655408 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Jul 9 10:03:34.655408 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 1
Jul 9 10:03:34.655408 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Jul 9 10:03:34.650337 systemd-logind[1461]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 9 10:03:34.666295 extend-filesystems[1455]: Resized filesystem in /dev/vda9
Jul 9 10:03:34.650883 systemd-logind[1461]: New seat seat0.
Jul 9 10:03:34.654995 systemd[1]: Started systemd-logind.service - User Login Management.
Jul 9 10:03:34.659719 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 9 10:03:34.660458 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Jul 9 10:03:34.741880 bash[1509]: Updated "/home/core/.ssh/authorized_keys"
Jul 9 10:03:34.744991 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Jul 9 10:03:34.747114 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Jul 9 10:03:34.752688 locksmithd[1492]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 9 10:03:34.821237 sshd_keygen[1486]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 9 10:03:34.871135 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Jul 9 10:03:34.878485 systemd[1]: Starting issuegen.service - Generate /run/issue...
Jul 9 10:03:34.888174 systemd[1]: issuegen.service: Deactivated successfully.
Jul 9 10:03:34.888565 systemd[1]: Finished issuegen.service - Generate /run/issue.
Jul 9 10:03:34.898408 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jul 9 10:03:34.914329 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Jul 9 10:03:34.967497 systemd[1]: Started getty@tty1.service - Getty on tty1.
Jul 9 10:03:34.975427 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Jul 9 10:03:34.976850 systemd[1]: Reached target getty.target - Login Prompts.
Jul 9 10:03:35.066494 containerd[1488]: time="2025-07-09T10:03:35.066296699Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Jul 9 10:03:35.097160 containerd[1488]: time="2025-07-09T10:03:35.097102744Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 9 10:03:35.099470 containerd[1488]: time="2025-07-09T10:03:35.099422951Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 9 10:03:35.099470 containerd[1488]: time="2025-07-09T10:03:35.099454297Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 9 10:03:35.099470 containerd[1488]: time="2025-07-09T10:03:35.099470889Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 9 10:03:35.099702 containerd[1488]: time="2025-07-09T10:03:35.099678478Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Jul 9 10:03:35.099702 containerd[1488]: time="2025-07-09T10:03:35.099697985Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Jul 9 10:03:35.099819 containerd[1488]: time="2025-07-09T10:03:35.099779640Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Jul 9 10:03:35.099819 containerd[1488]: time="2025-07-09T10:03:35.099811480Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 9 10:03:35.100120 containerd[1488]: time="2025-07-09T10:03:35.100091580Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 9 10:03:35.100120 containerd[1488]: time="2025-07-09T10:03:35.100111632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 9 10:03:35.100165 containerd[1488]: time="2025-07-09T10:03:35.100125118Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Jul 9 10:03:35.100165 containerd[1488]: time="2025-07-09T10:03:35.100135354Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 9 10:03:35.100308 containerd[1488]: time="2025-07-09T10:03:35.100284633Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 9 10:03:35.100589 containerd[1488]: time="2025-07-09T10:03:35.100564376Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 9 10:03:35.100755 containerd[1488]: time="2025-07-09T10:03:35.100732469Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 9 10:03:35.100755 containerd[1488]: time="2025-07-09T10:03:35.100748367Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 9 10:03:35.100882 containerd[1488]: time="2025-07-09T10:03:35.100867862Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 9 10:03:35.100950 containerd[1488]: time="2025-07-09T10:03:35.100936178Z" level=info msg="metadata content store policy set" policy=shared
Jul 9 10:03:35.109173 containerd[1488]: time="2025-07-09T10:03:35.109112420Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 9 10:03:35.109173 containerd[1488]: time="2025-07-09T10:03:35.109164763Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 9 10:03:35.109173 containerd[1488]: time="2025-07-09T10:03:35.109181092Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Jul 9 10:03:35.109354 containerd[1488]: time="2025-07-09T10:03:35.109211925Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Jul 9 10:03:35.109354 containerd[1488]: time="2025-07-09T10:03:35.109226418Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 9 10:03:35.109420 containerd[1488]: time="2025-07-09T10:03:35.109377008Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 9 10:03:35.109778 containerd[1488]: time="2025-07-09T10:03:35.109722675Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 9 10:03:35.109938 containerd[1488]: time="2025-07-09T10:03:35.109915110Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Jul 9 10:03:35.109963 containerd[1488]: time="2025-07-09T10:03:35.109947034Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Jul 9 10:03:35.109983 containerd[1488]: time="2025-07-09T10:03:35.109963950Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Jul 9 10:03:35.109983 containerd[1488]: time="2025-07-09T10:03:35.109977982Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 9 10:03:35.110021 containerd[1488]: time="2025-07-09T10:03:35.109991888Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 9 10:03:35.110021 containerd[1488]: time="2025-07-09T10:03:35.110003792Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 9 10:03:35.110070 containerd[1488]: time="2025-07-09T10:03:35.110034678Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 9 10:03:35.110070 containerd[1488]: time="2025-07-09T10:03:35.110052055Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 9 10:03:35.110070 containerd[1488]: time="2025-07-09T10:03:35.110068490Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 9 10:03:35.110125 containerd[1488]: time="2025-07-09T10:03:35.110081231Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 9 10:03:35.110125 containerd[1488]: time="2025-07-09T10:03:35.110092044Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 9 10:03:35.110125 containerd[1488]: time="2025-07-09T10:03:35.110120255Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110186 containerd[1488]: time="2025-07-09T10:03:35.110135505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110224 containerd[1488]: time="2025-07-09T10:03:35.110167470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110224 containerd[1488]: time="2025-07-09T10:03:35.110204177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110224 containerd[1488]: time="2025-07-09T10:03:35.110216100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110299 containerd[1488]: time="2025-07-09T10:03:35.110229146Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110299 containerd[1488]: time="2025-07-09T10:03:35.110256226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110299 containerd[1488]: time="2025-07-09T10:03:35.110268348Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110355 containerd[1488]: time="2025-07-09T10:03:35.110300650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110355 containerd[1488]: time="2025-07-09T10:03:35.110320273Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110355 containerd[1488]: time="2025-07-09T10:03:35.110336119Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110355 containerd[1488]: time="2025-07-09T10:03:35.110351336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110432 containerd[1488]: time="2025-07-09T10:03:35.110362683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110432 containerd[1488]: time="2025-07-09T10:03:35.110376516Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Jul 9 10:03:35.110432 containerd[1488]: time="2025-07-09T10:03:35.110399358Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110488 containerd[1488]: time="2025-07-09T10:03:35.110432341Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110488 containerd[1488]: time="2025-07-09T10:03:35.110444517Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 9 10:03:35.110558 containerd[1488]: time="2025-07-09T10:03:35.110530232Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 9 10:03:35.110582 containerd[1488]: time="2025-07-09T10:03:35.110559680Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Jul 9 10:03:35.110582 containerd[1488]: time="2025-07-09T10:03:35.110570021Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 9 10:03:35.110715 containerd[1488]: time="2025-07-09T10:03:35.110581337Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Jul 9 10:03:35.110715 containerd[1488]: time="2025-07-09T10:03:35.110678735Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.110715 containerd[1488]: time="2025-07-09T10:03:35.110707606Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Jul 9 10:03:35.110788 containerd[1488]: time="2025-07-09T10:03:35.110723893Z" level=info msg="NRI interface is disabled by configuration."
Jul 9 10:03:35.110788 containerd[1488]: time="2025-07-09T10:03:35.110744816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 9 10:03:35.111226 containerd[1488]: time="2025-07-09T10:03:35.111150890Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 9 10:03:35.111373 containerd[1488]: time="2025-07-09T10:03:35.111235105Z" level=info msg="Connect containerd service" Jul 9 10:03:35.111373 containerd[1488]: time="2025-07-09T10:03:35.111291926Z" level=info msg="using legacy CRI server" Jul 9 10:03:35.111373 containerd[1488]: time="2025-07-09T10:03:35.111301386Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 9 10:03:35.111463 containerd[1488]: time="2025-07-09T10:03:35.111436339Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 9 10:03:35.112378 containerd[1488]: time="2025-07-09T10:03:35.112295808Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 10:03:35.113136 containerd[1488]: time="2025-07-09T10:03:35.112510140Z" level=info msg="Start subscribing containerd event" Jul 9 10:03:35.113136 containerd[1488]: time="2025-07-09T10:03:35.112615183Z" level=info msg="Start recovering state" Jul 9 10:03:35.113136 containerd[1488]: time="2025-07-09T10:03:35.112618455Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jul 9 10:03:35.113136 containerd[1488]: time="2025-07-09T10:03:35.112691605Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 9 10:03:35.113136 containerd[1488]: time="2025-07-09T10:03:35.112738505Z" level=info msg="Start event monitor" Jul 9 10:03:35.113136 containerd[1488]: time="2025-07-09T10:03:35.112774445Z" level=info msg="Start snapshots syncer" Jul 9 10:03:35.113136 containerd[1488]: time="2025-07-09T10:03:35.112787020Z" level=info msg="Start cni network conf syncer for default" Jul 9 10:03:35.113136 containerd[1488]: time="2025-07-09T10:03:35.112802174Z" level=info msg="Start streaming server" Jul 9 10:03:35.113018 systemd[1]: Started containerd.service - containerd container runtime. Jul 9 10:03:35.114942 containerd[1488]: time="2025-07-09T10:03:35.114583104Z" level=info msg="containerd successfully booted in 0.067541s" Jul 9 10:03:35.380289 tar[1472]: linux-amd64/README.md Jul 9 10:03:35.395746 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 9 10:03:35.597458 systemd-networkd[1418]: eth0: Gained IPv6LL Jul 9 10:03:35.601706 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 9 10:03:35.603775 systemd[1]: Reached target network-online.target - Network is Online. Jul 9 10:03:35.617418 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 9 10:03:35.620117 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:03:35.622501 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 9 10:03:35.645918 systemd[1]: coreos-metadata.service: Deactivated successfully. Jul 9 10:03:35.646306 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 9 10:03:35.648001 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Jul 9 10:03:35.650188 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 9 10:03:36.916645 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:03:36.918610 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 9 10:03:36.920042 systemd[1]: Startup finished in 840ms (kernel) + 7.023s (initrd) + 5.234s (userspace) = 13.098s. Jul 9 10:03:36.960583 (kubelet)[1568]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 10:03:37.690118 kubelet[1568]: E0709 10:03:37.690007 1568 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 10:03:37.694553 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 10:03:37.694795 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 10:03:37.695261 systemd[1]: kubelet.service: Consumed 1.829s CPU time, 268.9M memory peak. Jul 9 10:03:38.283633 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 9 10:03:38.285216 systemd[1]: Started sshd@0-10.0.0.36:22-10.0.0.1:56494.service - OpenSSH per-connection server daemon (10.0.0.1:56494). Jul 9 10:03:38.341905 sshd[1581]: Accepted publickey for core from 10.0.0.1 port 56494 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:03:38.344325 sshd-session[1581]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:03:38.351809 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 9 10:03:38.363500 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 9 10:03:38.370408 systemd-logind[1461]: New session 1 of user core. 
Jul 9 10:03:38.376066 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 9 10:03:38.388609 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 9 10:03:38.392871 (systemd)[1585]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 9 10:03:38.395691 systemd-logind[1461]: New session c1 of user core. Jul 9 10:03:38.555295 systemd[1585]: Queued start job for default target default.target. Jul 9 10:03:38.564614 systemd[1585]: Created slice app.slice - User Application Slice. Jul 9 10:03:38.564642 systemd[1585]: Reached target paths.target - Paths. Jul 9 10:03:38.564688 systemd[1585]: Reached target timers.target - Timers. Jul 9 10:03:38.566470 systemd[1585]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 9 10:03:38.581636 systemd[1585]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 9 10:03:38.581782 systemd[1585]: Reached target sockets.target - Sockets. Jul 9 10:03:38.581832 systemd[1585]: Reached target basic.target - Basic System. Jul 9 10:03:38.581878 systemd[1585]: Reached target default.target - Main User Target. Jul 9 10:03:38.581917 systemd[1585]: Startup finished in 177ms. Jul 9 10:03:38.582564 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 9 10:03:38.585095 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 9 10:03:38.655633 systemd[1]: Started sshd@1-10.0.0.36:22-10.0.0.1:56506.service - OpenSSH per-connection server daemon (10.0.0.1:56506). Jul 9 10:03:38.696074 sshd[1596]: Accepted publickey for core from 10.0.0.1 port 56506 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:03:38.697950 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:03:38.702877 systemd-logind[1461]: New session 2 of user core. Jul 9 10:03:38.712336 systemd[1]: Started session-2.scope - Session 2 of User core. 
Jul 9 10:03:38.768826 sshd[1598]: Connection closed by 10.0.0.1 port 56506 Jul 9 10:03:38.769248 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Jul 9 10:03:38.788425 systemd[1]: sshd@1-10.0.0.36:22-10.0.0.1:56506.service: Deactivated successfully. Jul 9 10:03:38.790487 systemd[1]: session-2.scope: Deactivated successfully. Jul 9 10:03:38.792637 systemd-logind[1461]: Session 2 logged out. Waiting for processes to exit. Jul 9 10:03:38.794026 systemd[1]: Started sshd@2-10.0.0.36:22-10.0.0.1:56512.service - OpenSSH per-connection server daemon (10.0.0.1:56512). Jul 9 10:03:38.794861 systemd-logind[1461]: Removed session 2. Jul 9 10:03:38.858827 sshd[1603]: Accepted publickey for core from 10.0.0.1 port 56512 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:03:38.860431 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:03:38.865133 systemd-logind[1461]: New session 3 of user core. Jul 9 10:03:38.874318 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 9 10:03:38.924935 sshd[1606]: Connection closed by 10.0.0.1 port 56512 Jul 9 10:03:38.925327 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Jul 9 10:03:38.943160 systemd[1]: sshd@2-10.0.0.36:22-10.0.0.1:56512.service: Deactivated successfully. Jul 9 10:03:38.945292 systemd[1]: session-3.scope: Deactivated successfully. Jul 9 10:03:38.946745 systemd-logind[1461]: Session 3 logged out. Waiting for processes to exit. Jul 9 10:03:38.948148 systemd[1]: Started sshd@3-10.0.0.36:22-10.0.0.1:56514.service - OpenSSH per-connection server daemon (10.0.0.1:56514). Jul 9 10:03:38.948967 systemd-logind[1461]: Removed session 3. 
Jul 9 10:03:38.988813 sshd[1611]: Accepted publickey for core from 10.0.0.1 port 56514 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:03:38.990387 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:03:38.994787 systemd-logind[1461]: New session 4 of user core. Jul 9 10:03:39.004305 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 9 10:03:39.059766 sshd[1614]: Connection closed by 10.0.0.1 port 56514 Jul 9 10:03:39.060216 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Jul 9 10:03:39.079073 systemd[1]: sshd@3-10.0.0.36:22-10.0.0.1:56514.service: Deactivated successfully. Jul 9 10:03:39.081217 systemd[1]: session-4.scope: Deactivated successfully. Jul 9 10:03:39.082992 systemd-logind[1461]: Session 4 logged out. Waiting for processes to exit. Jul 9 10:03:39.093443 systemd[1]: Started sshd@4-10.0.0.36:22-10.0.0.1:56530.service - OpenSSH per-connection server daemon (10.0.0.1:56530). Jul 9 10:03:39.094347 systemd-logind[1461]: Removed session 4. Jul 9 10:03:39.134586 sshd[1619]: Accepted publickey for core from 10.0.0.1 port 56530 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:03:39.136360 sshd-session[1619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:03:39.141483 systemd-logind[1461]: New session 5 of user core. Jul 9 10:03:39.151327 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jul 9 10:03:39.210228 sudo[1623]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 9 10:03:39.210597 sudo[1623]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 10:03:39.227744 sudo[1623]: pam_unix(sudo:session): session closed for user root Jul 9 10:03:39.229505 sshd[1622]: Connection closed by 10.0.0.1 port 56530 Jul 9 10:03:39.229956 sshd-session[1619]: pam_unix(sshd:session): session closed for user core Jul 9 10:03:39.241139 systemd[1]: sshd@4-10.0.0.36:22-10.0.0.1:56530.service: Deactivated successfully. Jul 9 10:03:39.243259 systemd[1]: session-5.scope: Deactivated successfully. Jul 9 10:03:39.244958 systemd-logind[1461]: Session 5 logged out. Waiting for processes to exit. Jul 9 10:03:39.258507 systemd[1]: Started sshd@5-10.0.0.36:22-10.0.0.1:56542.service - OpenSSH per-connection server daemon (10.0.0.1:56542). Jul 9 10:03:39.259572 systemd-logind[1461]: Removed session 5. Jul 9 10:03:39.294874 sshd[1628]: Accepted publickey for core from 10.0.0.1 port 56542 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:03:39.296303 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:03:39.300720 systemd-logind[1461]: New session 6 of user core. Jul 9 10:03:39.310310 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jul 9 10:03:39.365615 sudo[1633]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 9 10:03:39.365979 sudo[1633]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 10:03:39.370325 sudo[1633]: pam_unix(sudo:session): session closed for user root Jul 9 10:03:39.377727 sudo[1632]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 9 10:03:39.378078 sudo[1632]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 10:03:39.395489 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 9 10:03:39.429053 augenrules[1655]: No rules Jul 9 10:03:39.431355 systemd[1]: audit-rules.service: Deactivated successfully. Jul 9 10:03:39.431688 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 9 10:03:39.432924 sudo[1632]: pam_unix(sudo:session): session closed for user root Jul 9 10:03:39.434584 sshd[1631]: Connection closed by 10.0.0.1 port 56542 Jul 9 10:03:39.434985 sshd-session[1628]: pam_unix(sshd:session): session closed for user core Jul 9 10:03:39.446301 systemd[1]: sshd@5-10.0.0.36:22-10.0.0.1:56542.service: Deactivated successfully. Jul 9 10:03:39.448446 systemd[1]: session-6.scope: Deactivated successfully. Jul 9 10:03:39.450051 systemd-logind[1461]: Session 6 logged out. Waiting for processes to exit. Jul 9 10:03:39.461437 systemd[1]: Started sshd@6-10.0.0.36:22-10.0.0.1:56544.service - OpenSSH per-connection server daemon (10.0.0.1:56544). Jul 9 10:03:39.462476 systemd-logind[1461]: Removed session 6. Jul 9 10:03:39.496719 sshd[1663]: Accepted publickey for core from 10.0.0.1 port 56544 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:03:39.498077 sshd-session[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:03:39.502647 systemd-logind[1461]: New session 7 of user core. 
Jul 9 10:03:39.516309 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 9 10:03:39.570691 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 9 10:03:39.571049 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 9 10:03:40.220425 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 9 10:03:40.220645 (dockerd)[1687]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 9 10:03:41.151091 dockerd[1687]: time="2025-07-09T10:03:41.150973321Z" level=info msg="Starting up" Jul 9 10:03:41.719761 dockerd[1687]: time="2025-07-09T10:03:41.719693938Z" level=info msg="Loading containers: start." Jul 9 10:03:41.943237 kernel: Initializing XFRM netlink socket Jul 9 10:03:42.051071 systemd-networkd[1418]: docker0: Link UP Jul 9 10:03:42.095198 dockerd[1687]: time="2025-07-09T10:03:42.095128311Z" level=info msg="Loading containers: done." Jul 9 10:03:42.123117 dockerd[1687]: time="2025-07-09T10:03:42.123064810Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 9 10:03:42.123341 dockerd[1687]: time="2025-07-09T10:03:42.123197620Z" level=info msg="Docker daemon" commit=41ca978a0a5400cc24b274137efa9f25517fcc0b containerd-snapshotter=false storage-driver=overlay2 version=27.3.1 Jul 9 10:03:42.123375 dockerd[1687]: time="2025-07-09T10:03:42.123356631Z" level=info msg="Daemon has completed initialization" Jul 9 10:03:42.161346 dockerd[1687]: time="2025-07-09T10:03:42.161274483Z" level=info msg="API listen on /run/docker.sock" Jul 9 10:03:42.161465 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jul 9 10:03:43.290485 containerd[1488]: time="2025-07-09T10:03:43.290370722Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 9 10:03:43.981848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount207504898.mount: Deactivated successfully. Jul 9 10:03:45.510648 containerd[1488]: time="2025-07-09T10:03:45.510577194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:45.511293 containerd[1488]: time="2025-07-09T10:03:45.511205648Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 9 10:03:45.512466 containerd[1488]: time="2025-07-09T10:03:45.512437892Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:45.515207 containerd[1488]: time="2025-07-09T10:03:45.515130453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:45.516307 containerd[1488]: time="2025-07-09T10:03:45.516274652Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.225809748s" Jul 9 10:03:45.516344 containerd[1488]: time="2025-07-09T10:03:45.516315352Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 9 10:03:45.517011 containerd[1488]: time="2025-07-09T10:03:45.516975988Z" 
level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 9 10:03:47.026378 containerd[1488]: time="2025-07-09T10:03:47.026255706Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:47.027263 containerd[1488]: time="2025-07-09T10:03:47.027168916Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 9 10:03:47.028725 containerd[1488]: time="2025-07-09T10:03:47.028689637Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:47.033113 containerd[1488]: time="2025-07-09T10:03:47.033031733Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:47.035500 containerd[1488]: time="2025-07-09T10:03:47.035429398Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.518416282s" Jul 9 10:03:47.035500 containerd[1488]: time="2025-07-09T10:03:47.035464319Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 9 10:03:47.036083 containerd[1488]: time="2025-07-09T10:03:47.036006962Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 9 10:03:47.945221 systemd[1]: kubelet.service: Scheduled 
restart job, restart counter is at 1. Jul 9 10:03:47.954352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:03:48.369938 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:03:48.376594 (kubelet)[1954]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 9 10:03:49.207434 kubelet[1954]: E0709 10:03:49.207366 1954 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 9 10:03:49.214365 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 9 10:03:49.214593 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 9 10:03:49.215056 systemd[1]: kubelet.service: Consumed 552ms CPU time, 111.2M memory peak. 
Jul 9 10:03:49.298716 containerd[1488]: time="2025-07-09T10:03:49.298632948Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:49.300165 containerd[1488]: time="2025-07-09T10:03:49.300091168Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 9 10:03:49.301512 containerd[1488]: time="2025-07-09T10:03:49.301441978Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:49.306781 containerd[1488]: time="2025-07-09T10:03:49.306741701Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:49.308481 containerd[1488]: time="2025-07-09T10:03:49.308388806Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 2.272265785s" Jul 9 10:03:49.308481 containerd[1488]: time="2025-07-09T10:03:49.308486164Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 9 10:03:49.309107 containerd[1488]: time="2025-07-09T10:03:49.309048972Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 9 10:03:50.572984 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076119940.mount: Deactivated successfully. 
Jul 9 10:03:51.428723 containerd[1488]: time="2025-07-09T10:03:51.428651940Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:51.429829 containerd[1488]: time="2025-07-09T10:03:51.429789339Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 9 10:03:51.430997 containerd[1488]: time="2025-07-09T10:03:51.430951841Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:51.433958 containerd[1488]: time="2025-07-09T10:03:51.433906350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:51.434898 containerd[1488]: time="2025-07-09T10:03:51.434817212Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.125708636s" Jul 9 10:03:51.434898 containerd[1488]: time="2025-07-09T10:03:51.434872428Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 9 10:03:51.435619 containerd[1488]: time="2025-07-09T10:03:51.435578866Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 9 10:03:51.905813 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110530134.mount: Deactivated successfully. 
Jul 9 10:03:52.967791 containerd[1488]: time="2025-07-09T10:03:52.967706302Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:52.968754 containerd[1488]: time="2025-07-09T10:03:52.968662907Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 9 10:03:52.969596 containerd[1488]: time="2025-07-09T10:03:52.969555274Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:52.972754 containerd[1488]: time="2025-07-09T10:03:52.972688209Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:52.974598 containerd[1488]: time="2025-07-09T10:03:52.974544501Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.538921418s" Jul 9 10:03:52.974598 containerd[1488]: time="2025-07-09T10:03:52.974580702Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 9 10:03:52.975312 containerd[1488]: time="2025-07-09T10:03:52.975275953Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 9 10:03:53.438121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1464102178.mount: Deactivated successfully. 
Jul 9 10:03:53.467049 containerd[1488]: time="2025-07-09T10:03:53.466980168Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:53.468033 containerd[1488]: time="2025-07-09T10:03:53.467984734Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 9 10:03:53.469391 containerd[1488]: time="2025-07-09T10:03:53.469363083Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:53.471759 containerd[1488]: time="2025-07-09T10:03:53.471702954Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:53.472578 containerd[1488]: time="2025-07-09T10:03:53.472517062Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 497.024196ms" Jul 9 10:03:53.472578 containerd[1488]: time="2025-07-09T10:03:53.472572600Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 9 10:03:53.473196 containerd[1488]: time="2025-07-09T10:03:53.473126708Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 9 10:03:54.118681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1917967792.mount: Deactivated successfully. 
Jul 9 10:03:56.195053 containerd[1488]: time="2025-07-09T10:03:56.194978538Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:56.195798 containerd[1488]: time="2025-07-09T10:03:56.195742716Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 9 10:03:56.197108 containerd[1488]: time="2025-07-09T10:03:56.197070248Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:56.200002 containerd[1488]: time="2025-07-09T10:03:56.199968404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:03:56.201209 containerd[1488]: time="2025-07-09T10:03:56.201141451Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.727960978s" Jul 9 10:03:56.201257 containerd[1488]: time="2025-07-09T10:03:56.201209420Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 9 10:03:58.122844 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:03:58.123046 systemd[1]: kubelet.service: Consumed 552ms CPU time, 111.2M memory peak. Jul 9 10:03:58.136361 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:03:58.162855 systemd[1]: Reload requested from client PID 2110 ('systemctl') (unit session-7.scope)... 
Jul 9 10:03:58.162879 systemd[1]: Reloading... Jul 9 10:03:58.401215 zram_generator::config[2157]: No configuration found. Jul 9 10:03:58.852523 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 10:03:58.957958 systemd[1]: Reloading finished in 794 ms. Jul 9 10:03:59.014965 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:03:59.019234 (kubelet)[2192]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 10:03:59.023372 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:03:59.111138 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 10:03:59.111520 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:03:59.111571 systemd[1]: kubelet.service: Consumed 395ms CPU time, 99.4M memory peak. Jul 9 10:03:59.113528 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:03:59.298608 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:03:59.303202 (kubelet)[2205]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 10:03:59.346690 kubelet[2205]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 10:03:59.346690 kubelet[2205]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jul 9 10:03:59.346690 kubelet[2205]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 10:03:59.347239 kubelet[2205]: I0709 10:03:59.346766 2205 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 10:03:59.754190 kubelet[2205]: I0709 10:03:59.754122 2205 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 10:03:59.754190 kubelet[2205]: I0709 10:03:59.754175 2205 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 10:03:59.754514 kubelet[2205]: I0709 10:03:59.754484 2205 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 10:03:59.779430 kubelet[2205]: I0709 10:03:59.779358 2205 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 10:03:59.780683 kubelet[2205]: E0709 10:03:59.780632 2205 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:03:59.786118 kubelet[2205]: E0709 10:03:59.786079 2205 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 9 10:03:59.786118 kubelet[2205]: I0709 10:03:59.786119 2205 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
Jul 9 10:03:59.793118 kubelet[2205]: I0709 10:03:59.793073 2205 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 9 10:03:59.794584 kubelet[2205]: I0709 10:03:59.794531 2205 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 10:03:59.794795 kubelet[2205]: I0709 10:03:59.794573 2205 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersio
n":2} Jul 9 10:03:59.794926 kubelet[2205]: I0709 10:03:59.794811 2205 topology_manager.go:138] "Creating topology manager with none policy" Jul 9 10:03:59.794926 kubelet[2205]: I0709 10:03:59.794822 2205 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 10:03:59.795014 kubelet[2205]: I0709 10:03:59.794997 2205 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:03:59.797497 kubelet[2205]: I0709 10:03:59.797470 2205 kubelet.go:446] "Attempting to sync node with API server" Jul 9 10:03:59.797543 kubelet[2205]: I0709 10:03:59.797506 2205 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 10:03:59.797543 kubelet[2205]: I0709 10:03:59.797535 2205 kubelet.go:352] "Adding apiserver pod source" Jul 9 10:03:59.797595 kubelet[2205]: I0709 10:03:59.797552 2205 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 10:03:59.801217 kubelet[2205]: I0709 10:03:59.800583 2205 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 9 10:03:59.801217 kubelet[2205]: I0709 10:03:59.801053 2205 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 10:03:59.802026 kubelet[2205]: W0709 10:03:59.802001 2205 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jul 9 10:03:59.802250 kubelet[2205]: W0709 10:03:59.802131 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 9 10:03:59.802250 kubelet[2205]: W0709 10:03:59.802189 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 9 10:03:59.802337 kubelet[2205]: E0709 10:03:59.802255 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:03:59.802337 kubelet[2205]: E0709 10:03:59.802257 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:03:59.804382 kubelet[2205]: I0709 10:03:59.804354 2205 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 10:03:59.804491 kubelet[2205]: I0709 10:03:59.804402 2205 server.go:1287] "Started kubelet" Jul 9 10:03:59.805133 kubelet[2205]: I0709 10:03:59.805069 2205 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 10:03:59.805621 kubelet[2205]: I0709 10:03:59.805270 2205 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 10:03:59.805621 kubelet[2205]: I0709 10:03:59.805448 2205 
server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 10:03:59.806883 kubelet[2205]: I0709 10:03:59.806364 2205 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 10:03:59.806883 kubelet[2205]: I0709 10:03:59.806564 2205 server.go:479] "Adding debug handlers to kubelet server" Jul 9 10:03:59.808444 kubelet[2205]: E0709 10:03:59.807941 2205 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:03:59.808444 kubelet[2205]: I0709 10:03:59.807979 2205 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 10:03:59.808444 kubelet[2205]: I0709 10:03:59.808004 2205 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 10:03:59.808444 kubelet[2205]: I0709 10:03:59.808173 2205 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 10:03:59.808444 kubelet[2205]: I0709 10:03:59.808230 2205 reconciler.go:26] "Reconciler: start to sync state" Jul 9 10:03:59.808612 kubelet[2205]: W0709 10:03:59.808493 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 9 10:03:59.808612 kubelet[2205]: E0709 10:03:59.808525 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:03:59.809370 kubelet[2205]: E0709 10:03:59.809112 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="200ms" Jul 9 10:03:59.809492 kubelet[2205]: E0709 10:03:59.808278 2205 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18508d23c4549e05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 10:03:59.804374533 +0000 UTC m=+0.496908828,LastTimestamp:2025-07-09 10:03:59.804374533 +0000 UTC m=+0.496908828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 9 10:03:59.810341 kubelet[2205]: I0709 10:03:59.810270 2205 factory.go:221] Registration of the systemd container factory successfully Jul 9 10:03:59.810434 kubelet[2205]: I0709 10:03:59.810363 2205 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 10:03:59.810590 kubelet[2205]: E0709 10:03:59.810559 2205 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 10:03:59.811594 kubelet[2205]: I0709 10:03:59.811564 2205 factory.go:221] Registration of the containerd container factory successfully Jul 9 10:03:59.825596 kubelet[2205]: I0709 10:03:59.825567 2205 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 10:03:59.825596 kubelet[2205]: I0709 10:03:59.825585 2205 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 10:03:59.825722 kubelet[2205]: I0709 10:03:59.825614 2205 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:03:59.829008 kubelet[2205]: I0709 10:03:59.828976 2205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 10:03:59.830892 kubelet[2205]: I0709 10:03:59.830862 2205 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 9 10:03:59.830946 kubelet[2205]: I0709 10:03:59.830917 2205 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 10:03:59.830969 kubelet[2205]: I0709 10:03:59.830952 2205 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 9 10:03:59.830969 kubelet[2205]: I0709 10:03:59.830964 2205 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 10:03:59.831173 kubelet[2205]: E0709 10:03:59.831025 2205 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 10:03:59.831852 kubelet[2205]: W0709 10:03:59.831829 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 9 10:03:59.831897 kubelet[2205]: E0709 10:03:59.831864 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:03:59.908576 kubelet[2205]: E0709 10:03:59.908514 2205 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:03:59.931928 kubelet[2205]: E0709 10:03:59.931859 2205 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 10:04:00.009409 kubelet[2205]: E0709 10:04:00.009239 2205 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:04:00.009856 kubelet[2205]: E0709 10:04:00.009784 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="400ms" Jul 9 10:04:00.110181 kubelet[2205]: E0709 10:04:00.110085 2205 kubelet_node_status.go:466] "Error getting the current node 
from lister" err="node \"localhost\" not found" Jul 9 10:04:00.132375 kubelet[2205]: E0709 10:04:00.132301 2205 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 9 10:04:00.176915 kubelet[2205]: I0709 10:04:00.176869 2205 policy_none.go:49] "None policy: Start" Jul 9 10:04:00.176915 kubelet[2205]: I0709 10:04:00.176914 2205 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 10:04:00.176915 kubelet[2205]: I0709 10:04:00.176936 2205 state_mem.go:35] "Initializing new in-memory state store" Jul 9 10:04:00.184836 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 9 10:04:00.200480 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 9 10:04:00.203812 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 9 10:04:00.210906 kubelet[2205]: E0709 10:04:00.210876 2205 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:04:00.215321 kubelet[2205]: I0709 10:04:00.215174 2205 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 10:04:00.215460 kubelet[2205]: I0709 10:04:00.215385 2205 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 10:04:00.215460 kubelet[2205]: I0709 10:04:00.215400 2205 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 10:04:00.215888 kubelet[2205]: I0709 10:04:00.215696 2205 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 10:04:00.216584 kubelet[2205]: E0709 10:04:00.216565 2205 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 9 10:04:00.216647 kubelet[2205]: E0709 10:04:00.216615 2205 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 9 10:04:00.317013 kubelet[2205]: I0709 10:04:00.316862 2205 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:04:00.317247 kubelet[2205]: E0709 10:04:00.317221 2205 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 9 10:04:00.411286 kubelet[2205]: E0709 10:04:00.411220 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="800ms" Jul 9 10:04:00.519351 kubelet[2205]: I0709 10:04:00.519315 2205 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:04:00.519811 kubelet[2205]: E0709 10:04:00.519758 2205 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 9 10:04:00.540673 systemd[1]: Created slice kubepods-burstable-pode91b195228d7d388bfc0b0f87323acec.slice - libcontainer container kubepods-burstable-pode91b195228d7d388bfc0b0f87323acec.slice. Jul 9 10:04:00.553272 kubelet[2205]: E0709 10:04:00.553231 2205 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:04:00.556803 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. 
Jul 9 10:04:00.558569 kubelet[2205]: E0709 10:04:00.558536 2205 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:04:00.560183 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. Jul 9 10:04:00.561882 kubelet[2205]: E0709 10:04:00.561847 2205 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:04:00.613489 kubelet[2205]: I0709 10:04:00.613362 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:00.613489 kubelet[2205]: I0709 10:04:00.613416 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 10:04:00.613489 kubelet[2205]: I0709 10:04:00.613439 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e91b195228d7d388bfc0b0f87323acec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e91b195228d7d388bfc0b0f87323acec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:00.613489 kubelet[2205]: I0709 10:04:00.613456 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e91b195228d7d388bfc0b0f87323acec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e91b195228d7d388bfc0b0f87323acec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:00.613489 kubelet[2205]: I0709 10:04:00.613474 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e91b195228d7d388bfc0b0f87323acec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e91b195228d7d388bfc0b0f87323acec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:00.613661 kubelet[2205]: I0709 10:04:00.613491 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:00.613661 kubelet[2205]: I0709 10:04:00.613509 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:00.613661 kubelet[2205]: I0709 10:04:00.613553 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:00.613661 kubelet[2205]: I0709 10:04:00.613575 2205 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" 
(UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:00.649048 kubelet[2205]: W0709 10:04:00.648973 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 9 10:04:00.649115 kubelet[2205]: E0709 10:04:00.649064 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.36:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:04:00.854786 containerd[1488]: time="2025-07-09T10:04:00.854731678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e91b195228d7d388bfc0b0f87323acec,Namespace:kube-system,Attempt:0,}" Jul 9 10:04:00.859418 containerd[1488]: time="2025-07-09T10:04:00.859362293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 9 10:04:00.863005 containerd[1488]: time="2025-07-09T10:04:00.862967599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 9 10:04:00.892716 kubelet[2205]: W0709 10:04:00.892574 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 9 10:04:00.892716 kubelet[2205]: E0709 
10:04:00.892659 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.36:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:04:00.921707 kubelet[2205]: I0709 10:04:00.921668 2205 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:04:00.922118 kubelet[2205]: E0709 10:04:00.922065 2205 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 9 10:04:01.149175 kubelet[2205]: W0709 10:04:01.148921 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 9 10:04:01.149175 kubelet[2205]: E0709 10:04:01.149018 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.36:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:04:01.212198 kubelet[2205]: E0709 10:04:01.212125 2205 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.36:6443: connect: connection refused" interval="1.6s" Jul 9 10:04:01.384603 kubelet[2205]: E0709 10:04:01.384451 2205 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.36:6443/api/v1/namespaces/default/events\": dial tcp 
10.0.0.36:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.18508d23c4549e05 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-09 10:03:59.804374533 +0000 UTC m=+0.496908828,LastTimestamp:2025-07-09 10:03:59.804374533 +0000 UTC m=+0.496908828,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 9 10:04:01.432507 kubelet[2205]: W0709 10:04:01.432389 2205 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.36:6443: connect: connection refused Jul 9 10:04:01.432507 kubelet[2205]: E0709 10:04:01.432429 2205 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.36:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:04:01.724117 kubelet[2205]: I0709 10:04:01.723965 2205 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:04:01.724510 kubelet[2205]: E0709 10:04:01.724440 2205 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.36:6443/api/v1/nodes\": dial tcp 10.0.0.36:6443: connect: connection refused" node="localhost" Jul 9 10:04:01.876613 kubelet[2205]: E0709 10:04:01.876552 2205 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create 
certificate signing request: Post \"https://10.0.0.36:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.36:6443: connect: connection refused" logger="UnhandledError" Jul 9 10:04:01.966922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3202773203.mount: Deactivated successfully. Jul 9 10:04:01.973702 containerd[1488]: time="2025-07-09T10:04:01.973659518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 10:04:01.977435 containerd[1488]: time="2025-07-09T10:04:01.977345527Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 9 10:04:01.978407 containerd[1488]: time="2025-07-09T10:04:01.978366209Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 10:04:01.979343 containerd[1488]: time="2025-07-09T10:04:01.979311922Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 10:04:01.980407 containerd[1488]: time="2025-07-09T10:04:01.980376308Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 10:04:01.981112 containerd[1488]: time="2025-07-09T10:04:01.981078432Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 9 10:04:01.982065 containerd[1488]: time="2025-07-09T10:04:01.982014482Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 9 10:04:01.983518 containerd[1488]: 
time="2025-07-09T10:04:01.983478267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 9 10:04:01.985570 containerd[1488]: time="2025-07-09T10:04:01.985520012Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.126050203s" Jul 9 10:04:01.986184 containerd[1488]: time="2025-07-09T10:04:01.986115240Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.13128049s" Jul 9 10:04:01.989323 containerd[1488]: time="2025-07-09T10:04:01.989284322Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 1.126172164s" Jul 9 10:04:02.101813 containerd[1488]: time="2025-07-09T10:04:02.100530504Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 10:04:02.101813 containerd[1488]: time="2025-07-09T10:04:02.101804969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 10:04:02.102046 containerd[1488]: time="2025-07-09T10:04:02.101835766Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:02.102046 containerd[1488]: time="2025-07-09T10:04:02.101950797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:02.102622 containerd[1488]: time="2025-07-09T10:04:02.102400048Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 10:04:02.102710 containerd[1488]: time="2025-07-09T10:04:02.102614934Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 10:04:02.102710 containerd[1488]: time="2025-07-09T10:04:02.102637174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:02.102710 containerd[1488]: time="2025-07-09T10:04:02.102467972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 10:04:02.102710 containerd[1488]: time="2025-07-09T10:04:02.102629148Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 10:04:02.102710 containerd[1488]: time="2025-07-09T10:04:02.102650957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:02.102974 containerd[1488]: time="2025-07-09T10:04:02.102865280Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:02.103303 containerd[1488]: time="2025-07-09T10:04:02.102953688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:02.123418 systemd[1]: Started cri-containerd-5a795bf3e0041c66e5eda57bc56b762ac4237e7fdaf23684f227a9341397097e.scope - libcontainer container 5a795bf3e0041c66e5eda57bc56b762ac4237e7fdaf23684f227a9341397097e. Jul 9 10:04:02.129039 systemd[1]: Started cri-containerd-0d8da7ffd35e6403b844c8ba9e1d8a261f652f8fe82f37ecd0d1dbb5e97114fb.scope - libcontainer container 0d8da7ffd35e6403b844c8ba9e1d8a261f652f8fe82f37ecd0d1dbb5e97114fb. Jul 9 10:04:02.131883 systemd[1]: Started cri-containerd-4c100af72aeed2503351685e9ee50eae9bf36e44246304c089effee053238e2d.scope - libcontainer container 4c100af72aeed2503351685e9ee50eae9bf36e44246304c089effee053238e2d. Jul 9 10:04:02.171568 containerd[1488]: time="2025-07-09T10:04:02.171472975Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:e91b195228d7d388bfc0b0f87323acec,Namespace:kube-system,Attempt:0,} returns sandbox id \"0d8da7ffd35e6403b844c8ba9e1d8a261f652f8fe82f37ecd0d1dbb5e97114fb\"" Jul 9 10:04:02.174320 containerd[1488]: time="2025-07-09T10:04:02.173967543Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"5a795bf3e0041c66e5eda57bc56b762ac4237e7fdaf23684f227a9341397097e\"" Jul 9 10:04:02.176777 containerd[1488]: time="2025-07-09T10:04:02.176698455Z" level=info msg="CreateContainer within sandbox \"5a795bf3e0041c66e5eda57bc56b762ac4237e7fdaf23684f227a9341397097e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 9 10:04:02.177269 containerd[1488]: time="2025-07-09T10:04:02.177130061Z" level=info msg="CreateContainer within sandbox 
\"0d8da7ffd35e6403b844c8ba9e1d8a261f652f8fe82f37ecd0d1dbb5e97114fb\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 9 10:04:02.180755 containerd[1488]: time="2025-07-09T10:04:02.180712759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"4c100af72aeed2503351685e9ee50eae9bf36e44246304c089effee053238e2d\"" Jul 9 10:04:02.183929 containerd[1488]: time="2025-07-09T10:04:02.183787331Z" level=info msg="CreateContainer within sandbox \"4c100af72aeed2503351685e9ee50eae9bf36e44246304c089effee053238e2d\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 9 10:04:02.197254 containerd[1488]: time="2025-07-09T10:04:02.197175091Z" level=info msg="CreateContainer within sandbox \"0d8da7ffd35e6403b844c8ba9e1d8a261f652f8fe82f37ecd0d1dbb5e97114fb\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a2e180ca23ff9d8167a9142a65f73161cf0e2efaba47c88b4b42fa72ff926f06\"" Jul 9 10:04:02.197777 containerd[1488]: time="2025-07-09T10:04:02.197746055Z" level=info msg="StartContainer for \"a2e180ca23ff9d8167a9142a65f73161cf0e2efaba47c88b4b42fa72ff926f06\"" Jul 9 10:04:02.199341 containerd[1488]: time="2025-07-09T10:04:02.199312317Z" level=info msg="CreateContainer within sandbox \"5a795bf3e0041c66e5eda57bc56b762ac4237e7fdaf23684f227a9341397097e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"bcd5a1ed0dbb185bb3297e96b2ff0cf48a1d76c175db01db7430fbed3e04b7cf\"" Jul 9 10:04:02.199649 containerd[1488]: time="2025-07-09T10:04:02.199614205Z" level=info msg="StartContainer for \"bcd5a1ed0dbb185bb3297e96b2ff0cf48a1d76c175db01db7430fbed3e04b7cf\"" Jul 9 10:04:02.205712 containerd[1488]: time="2025-07-09T10:04:02.205678091Z" level=info msg="CreateContainer within sandbox \"4c100af72aeed2503351685e9ee50eae9bf36e44246304c089effee053238e2d\" for 
&ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"360bd4f9f159d30cae9b099af2da16ddf1ca04b505f863bced8c401165b0e9d2\"" Jul 9 10:04:02.206541 containerd[1488]: time="2025-07-09T10:04:02.206520618Z" level=info msg="StartContainer for \"360bd4f9f159d30cae9b099af2da16ddf1ca04b505f863bced8c401165b0e9d2\"" Jul 9 10:04:02.226314 systemd[1]: Started cri-containerd-a2e180ca23ff9d8167a9142a65f73161cf0e2efaba47c88b4b42fa72ff926f06.scope - libcontainer container a2e180ca23ff9d8167a9142a65f73161cf0e2efaba47c88b4b42fa72ff926f06. Jul 9 10:04:02.230205 systemd[1]: Started cri-containerd-bcd5a1ed0dbb185bb3297e96b2ff0cf48a1d76c175db01db7430fbed3e04b7cf.scope - libcontainer container bcd5a1ed0dbb185bb3297e96b2ff0cf48a1d76c175db01db7430fbed3e04b7cf. Jul 9 10:04:02.234304 systemd[1]: Started cri-containerd-360bd4f9f159d30cae9b099af2da16ddf1ca04b505f863bced8c401165b0e9d2.scope - libcontainer container 360bd4f9f159d30cae9b099af2da16ddf1ca04b505f863bced8c401165b0e9d2. Jul 9 10:04:02.273384 containerd[1488]: time="2025-07-09T10:04:02.273310149Z" level=info msg="StartContainer for \"a2e180ca23ff9d8167a9142a65f73161cf0e2efaba47c88b4b42fa72ff926f06\" returns successfully" Jul 9 10:04:02.285825 containerd[1488]: time="2025-07-09T10:04:02.285739978Z" level=info msg="StartContainer for \"bcd5a1ed0dbb185bb3297e96b2ff0cf48a1d76c175db01db7430fbed3e04b7cf\" returns successfully" Jul 9 10:04:02.285825 containerd[1488]: time="2025-07-09T10:04:02.285744231Z" level=info msg="StartContainer for \"360bd4f9f159d30cae9b099af2da16ddf1ca04b505f863bced8c401165b0e9d2\" returns successfully" Jul 9 10:04:04.165879 kubelet[2205]: I0709 10:04:04.165767 2205 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:04:04.175525 kubelet[2205]: E0709 10:04:04.174904 2205 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:04:04.181546 kubelet[2205]: E0709 
10:04:04.181505 2205 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:04:04.185500 kubelet[2205]: E0709 10:04:04.185467 2205 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 9 10:04:04.679342 kubelet[2205]: E0709 10:04:04.679296 2205 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 9 10:04:04.765951 kubelet[2205]: I0709 10:04:04.765896 2205 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 10:04:04.809304 kubelet[2205]: I0709 10:04:04.809238 2205 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:04.813713 kubelet[2205]: E0709 10:04:04.813654 2205 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:04.813713 kubelet[2205]: I0709 10:04:04.813701 2205 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:04.815144 kubelet[2205]: E0709 10:04:04.815097 2205 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:04.815144 kubelet[2205]: I0709 10:04:04.815119 2205 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 10:04:04.816669 kubelet[2205]: E0709 10:04:04.816633 2205 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-localhost" Jul 9 10:04:05.165633 kubelet[2205]: I0709 10:04:05.165493 2205 apiserver.go:52] "Watching apiserver" Jul 9 10:04:05.185254 kubelet[2205]: I0709 10:04:05.185236 2205 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:05.185730 kubelet[2205]: I0709 10:04:05.185433 2205 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:05.185769 kubelet[2205]: I0709 10:04:05.185725 2205 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 10:04:05.187275 kubelet[2205]: E0709 10:04:05.187251 2205 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:05.187348 kubelet[2205]: E0709 10:04:05.187253 2205 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:05.187683 kubelet[2205]: E0709 10:04:05.187667 2205 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 9 10:04:05.209045 kubelet[2205]: I0709 10:04:05.209000 2205 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 10:04:06.750698 systemd[1]: Reload requested from client PID 2482 ('systemctl') (unit session-7.scope)... Jul 9 10:04:06.750721 systemd[1]: Reloading... Jul 9 10:04:07.011775 zram_generator::config[2527]: No configuration found. 
Jul 9 10:04:07.191273 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 9 10:04:07.327684 systemd[1]: Reloading finished in 576 ms. Jul 9 10:04:07.354682 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:04:07.374902 systemd[1]: kubelet.service: Deactivated successfully. Jul 9 10:04:07.375344 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:04:07.375434 systemd[1]: kubelet.service: Consumed 1.068s CPU time, 133.1M memory peak. Jul 9 10:04:07.387396 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 9 10:04:07.613706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 9 10:04:07.623599 (kubelet)[2571]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 9 10:04:07.675220 kubelet[2571]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 9 10:04:07.675220 kubelet[2571]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 9 10:04:07.675220 kubelet[2571]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 9 10:04:07.675683 kubelet[2571]: I0709 10:04:07.675332 2571 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 9 10:04:07.682272 kubelet[2571]: I0709 10:04:07.682225 2571 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 9 10:04:07.682272 kubelet[2571]: I0709 10:04:07.682262 2571 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 9 10:04:07.682534 kubelet[2571]: I0709 10:04:07.682507 2571 server.go:954] "Client rotation is on, will bootstrap in background" Jul 9 10:04:07.683788 kubelet[2571]: I0709 10:04:07.683762 2571 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 9 10:04:07.686638 kubelet[2571]: I0709 10:04:07.686525 2571 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 9 10:04:07.692250 kubelet[2571]: E0709 10:04:07.692211 2571 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jul 9 10:04:07.692250 kubelet[2571]: I0709 10:04:07.692243 2571 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jul 9 10:04:07.697478 kubelet[2571]: I0709 10:04:07.697437 2571 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 9 10:04:07.697791 kubelet[2571]: I0709 10:04:07.697737 2571 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 9 10:04:07.697968 kubelet[2571]: I0709 10:04:07.697782 2571 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 9 10:04:07.698077 kubelet[2571]: I0709 10:04:07.697979 2571 topology_manager.go:138] "Creating topology manager with none policy" Jul 
9 10:04:07.698077 kubelet[2571]: I0709 10:04:07.697990 2571 container_manager_linux.go:304] "Creating device plugin manager" Jul 9 10:04:07.698077 kubelet[2571]: I0709 10:04:07.698052 2571 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:04:07.698260 kubelet[2571]: I0709 10:04:07.698243 2571 kubelet.go:446] "Attempting to sync node with API server" Jul 9 10:04:07.698309 kubelet[2571]: I0709 10:04:07.698272 2571 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 9 10:04:07.698309 kubelet[2571]: I0709 10:04:07.698293 2571 kubelet.go:352] "Adding apiserver pod source" Jul 9 10:04:07.698309 kubelet[2571]: I0709 10:04:07.698304 2571 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 9 10:04:07.702567 kubelet[2571]: I0709 10:04:07.702537 2571 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jul 9 10:04:07.702962 kubelet[2571]: I0709 10:04:07.702937 2571 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 9 10:04:07.703947 kubelet[2571]: I0709 10:04:07.703918 2571 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 9 10:04:07.704000 kubelet[2571]: I0709 10:04:07.703952 2571 server.go:1287] "Started kubelet" Jul 9 10:04:07.705011 kubelet[2571]: I0709 10:04:07.704970 2571 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 9 10:04:07.706555 kubelet[2571]: I0709 10:04:07.706531 2571 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 9 10:04:07.706555 kubelet[2571]: I0709 10:04:07.706565 2571 server.go:479] "Adding debug handlers to kubelet server" Jul 9 10:04:07.706681 kubelet[2571]: I0709 10:04:07.706527 2571 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 9 10:04:07.709688 kubelet[2571]: I0709 10:04:07.707973 2571 server.go:243] "Starting to serve the podresources API" 
endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 9 10:04:07.709688 kubelet[2571]: I0709 10:04:07.708016 2571 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 9 10:04:07.710796 kubelet[2571]: I0709 10:04:07.710750 2571 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 9 10:04:07.711087 kubelet[2571]: E0709 10:04:07.710877 2571 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 9 10:04:07.712013 kubelet[2571]: I0709 10:04:07.711216 2571 factory.go:221] Registration of the systemd container factory successfully Jul 9 10:04:07.712013 kubelet[2571]: I0709 10:04:07.711317 2571 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 9 10:04:07.712013 kubelet[2571]: I0709 10:04:07.711523 2571 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 9 10:04:07.712013 kubelet[2571]: I0709 10:04:07.711665 2571 reconciler.go:26] "Reconciler: start to sync state" Jul 9 10:04:07.717175 kubelet[2571]: E0709 10:04:07.717128 2571 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 9 10:04:07.717881 kubelet[2571]: I0709 10:04:07.717849 2571 factory.go:221] Registration of the containerd container factory successfully Jul 9 10:04:07.729817 kubelet[2571]: I0709 10:04:07.729174 2571 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 9 10:04:07.732613 kubelet[2571]: I0709 10:04:07.732531 2571 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 9 10:04:07.732680 kubelet[2571]: I0709 10:04:07.732658 2571 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 9 10:04:07.733740 kubelet[2571]: I0709 10:04:07.732694 2571 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jul 9 10:04:07.733740 kubelet[2571]: I0709 10:04:07.732711 2571 kubelet.go:2382] "Starting kubelet main sync loop" Jul 9 10:04:07.733740 kubelet[2571]: E0709 10:04:07.732869 2571 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 9 10:04:07.764983 sudo[2603]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 9 10:04:07.765454 sudo[2603]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jul 9 10:04:07.778917 kubelet[2571]: I0709 10:04:07.778871 2571 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 9 10:04:07.778917 kubelet[2571]: I0709 10:04:07.778891 2571 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 9 10:04:07.778917 kubelet[2571]: I0709 10:04:07.778914 2571 state_mem.go:36] "Initialized new in-memory state store" Jul 9 10:04:07.779117 kubelet[2571]: I0709 10:04:07.779102 2571 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 9 10:04:07.779145 kubelet[2571]: I0709 10:04:07.779113 2571 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 9 10:04:07.779145 kubelet[2571]: I0709 10:04:07.779135 2571 policy_none.go:49] "None policy: Start" Jul 9 10:04:07.779208 kubelet[2571]: I0709 10:04:07.779166 2571 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 9 10:04:07.779208 kubelet[2571]: I0709 10:04:07.779194 2571 state_mem.go:35] "Initializing new in-memory state store" Jul 9 10:04:07.779315 kubelet[2571]: I0709 10:04:07.779302 2571 state_mem.go:75] "Updated machine memory state" Jul 9 
10:04:07.787346 kubelet[2571]: I0709 10:04:07.787307 2571 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 9 10:04:07.787754 kubelet[2571]: I0709 10:04:07.787734 2571 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 9 10:04:07.787794 kubelet[2571]: I0709 10:04:07.787752 2571 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 9 10:04:07.788135 kubelet[2571]: I0709 10:04:07.788106 2571 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 9 10:04:07.791471 kubelet[2571]: E0709 10:04:07.790214 2571 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 9 10:04:07.833686 kubelet[2571]: I0709 10:04:07.833630 2571 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 9 10:04:07.834638 kubelet[2571]: I0709 10:04:07.834605 2571 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:07.835756 kubelet[2571]: I0709 10:04:07.835734 2571 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:07.891043 kubelet[2571]: I0709 10:04:07.890865 2571 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 9 10:04:07.897896 kubelet[2571]: I0709 10:04:07.897840 2571 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 9 10:04:07.898073 kubelet[2571]: I0709 10:04:07.897966 2571 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 9 10:04:08.013161 kubelet[2571]: I0709 10:04:08.013093 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/e91b195228d7d388bfc0b0f87323acec-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"e91b195228d7d388bfc0b0f87323acec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:08.013161 kubelet[2571]: I0709 10:04:08.013132 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e91b195228d7d388bfc0b0f87323acec-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"e91b195228d7d388bfc0b0f87323acec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:08.013350 kubelet[2571]: I0709 10:04:08.013175 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:08.013350 kubelet[2571]: I0709 10:04:08.013194 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:08.013350 kubelet[2571]: I0709 10:04:08.013208 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e91b195228d7d388bfc0b0f87323acec-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"e91b195228d7d388bfc0b0f87323acec\") " pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:08.013350 kubelet[2571]: I0709 10:04:08.013221 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: 
\"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:08.013350 kubelet[2571]: I0709 10:04:08.013235 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:08.013470 kubelet[2571]: I0709 10:04:08.013250 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 9 10:04:08.013470 kubelet[2571]: I0709 10:04:08.013264 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 9 10:04:08.320811 sudo[2603]: pam_unix(sudo:session): session closed for user root Jul 9 10:04:08.699543 kubelet[2571]: I0709 10:04:08.699331 2571 apiserver.go:52] "Watching apiserver" Jul 9 10:04:08.712081 kubelet[2571]: I0709 10:04:08.712012 2571 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 9 10:04:08.759171 kubelet[2571]: I0709 10:04:08.759107 2571 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:08.766265 kubelet[2571]: E0709 10:04:08.766206 
2571 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 9 10:04:08.784522 kubelet[2571]: I0709 10:04:08.784431 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.7843987019999998 podStartE2EDuration="1.784398702s" podCreationTimestamp="2025-07-09 10:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:04:08.77728159 +0000 UTC m=+1.143337352" watchObservedRunningTime="2025-07-09 10:04:08.784398702 +0000 UTC m=+1.150454464" Jul 9 10:04:08.792444 kubelet[2571]: I0709 10:04:08.792373 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.792352181 podStartE2EDuration="1.792352181s" podCreationTimestamp="2025-07-09 10:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:04:08.784676612 +0000 UTC m=+1.150732374" watchObservedRunningTime="2025-07-09 10:04:08.792352181 +0000 UTC m=+1.158407943" Jul 9 10:04:08.801337 kubelet[2571]: I0709 10:04:08.801267 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.801248151 podStartE2EDuration="1.801248151s" podCreationTimestamp="2025-07-09 10:04:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:04:08.792560952 +0000 UTC m=+1.158616714" watchObservedRunningTime="2025-07-09 10:04:08.801248151 +0000 UTC m=+1.167303913" Jul 9 10:04:09.751052 sudo[1667]: pam_unix(sudo:session): session closed for user root Jul 9 10:04:09.753238 sshd[1666]: Connection closed by 10.0.0.1 port 
56544 Jul 9 10:04:09.753954 sshd-session[1663]: pam_unix(sshd:session): session closed for user core Jul 9 10:04:09.758629 systemd[1]: sshd@6-10.0.0.36:22-10.0.0.1:56544.service: Deactivated successfully. Jul 9 10:04:09.761437 systemd[1]: session-7.scope: Deactivated successfully. Jul 9 10:04:09.761790 systemd[1]: session-7.scope: Consumed 4.857s CPU time, 254M memory peak. Jul 9 10:04:09.763334 systemd-logind[1461]: Session 7 logged out. Waiting for processes to exit. Jul 9 10:04:09.764573 systemd-logind[1461]: Removed session 7. Jul 9 10:04:12.313256 kubelet[2571]: I0709 10:04:12.313212 2571 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 9 10:04:12.313807 kubelet[2571]: I0709 10:04:12.313720 2571 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 9 10:04:12.313851 containerd[1488]: time="2025-07-09T10:04:12.313521378Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 9 10:04:13.358123 systemd[1]: Created slice kubepods-besteffort-poda26baa31_b699_4187_b9ba_9a8a83927eb7.slice - libcontainer container kubepods-besteffort-poda26baa31_b699_4187_b9ba_9a8a83927eb7.slice. Jul 9 10:04:13.376141 systemd[1]: Created slice kubepods-burstable-pod0427ac22_349b_4e16_8131_7c94ffd506d2.slice - libcontainer container kubepods-burstable-pod0427ac22_349b_4e16_8131_7c94ffd506d2.slice. 
Jul 9 10:04:13.443319 kubelet[2571]: I0709 10:04:13.443267 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-etc-cni-netd\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.443319 kubelet[2571]: I0709 10:04:13.443311 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0427ac22-349b-4e16-8131-7c94ffd506d2-hubble-tls\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.443928 kubelet[2571]: I0709 10:04:13.443339 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztrc7\" (UniqueName: \"kubernetes.io/projected/a26baa31-b699-4187-b9ba-9a8a83927eb7-kube-api-access-ztrc7\") pod \"kube-proxy-9mrl9\" (UID: \"a26baa31-b699-4187-b9ba-9a8a83927eb7\") " pod="kube-system/kube-proxy-9mrl9" Jul 9 10:04:13.443928 kubelet[2571]: I0709 10:04:13.443361 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-run\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.443928 kubelet[2571]: I0709 10:04:13.443378 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-xtables-lock\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.443928 kubelet[2571]: I0709 10:04:13.443396 2571 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cni-path\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.443928 kubelet[2571]: I0709 10:04:13.443415 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0427ac22-349b-4e16-8131-7c94ffd506d2-clustermesh-secrets\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.443928 kubelet[2571]: I0709 10:04:13.443437 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-config-path\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.444139 kubelet[2571]: I0709 10:04:13.443484 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a26baa31-b699-4187-b9ba-9a8a83927eb7-kube-proxy\") pod \"kube-proxy-9mrl9\" (UID: \"a26baa31-b699-4187-b9ba-9a8a83927eb7\") " pod="kube-system/kube-proxy-9mrl9" Jul 9 10:04:13.444139 kubelet[2571]: I0709 10:04:13.443517 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-bpf-maps\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.444139 kubelet[2571]: I0709 10:04:13.443537 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-host-proc-sys-net\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.444139 kubelet[2571]: I0709 10:04:13.443556 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-cgroup\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.447253 kubelet[2571]: I0709 10:04:13.443574 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gdkqx\" (UniqueName: \"kubernetes.io/projected/0427ac22-349b-4e16-8131-7c94ffd506d2-kube-api-access-gdkqx\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.447253 kubelet[2571]: I0709 10:04:13.447117 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-host-proc-sys-kernel\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.447253 kubelet[2571]: I0709 10:04:13.447175 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-lib-modules\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.448061 kubelet[2571]: I0709 10:04:13.447672 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a26baa31-b699-4187-b9ba-9a8a83927eb7-xtables-lock\") pod \"kube-proxy-9mrl9\" (UID: 
\"a26baa31-b699-4187-b9ba-9a8a83927eb7\") " pod="kube-system/kube-proxy-9mrl9" Jul 9 10:04:13.448061 kubelet[2571]: I0709 10:04:13.447739 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a26baa31-b699-4187-b9ba-9a8a83927eb7-lib-modules\") pod \"kube-proxy-9mrl9\" (UID: \"a26baa31-b699-4187-b9ba-9a8a83927eb7\") " pod="kube-system/kube-proxy-9mrl9" Jul 9 10:04:13.448061 kubelet[2571]: I0709 10:04:13.447773 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-hostproc\") pod \"cilium-fbzh6\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " pod="kube-system/cilium-fbzh6" Jul 9 10:04:13.465916 systemd[1]: Created slice kubepods-besteffort-podd7eed657_f6f5_4b6e_a39d_3c671510523a.slice - libcontainer container kubepods-besteffort-podd7eed657_f6f5_4b6e_a39d_3c671510523a.slice. 
Jul 9 10:04:13.549325 kubelet[2571]: I0709 10:04:13.548480 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7eed657-f6f5-4b6e-a39d-3c671510523a-cilium-config-path\") pod \"cilium-operator-6c4d7847fc-4wf9c\" (UID: \"d7eed657-f6f5-4b6e-a39d-3c671510523a\") " pod="kube-system/cilium-operator-6c4d7847fc-4wf9c" Jul 9 10:04:13.549325 kubelet[2571]: I0709 10:04:13.548622 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7q5fp\" (UniqueName: \"kubernetes.io/projected/d7eed657-f6f5-4b6e-a39d-3c671510523a-kube-api-access-7q5fp\") pod \"cilium-operator-6c4d7847fc-4wf9c\" (UID: \"d7eed657-f6f5-4b6e-a39d-3c671510523a\") " pod="kube-system/cilium-operator-6c4d7847fc-4wf9c" Jul 9 10:04:13.672353 containerd[1488]: time="2025-07-09T10:04:13.672246588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9mrl9,Uid:a26baa31-b699-4187-b9ba-9a8a83927eb7,Namespace:kube-system,Attempt:0,}" Jul 9 10:04:13.679767 containerd[1488]: time="2025-07-09T10:04:13.679730403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbzh6,Uid:0427ac22-349b-4e16-8131-7c94ffd506d2,Namespace:kube-system,Attempt:0,}" Jul 9 10:04:13.703133 containerd[1488]: time="2025-07-09T10:04:13.702865237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 10:04:13.703133 containerd[1488]: time="2025-07-09T10:04:13.702925125Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 10:04:13.703133 containerd[1488]: time="2025-07-09T10:04:13.702947765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:13.703133 containerd[1488]: time="2025-07-09T10:04:13.703039646Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:13.710386 containerd[1488]: time="2025-07-09T10:04:13.709893432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 10:04:13.710386 containerd[1488]: time="2025-07-09T10:04:13.709960449Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 10:04:13.710386 containerd[1488]: time="2025-07-09T10:04:13.709973293Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:13.710386 containerd[1488]: time="2025-07-09T10:04:13.710054798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:13.727376 systemd[1]: Started cri-containerd-7148c7ac4e73b85a339f232405b8f57e6bba324ee65074a713a2a0b0d6de34a3.scope - libcontainer container 7148c7ac4e73b85a339f232405b8f57e6bba324ee65074a713a2a0b0d6de34a3. Jul 9 10:04:13.731377 systemd[1]: Started cri-containerd-bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5.scope - libcontainer container bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5. 
Jul 9 10:04:13.763088 containerd[1488]: time="2025-07-09T10:04:13.763032308Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9mrl9,Uid:a26baa31-b699-4187-b9ba-9a8a83927eb7,Namespace:kube-system,Attempt:0,} returns sandbox id \"7148c7ac4e73b85a339f232405b8f57e6bba324ee65074a713a2a0b0d6de34a3\"" Jul 9 10:04:13.763558 containerd[1488]: time="2025-07-09T10:04:13.763425404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fbzh6,Uid:0427ac22-349b-4e16-8131-7c94ffd506d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\"" Jul 9 10:04:13.769258 containerd[1488]: time="2025-07-09T10:04:13.769206615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4wf9c,Uid:d7eed657-f6f5-4b6e-a39d-3c671510523a,Namespace:kube-system,Attempt:0,}" Jul 9 10:04:13.771298 containerd[1488]: time="2025-07-09T10:04:13.771001692Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 9 10:04:13.771930 containerd[1488]: time="2025-07-09T10:04:13.771816256Z" level=info msg="CreateContainer within sandbox \"7148c7ac4e73b85a339f232405b8f57e6bba324ee65074a713a2a0b0d6de34a3\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 9 10:04:13.797083 containerd[1488]: time="2025-07-09T10:04:13.796915654Z" level=info msg="CreateContainer within sandbox \"7148c7ac4e73b85a339f232405b8f57e6bba324ee65074a713a2a0b0d6de34a3\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"79f235dfb7130301f008fbc4d21d5625721acb9be63aef222fd523fecd8988de\"" Jul 9 10:04:13.798232 containerd[1488]: time="2025-07-09T10:04:13.797750292Z" level=info msg="StartContainer for \"79f235dfb7130301f008fbc4d21d5625721acb9be63aef222fd523fecd8988de\"" Jul 9 10:04:13.807550 containerd[1488]: time="2025-07-09T10:04:13.807440987Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 10:04:13.807550 containerd[1488]: time="2025-07-09T10:04:13.807499933Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 10:04:13.807550 containerd[1488]: time="2025-07-09T10:04:13.807512525Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:13.807761 containerd[1488]: time="2025-07-09T10:04:13.807597549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:13.828396 systemd[1]: Started cri-containerd-9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5.scope - libcontainer container 9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5. Jul 9 10:04:13.832498 systemd[1]: Started cri-containerd-79f235dfb7130301f008fbc4d21d5625721acb9be63aef222fd523fecd8988de.scope - libcontainer container 79f235dfb7130301f008fbc4d21d5625721acb9be63aef222fd523fecd8988de. 
Jul 9 10:04:13.867417 containerd[1488]: time="2025-07-09T10:04:13.867368256Z" level=info msg="StartContainer for \"79f235dfb7130301f008fbc4d21d5625721acb9be63aef222fd523fecd8988de\" returns successfully" Jul 9 10:04:13.870587 containerd[1488]: time="2025-07-09T10:04:13.870551979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6c4d7847fc-4wf9c,Uid:d7eed657-f6f5-4b6e-a39d-3c671510523a,Namespace:kube-system,Attempt:0,} returns sandbox id \"9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5\"" Jul 9 10:04:14.789319 kubelet[2571]: I0709 10:04:14.789240 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9mrl9" podStartSLOduration=1.789212802 podStartE2EDuration="1.789212802s" podCreationTimestamp="2025-07-09 10:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:04:14.789190504 +0000 UTC m=+7.155246266" watchObservedRunningTime="2025-07-09 10:04:14.789212802 +0000 UTC m=+7.155268564" Jul 9 10:04:19.828077 update_engine[1464]: I20250709 10:04:19.827960 1464 update_attempter.cc:509] Updating boot flags... Jul 9 10:04:19.889180 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2949) Jul 9 10:04:19.965598 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2952) Jul 9 10:04:20.110186 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (2952) Jul 9 10:04:22.971595 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2167727639.mount: Deactivated successfully. 
Jul 9 10:04:25.437430 containerd[1488]: time="2025-07-09T10:04:25.437349555Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:04:25.438175 containerd[1488]: time="2025-07-09T10:04:25.438094570Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 9 10:04:25.439314 containerd[1488]: time="2025-07-09T10:04:25.439256629Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:04:25.441069 containerd[1488]: time="2025-07-09T10:04:25.441009259Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.669878167s" Jul 9 10:04:25.441069 containerd[1488]: time="2025-07-09T10:04:25.441069948Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 9 10:04:25.443308 containerd[1488]: time="2025-07-09T10:04:25.443277348Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 9 10:04:25.445230 containerd[1488]: time="2025-07-09T10:04:25.445189062Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for container 
&ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 9 10:04:25.461373 containerd[1488]: time="2025-07-09T10:04:25.461300647Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\"" Jul 9 10:04:25.461968 containerd[1488]: time="2025-07-09T10:04:25.461849192Z" level=info msg="StartContainer for \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\"" Jul 9 10:04:25.503459 systemd[1]: Started cri-containerd-5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e.scope - libcontainer container 5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e. Jul 9 10:04:25.533509 containerd[1488]: time="2025-07-09T10:04:25.533457578Z" level=info msg="StartContainer for \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\" returns successfully" Jul 9 10:04:25.545962 systemd[1]: cri-containerd-5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e.scope: Deactivated successfully. Jul 9 10:04:25.999773 containerd[1488]: time="2025-07-09T10:04:25.999683374Z" level=info msg="shim disconnected" id=5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e namespace=k8s.io Jul 9 10:04:25.999773 containerd[1488]: time="2025-07-09T10:04:25.999763828Z" level=warning msg="cleaning up after shim disconnected" id=5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e namespace=k8s.io Jul 9 10:04:25.999773 containerd[1488]: time="2025-07-09T10:04:25.999778351Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:04:26.456588 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e-rootfs.mount: Deactivated successfully. 
Jul 9 10:04:26.809207 containerd[1488]: time="2025-07-09T10:04:26.809000942Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 9 10:04:26.825455 containerd[1488]: time="2025-07-09T10:04:26.825382903Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\"" Jul 9 10:04:26.826115 containerd[1488]: time="2025-07-09T10:04:26.826062336Z" level=info msg="StartContainer for \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\"" Jul 9 10:04:26.856293 systemd[1]: Started cri-containerd-a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c.scope - libcontainer container a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c. Jul 9 10:04:26.882101 containerd[1488]: time="2025-07-09T10:04:26.882054959Z" level=info msg="StartContainer for \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\" returns successfully" Jul 9 10:04:26.902497 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 9 10:04:26.902740 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 9 10:04:26.903452 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 9 10:04:26.911885 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 9 10:04:26.912141 systemd[1]: cri-containerd-a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c.scope: Deactivated successfully. Jul 9 10:04:26.952774 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 9 10:04:26.967804 containerd[1488]: time="2025-07-09T10:04:26.967726132Z" level=info msg="shim disconnected" id=a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c namespace=k8s.io Jul 9 10:04:26.967804 containerd[1488]: time="2025-07-09T10:04:26.967799779Z" level=warning msg="cleaning up after shim disconnected" id=a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c namespace=k8s.io Jul 9 10:04:26.967957 containerd[1488]: time="2025-07-09T10:04:26.967811987Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:04:27.329868 containerd[1488]: time="2025-07-09T10:04:27.329808517Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:04:27.330687 containerd[1488]: time="2025-07-09T10:04:27.330644389Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 9 10:04:27.331753 containerd[1488]: time="2025-07-09T10:04:27.331722508Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 9 10:04:27.333197 containerd[1488]: time="2025-07-09T10:04:27.333137926Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 1.889732705s" Jul 9 10:04:27.333243 containerd[1488]: time="2025-07-09T10:04:27.333196217Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 9 10:04:27.335057 containerd[1488]: time="2025-07-09T10:04:27.335030299Z" level=info msg="CreateContainer within sandbox \"9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 9 10:04:27.347393 containerd[1488]: time="2025-07-09T10:04:27.347337588Z" level=info msg="CreateContainer within sandbox \"9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\"" Jul 9 10:04:27.347739 containerd[1488]: time="2025-07-09T10:04:27.347701789Z" level=info msg="StartContainer for \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\"" Jul 9 10:04:27.388464 systemd[1]: Started cri-containerd-4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838.scope - libcontainer container 4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838. Jul 9 10:04:27.417922 containerd[1488]: time="2025-07-09T10:04:27.417858828Z" level=info msg="StartContainer for \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\" returns successfully" Jul 9 10:04:27.457492 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c-rootfs.mount: Deactivated successfully. 
Jul 9 10:04:27.821455 containerd[1488]: time="2025-07-09T10:04:27.819869357Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 9 10:04:27.852950 kubelet[2571]: I0709 10:04:27.852201 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-6c4d7847fc-4wf9c" podStartSLOduration=1.390055147 podStartE2EDuration="14.852183069s" podCreationTimestamp="2025-07-09 10:04:13 +0000 UTC" firstStartedPulling="2025-07-09 10:04:13.871767721 +0000 UTC m=+6.237823483" lastFinishedPulling="2025-07-09 10:04:27.333895643 +0000 UTC m=+19.699951405" observedRunningTime="2025-07-09 10:04:27.85185041 +0000 UTC m=+20.217906172" watchObservedRunningTime="2025-07-09 10:04:27.852183069 +0000 UTC m=+20.218238831" Jul 9 10:04:27.858926 containerd[1488]: time="2025-07-09T10:04:27.858713227Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\"" Jul 9 10:04:27.862165 containerd[1488]: time="2025-07-09T10:04:27.859872869Z" level=info msg="StartContainer for \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\"" Jul 9 10:04:27.902297 systemd[1]: Started cri-containerd-145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5.scope - libcontainer container 145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5. Jul 9 10:04:27.958182 containerd[1488]: time="2025-07-09T10:04:27.956623054Z" level=info msg="StartContainer for \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\" returns successfully" Jul 9 10:04:27.973324 systemd[1]: cri-containerd-145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5.scope: Deactivated successfully. 
Jul 9 10:04:28.125979 containerd[1488]: time="2025-07-09T10:04:28.125761364Z" level=info msg="shim disconnected" id=145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5 namespace=k8s.io Jul 9 10:04:28.125979 containerd[1488]: time="2025-07-09T10:04:28.125836121Z" level=warning msg="cleaning up after shim disconnected" id=145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5 namespace=k8s.io Jul 9 10:04:28.125979 containerd[1488]: time="2025-07-09T10:04:28.125845272Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:04:28.456622 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5-rootfs.mount: Deactivated successfully. Jul 9 10:04:28.829375 containerd[1488]: time="2025-07-09T10:04:28.829317607Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 9 10:04:28.845800 containerd[1488]: time="2025-07-09T10:04:28.845733144Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\"" Jul 9 10:04:28.846377 containerd[1488]: time="2025-07-09T10:04:28.846330219Z" level=info msg="StartContainer for \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\"" Jul 9 10:04:28.881453 systemd[1]: Started cri-containerd-a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6.scope - libcontainer container a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6. Jul 9 10:04:28.909599 systemd[1]: cri-containerd-a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6.scope: Deactivated successfully. 
Jul 9 10:04:28.911533 containerd[1488]: time="2025-07-09T10:04:28.911494726Z" level=info msg="StartContainer for \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\" returns successfully" Jul 9 10:04:28.936933 containerd[1488]: time="2025-07-09T10:04:28.936855026Z" level=info msg="shim disconnected" id=a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6 namespace=k8s.io Jul 9 10:04:28.936933 containerd[1488]: time="2025-07-09T10:04:28.936918758Z" level=warning msg="cleaning up after shim disconnected" id=a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6 namespace=k8s.io Jul 9 10:04:28.936933 containerd[1488]: time="2025-07-09T10:04:28.936928240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:04:29.457410 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6-rootfs.mount: Deactivated successfully. Jul 9 10:04:29.833839 containerd[1488]: time="2025-07-09T10:04:29.833766204Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 9 10:04:29.853038 containerd[1488]: time="2025-07-09T10:04:29.852986943Z" level=info msg="CreateContainer within sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\"" Jul 9 10:04:29.853574 containerd[1488]: time="2025-07-09T10:04:29.853521910Z" level=info msg="StartContainer for \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\"" Jul 9 10:04:29.884317 systemd[1]: Started cri-containerd-dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39.scope - libcontainer container dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39. 
Jul 9 10:04:29.918506 containerd[1488]: time="2025-07-09T10:04:29.918257756Z" level=info msg="StartContainer for \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\" returns successfully" Jul 9 10:04:30.085940 kubelet[2571]: I0709 10:04:30.085568 2571 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 9 10:04:30.128206 systemd[1]: Created slice kubepods-burstable-poddfed8d58_7ae0_4938_82a5_0ed7cd8c6e4f.slice - libcontainer container kubepods-burstable-poddfed8d58_7ae0_4938_82a5_0ed7cd8c6e4f.slice. Jul 9 10:04:30.141493 systemd[1]: Created slice kubepods-burstable-pod8f48fe70_2556_4d18_9c40_c0d7d8298e6a.slice - libcontainer container kubepods-burstable-pod8f48fe70_2556_4d18_9c40_c0d7d8298e6a.slice. Jul 9 10:04:30.167793 kubelet[2571]: I0709 10:04:30.166049 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dfed8d58-7ae0-4938-82a5-0ed7cd8c6e4f-config-volume\") pod \"coredns-668d6bf9bc-8tqwx\" (UID: \"dfed8d58-7ae0-4938-82a5-0ed7cd8c6e4f\") " pod="kube-system/coredns-668d6bf9bc-8tqwx" Jul 9 10:04:30.167793 kubelet[2571]: I0709 10:04:30.166104 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/8f48fe70-2556-4d18-9c40-c0d7d8298e6a-config-volume\") pod \"coredns-668d6bf9bc-vm6mn\" (UID: \"8f48fe70-2556-4d18-9c40-c0d7d8298e6a\") " pod="kube-system/coredns-668d6bf9bc-vm6mn" Jul 9 10:04:30.167793 kubelet[2571]: I0709 10:04:30.166123 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-shm8m\" (UniqueName: \"kubernetes.io/projected/dfed8d58-7ae0-4938-82a5-0ed7cd8c6e4f-kube-api-access-shm8m\") pod \"coredns-668d6bf9bc-8tqwx\" (UID: \"dfed8d58-7ae0-4938-82a5-0ed7cd8c6e4f\") " pod="kube-system/coredns-668d6bf9bc-8tqwx" Jul 9 10:04:30.167793 kubelet[2571]: 
I0709 10:04:30.166142 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnkhd\" (UniqueName: \"kubernetes.io/projected/8f48fe70-2556-4d18-9c40-c0d7d8298e6a-kube-api-access-mnkhd\") pod \"coredns-668d6bf9bc-vm6mn\" (UID: \"8f48fe70-2556-4d18-9c40-c0d7d8298e6a\") " pod="kube-system/coredns-668d6bf9bc-vm6mn" Jul 9 10:04:30.437498 containerd[1488]: time="2025-07-09T10:04:30.437353986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8tqwx,Uid:dfed8d58-7ae0-4938-82a5-0ed7cd8c6e4f,Namespace:kube-system,Attempt:0,}" Jul 9 10:04:30.449397 containerd[1488]: time="2025-07-09T10:04:30.449340938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vm6mn,Uid:8f48fe70-2556-4d18-9c40-c0d7d8298e6a,Namespace:kube-system,Attempt:0,}" Jul 9 10:04:30.853135 kubelet[2571]: I0709 10:04:30.852768 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fbzh6" podStartSLOduration=6.179845755 podStartE2EDuration="17.852746692s" podCreationTimestamp="2025-07-09 10:04:13 +0000 UTC" firstStartedPulling="2025-07-09 10:04:13.770136032 +0000 UTC m=+6.136191794" lastFinishedPulling="2025-07-09 10:04:25.443036969 +0000 UTC m=+17.809092731" observedRunningTime="2025-07-09 10:04:30.852162613 +0000 UTC m=+23.218218405" watchObservedRunningTime="2025-07-09 10:04:30.852746692 +0000 UTC m=+23.218802454" Jul 9 10:04:32.209924 systemd-networkd[1418]: cilium_host: Link UP Jul 9 10:04:32.210088 systemd-networkd[1418]: cilium_net: Link UP Jul 9 10:04:32.210316 systemd-networkd[1418]: cilium_net: Gained carrier Jul 9 10:04:32.210505 systemd-networkd[1418]: cilium_host: Gained carrier Jul 9 10:04:32.334996 systemd-networkd[1418]: cilium_vxlan: Link UP Jul 9 10:04:32.335009 systemd-networkd[1418]: cilium_vxlan: Gained carrier Jul 9 10:04:32.423406 systemd-networkd[1418]: cilium_host: Gained IPv6LL Jul 9 10:04:32.550192 kernel: NET: Registered PF_ALG 
protocol family Jul 9 10:04:32.719393 systemd[1]: Started sshd@7-10.0.0.36:22-10.0.0.1:53666.service - OpenSSH per-connection server daemon (10.0.0.1:53666). Jul 9 10:04:32.760647 sshd[3530]: Accepted publickey for core from 10.0.0.1 port 53666 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:04:32.762544 sshd-session[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:04:32.767698 systemd-logind[1461]: New session 8 of user core. Jul 9 10:04:32.772911 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 9 10:04:32.871377 systemd-networkd[1418]: cilium_net: Gained IPv6LL Jul 9 10:04:32.908930 sshd[3549]: Connection closed by 10.0.0.1 port 53666 Jul 9 10:04:32.910716 sshd-session[3530]: pam_unix(sshd:session): session closed for user core Jul 9 10:04:32.913594 systemd[1]: sshd@7-10.0.0.36:22-10.0.0.1:53666.service: Deactivated successfully. Jul 9 10:04:32.915901 systemd[1]: session-8.scope: Deactivated successfully. Jul 9 10:04:32.917965 systemd-logind[1461]: Session 8 logged out. Waiting for processes to exit. Jul 9 10:04:32.918891 systemd-logind[1461]: Removed session 8. 
Jul 9 10:04:33.236593 systemd-networkd[1418]: lxc_health: Link UP Jul 9 10:04:33.246909 systemd-networkd[1418]: lxc_health: Gained carrier Jul 9 10:04:33.658243 kernel: eth0: renamed from tmp4a377 Jul 9 10:04:33.667184 kernel: eth0: renamed from tmp98557 Jul 9 10:04:33.676696 systemd-networkd[1418]: lxc3a08871fb176: Link UP Jul 9 10:04:33.676994 systemd-networkd[1418]: lxca83296e151a0: Link UP Jul 9 10:04:33.677316 systemd-networkd[1418]: lxca83296e151a0: Gained carrier Jul 9 10:04:33.677500 systemd-networkd[1418]: lxc3a08871fb176: Gained carrier Jul 9 10:04:33.831313 systemd-networkd[1418]: cilium_vxlan: Gained IPv6LL Jul 9 10:04:34.919355 systemd-networkd[1418]: lxc_health: Gained IPv6LL Jul 9 10:04:34.983283 systemd-networkd[1418]: lxc3a08871fb176: Gained IPv6LL Jul 9 10:04:35.559445 systemd-networkd[1418]: lxca83296e151a0: Gained IPv6LL Jul 9 10:04:37.118021 containerd[1488]: time="2025-07-09T10:04:37.117890747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 10:04:37.118021 containerd[1488]: time="2025-07-09T10:04:37.117964673Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 10:04:37.118021 containerd[1488]: time="2025-07-09T10:04:37.117979185Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:37.118779 containerd[1488]: time="2025-07-09T10:04:37.118076783Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:37.120298 containerd[1488]: time="2025-07-09T10:04:37.119867094Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 9 10:04:37.120298 containerd[1488]: time="2025-07-09T10:04:37.119944268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 9 10:04:37.120298 containerd[1488]: time="2025-07-09T10:04:37.119965152Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:37.120298 containerd[1488]: time="2025-07-09T10:04:37.120067130Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 9 10:04:37.140551 systemd[1]: Started cri-containerd-985572c15be9531247025d7d29c89cc2b7f22f0484dc7d2068eb38a63cbcda6f.scope - libcontainer container 985572c15be9531247025d7d29c89cc2b7f22f0484dc7d2068eb38a63cbcda6f. Jul 9 10:04:37.146448 systemd[1]: Started cri-containerd-4a377efce2a73ca09c0826568bc2d10b22d98277f8ffb2ff72d6107eb9bebecd.scope - libcontainer container 4a377efce2a73ca09c0826568bc2d10b22d98277f8ffb2ff72d6107eb9bebecd. 
Jul 9 10:04:37.154638 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:04:37.161492 systemd-resolved[1345]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 9 10:04:37.183116 containerd[1488]: time="2025-07-09T10:04:37.183052867Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-vm6mn,Uid:8f48fe70-2556-4d18-9c40-c0d7d8298e6a,Namespace:kube-system,Attempt:0,} returns sandbox id \"985572c15be9531247025d7d29c89cc2b7f22f0484dc7d2068eb38a63cbcda6f\"" Jul 9 10:04:37.186874 containerd[1488]: time="2025-07-09T10:04:37.186813954Z" level=info msg="CreateContainer within sandbox \"985572c15be9531247025d7d29c89cc2b7f22f0484dc7d2068eb38a63cbcda6f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 10:04:37.196470 containerd[1488]: time="2025-07-09T10:04:37.196425056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-8tqwx,Uid:dfed8d58-7ae0-4938-82a5-0ed7cd8c6e4f,Namespace:kube-system,Attempt:0,} returns sandbox id \"4a377efce2a73ca09c0826568bc2d10b22d98277f8ffb2ff72d6107eb9bebecd\"" Jul 9 10:04:37.200632 containerd[1488]: time="2025-07-09T10:04:37.200571424Z" level=info msg="CreateContainer within sandbox \"4a377efce2a73ca09c0826568bc2d10b22d98277f8ffb2ff72d6107eb9bebecd\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 9 10:04:37.213695 containerd[1488]: time="2025-07-09T10:04:37.213637111Z" level=info msg="CreateContainer within sandbox \"985572c15be9531247025d7d29c89cc2b7f22f0484dc7d2068eb38a63cbcda6f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"b2617c4693e50381e5923eba9dcbe4dbca9a7527b49cc8be97f9187bd2d4aaed\"" Jul 9 10:04:37.214264 containerd[1488]: time="2025-07-09T10:04:37.214207876Z" level=info msg="StartContainer for \"b2617c4693e50381e5923eba9dcbe4dbca9a7527b49cc8be97f9187bd2d4aaed\"" Jul 9 10:04:37.232257 containerd[1488]: 
time="2025-07-09T10:04:37.232200834Z" level=info msg="CreateContainer within sandbox \"4a377efce2a73ca09c0826568bc2d10b22d98277f8ffb2ff72d6107eb9bebecd\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f67f44a97b449e4e080968720885a362097eb6a83dd3219ffeb88ebf0f0bcd17\"" Jul 9 10:04:37.233199 containerd[1488]: time="2025-07-09T10:04:37.233164465Z" level=info msg="StartContainer for \"f67f44a97b449e4e080968720885a362097eb6a83dd3219ffeb88ebf0f0bcd17\"" Jul 9 10:04:37.245334 systemd[1]: Started cri-containerd-b2617c4693e50381e5923eba9dcbe4dbca9a7527b49cc8be97f9187bd2d4aaed.scope - libcontainer container b2617c4693e50381e5923eba9dcbe4dbca9a7527b49cc8be97f9187bd2d4aaed. Jul 9 10:04:37.266298 systemd[1]: Started cri-containerd-f67f44a97b449e4e080968720885a362097eb6a83dd3219ffeb88ebf0f0bcd17.scope - libcontainer container f67f44a97b449e4e080968720885a362097eb6a83dd3219ffeb88ebf0f0bcd17. Jul 9 10:04:37.292989 containerd[1488]: time="2025-07-09T10:04:37.292943986Z" level=info msg="StartContainer for \"b2617c4693e50381e5923eba9dcbe4dbca9a7527b49cc8be97f9187bd2d4aaed\" returns successfully" Jul 9 10:04:37.304846 containerd[1488]: time="2025-07-09T10:04:37.304724086Z" level=info msg="StartContainer for \"f67f44a97b449e4e080968720885a362097eb6a83dd3219ffeb88ebf0f0bcd17\" returns successfully" Jul 9 10:04:37.934300 kubelet[2571]: I0709 10:04:37.930429 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-vm6mn" podStartSLOduration=24.930405206 podStartE2EDuration="24.930405206s" podCreationTimestamp="2025-07-09 10:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:04:37.915840467 +0000 UTC m=+30.281896219" watchObservedRunningTime="2025-07-09 10:04:37.930405206 +0000 UTC m=+30.296460968" Jul 9 10:04:37.930693 systemd[1]: Started sshd@8-10.0.0.36:22-10.0.0.1:53676.service - OpenSSH per-connection 
server daemon (10.0.0.1:53676). Jul 9 10:04:37.951877 kubelet[2571]: I0709 10:04:37.951103 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-8tqwx" podStartSLOduration=24.951079681 podStartE2EDuration="24.951079681s" podCreationTimestamp="2025-07-09 10:04:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:04:37.93674385 +0000 UTC m=+30.302799633" watchObservedRunningTime="2025-07-09 10:04:37.951079681 +0000 UTC m=+30.317135443" Jul 9 10:04:37.983018 sshd[3988]: Accepted publickey for core from 10.0.0.1 port 53676 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:04:37.984815 sshd-session[3988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:04:37.989465 systemd-logind[1461]: New session 9 of user core. Jul 9 10:04:37.994315 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 9 10:04:38.125647 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount819094590.mount: Deactivated successfully. Jul 9 10:04:38.132987 sshd[3995]: Connection closed by 10.0.0.1 port 53676 Jul 9 10:04:38.133450 sshd-session[3988]: pam_unix(sshd:session): session closed for user core Jul 9 10:04:38.138406 systemd[1]: sshd@8-10.0.0.36:22-10.0.0.1:53676.service: Deactivated successfully. Jul 9 10:04:38.140586 systemd[1]: session-9.scope: Deactivated successfully. Jul 9 10:04:38.141584 systemd-logind[1461]: Session 9 logged out. Waiting for processes to exit. Jul 9 10:04:38.142775 systemd-logind[1461]: Removed session 9. Jul 9 10:04:43.149525 systemd[1]: Started sshd@9-10.0.0.36:22-10.0.0.1:49808.service - OpenSSH per-connection server daemon (10.0.0.1:49808). 
Jul 9 10:04:43.191485 sshd[4010]: Accepted publickey for core from 10.0.0.1 port 49808 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:04:43.193183 sshd-session[4010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:04:43.198074 systemd-logind[1461]: New session 10 of user core. Jul 9 10:04:43.209343 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 9 10:04:43.325121 sshd[4012]: Connection closed by 10.0.0.1 port 49808 Jul 9 10:04:43.325651 sshd-session[4010]: pam_unix(sshd:session): session closed for user core Jul 9 10:04:43.330787 systemd[1]: sshd@9-10.0.0.36:22-10.0.0.1:49808.service: Deactivated successfully. Jul 9 10:04:43.333977 systemd[1]: session-10.scope: Deactivated successfully. Jul 9 10:04:43.334869 systemd-logind[1461]: Session 10 logged out. Waiting for processes to exit. Jul 9 10:04:43.335856 systemd-logind[1461]: Removed session 10. Jul 9 10:04:48.345009 systemd[1]: Started sshd@10-10.0.0.36:22-10.0.0.1:53072.service - OpenSSH per-connection server daemon (10.0.0.1:53072). Jul 9 10:04:48.387845 sshd[4030]: Accepted publickey for core from 10.0.0.1 port 53072 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:04:48.389339 sshd-session[4030]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:04:48.393512 systemd-logind[1461]: New session 11 of user core. Jul 9 10:04:48.407302 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 9 10:04:48.523609 sshd[4032]: Connection closed by 10.0.0.1 port 53072 Jul 9 10:04:48.524073 sshd-session[4030]: pam_unix(sshd:session): session closed for user core Jul 9 10:04:48.536428 systemd[1]: sshd@10-10.0.0.36:22-10.0.0.1:53072.service: Deactivated successfully. Jul 9 10:04:48.539382 systemd[1]: session-11.scope: Deactivated successfully. Jul 9 10:04:48.541691 systemd-logind[1461]: Session 11 logged out. Waiting for processes to exit. 
Jul 9 10:04:48.546730 systemd[1]: Started sshd@11-10.0.0.36:22-10.0.0.1:53074.service - OpenSSH per-connection server daemon (10.0.0.1:53074). Jul 9 10:04:48.547598 systemd-logind[1461]: Removed session 11. Jul 9 10:04:48.582783 sshd[4045]: Accepted publickey for core from 10.0.0.1 port 53074 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:04:48.584439 sshd-session[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:04:48.589325 systemd-logind[1461]: New session 12 of user core. Jul 9 10:04:48.599301 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 9 10:04:48.754621 sshd[4048]: Connection closed by 10.0.0.1 port 53074 Jul 9 10:04:48.755172 sshd-session[4045]: pam_unix(sshd:session): session closed for user core Jul 9 10:04:48.766995 systemd[1]: sshd@11-10.0.0.36:22-10.0.0.1:53074.service: Deactivated successfully. Jul 9 10:04:48.770277 systemd[1]: session-12.scope: Deactivated successfully. Jul 9 10:04:48.772048 systemd-logind[1461]: Session 12 logged out. Waiting for processes to exit. Jul 9 10:04:48.785461 systemd[1]: Started sshd@12-10.0.0.36:22-10.0.0.1:53082.service - OpenSSH per-connection server daemon (10.0.0.1:53082). Jul 9 10:04:48.786251 systemd-logind[1461]: Removed session 12. Jul 9 10:04:48.821648 sshd[4058]: Accepted publickey for core from 10.0.0.1 port 53082 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:04:48.823538 sshd-session[4058]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:04:48.828330 systemd-logind[1461]: New session 13 of user core. Jul 9 10:04:48.837272 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 9 10:04:48.951245 sshd[4061]: Connection closed by 10.0.0.1 port 53082 Jul 9 10:04:48.951570 sshd-session[4058]: pam_unix(sshd:session): session closed for user core Jul 9 10:04:48.955889 systemd[1]: sshd@12-10.0.0.36:22-10.0.0.1:53082.service: Deactivated successfully. 
Jul 9 10:04:48.958405 systemd[1]: session-13.scope: Deactivated successfully. Jul 9 10:04:48.959290 systemd-logind[1461]: Session 13 logged out. Waiting for processes to exit. Jul 9 10:04:48.960216 systemd-logind[1461]: Removed session 13. Jul 9 10:04:53.968080 systemd[1]: Started sshd@13-10.0.0.36:22-10.0.0.1:53086.service - OpenSSH per-connection server daemon (10.0.0.1:53086). Jul 9 10:04:54.009358 sshd[4076]: Accepted publickey for core from 10.0.0.1 port 53086 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:04:54.010959 sshd-session[4076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:04:54.015550 systemd-logind[1461]: New session 14 of user core. Jul 9 10:04:54.029332 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 9 10:04:54.141718 sshd[4078]: Connection closed by 10.0.0.1 port 53086 Jul 9 10:04:54.142229 sshd-session[4076]: pam_unix(sshd:session): session closed for user core Jul 9 10:04:54.146437 systemd[1]: sshd@13-10.0.0.36:22-10.0.0.1:53086.service: Deactivated successfully. Jul 9 10:04:54.149182 systemd[1]: session-14.scope: Deactivated successfully. Jul 9 10:04:54.149948 systemd-logind[1461]: Session 14 logged out. Waiting for processes to exit. Jul 9 10:04:54.151013 systemd-logind[1461]: Removed session 14. Jul 9 10:04:59.158013 systemd[1]: Started sshd@14-10.0.0.36:22-10.0.0.1:50440.service - OpenSSH per-connection server daemon (10.0.0.1:50440). Jul 9 10:04:59.200512 sshd[4092]: Accepted publickey for core from 10.0.0.1 port 50440 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:04:59.202320 sshd-session[4092]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:04:59.207460 systemd-logind[1461]: New session 15 of user core. Jul 9 10:04:59.222333 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jul 9 10:04:59.342308 sshd[4094]: Connection closed by 10.0.0.1 port 50440 Jul 9 10:04:59.342736 sshd-session[4092]: pam_unix(sshd:session): session closed for user core Jul 9 10:04:59.347058 systemd[1]: sshd@14-10.0.0.36:22-10.0.0.1:50440.service: Deactivated successfully. Jul 9 10:04:59.349515 systemd[1]: session-15.scope: Deactivated successfully. Jul 9 10:04:59.350221 systemd-logind[1461]: Session 15 logged out. Waiting for processes to exit. Jul 9 10:04:59.351080 systemd-logind[1461]: Removed session 15. Jul 9 10:05:04.356329 systemd[1]: Started sshd@15-10.0.0.36:22-10.0.0.1:50446.service - OpenSSH per-connection server daemon (10.0.0.1:50446). Jul 9 10:05:04.397407 sshd[4108]: Accepted publickey for core from 10.0.0.1 port 50446 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:04.398980 sshd-session[4108]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:04.403917 systemd-logind[1461]: New session 16 of user core. Jul 9 10:05:04.409329 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 9 10:05:04.529785 sshd[4110]: Connection closed by 10.0.0.1 port 50446 Jul 9 10:05:04.530465 sshd-session[4108]: pam_unix(sshd:session): session closed for user core Jul 9 10:05:04.548248 systemd[1]: sshd@15-10.0.0.36:22-10.0.0.1:50446.service: Deactivated successfully. Jul 9 10:05:04.550824 systemd[1]: session-16.scope: Deactivated successfully. Jul 9 10:05:04.552928 systemd-logind[1461]: Session 16 logged out. Waiting for processes to exit. Jul 9 10:05:04.572918 systemd[1]: Started sshd@16-10.0.0.36:22-10.0.0.1:50452.service - OpenSSH per-connection server daemon (10.0.0.1:50452). Jul 9 10:05:04.574292 systemd-logind[1461]: Removed session 16. 
Jul 9 10:05:04.613236 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 50452 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:04.615041 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:04.619911 systemd-logind[1461]: New session 17 of user core. Jul 9 10:05:04.629283 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 9 10:05:05.897549 sshd[4126]: Connection closed by 10.0.0.1 port 50452 Jul 9 10:05:05.898087 sshd-session[4122]: pam_unix(sshd:session): session closed for user core Jul 9 10:05:05.907369 systemd[1]: sshd@16-10.0.0.36:22-10.0.0.1:50452.service: Deactivated successfully. Jul 9 10:05:05.909552 systemd[1]: session-17.scope: Deactivated successfully. Jul 9 10:05:05.911532 systemd-logind[1461]: Session 17 logged out. Waiting for processes to exit. Jul 9 10:05:05.922388 systemd[1]: Started sshd@17-10.0.0.36:22-10.0.0.1:50464.service - OpenSSH per-connection server daemon (10.0.0.1:50464). Jul 9 10:05:05.923341 systemd-logind[1461]: Removed session 17. Jul 9 10:05:05.961621 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 50464 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:05.963406 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:05.968420 systemd-logind[1461]: New session 18 of user core. Jul 9 10:05:05.978306 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 9 10:05:07.390510 sshd[4140]: Connection closed by 10.0.0.1 port 50464 Jul 9 10:05:07.390946 sshd-session[4137]: pam_unix(sshd:session): session closed for user core Jul 9 10:05:07.400312 systemd[1]: sshd@17-10.0.0.36:22-10.0.0.1:50464.service: Deactivated successfully. Jul 9 10:05:07.402574 systemd[1]: session-18.scope: Deactivated successfully. Jul 9 10:05:07.404351 systemd-logind[1461]: Session 18 logged out. Waiting for processes to exit. 
Jul 9 10:05:07.412809 systemd[1]: Started sshd@18-10.0.0.36:22-10.0.0.1:50466.service - OpenSSH per-connection server daemon (10.0.0.1:50466). Jul 9 10:05:07.414317 systemd-logind[1461]: Removed session 18. Jul 9 10:05:07.453674 sshd[4159]: Accepted publickey for core from 10.0.0.1 port 50466 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:07.455709 sshd-session[4159]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:07.461001 systemd-logind[1461]: New session 19 of user core. Jul 9 10:05:07.468504 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 9 10:05:07.706415 sshd[4162]: Connection closed by 10.0.0.1 port 50466 Jul 9 10:05:07.707591 sshd-session[4159]: pam_unix(sshd:session): session closed for user core Jul 9 10:05:07.719775 systemd[1]: sshd@18-10.0.0.36:22-10.0.0.1:50466.service: Deactivated successfully. Jul 9 10:05:07.721985 systemd[1]: session-19.scope: Deactivated successfully. Jul 9 10:05:07.722868 systemd-logind[1461]: Session 19 logged out. Waiting for processes to exit. Jul 9 10:05:07.735535 systemd[1]: Started sshd@19-10.0.0.36:22-10.0.0.1:50478.service - OpenSSH per-connection server daemon (10.0.0.1:50478). Jul 9 10:05:07.737001 systemd-logind[1461]: Removed session 19. Jul 9 10:05:07.771391 sshd[4172]: Accepted publickey for core from 10.0.0.1 port 50478 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:07.772788 sshd-session[4172]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:07.777940 systemd-logind[1461]: New session 20 of user core. Jul 9 10:05:07.783273 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 9 10:05:07.897550 sshd[4178]: Connection closed by 10.0.0.1 port 50478 Jul 9 10:05:07.897954 sshd-session[4172]: pam_unix(sshd:session): session closed for user core Jul 9 10:05:07.901948 systemd[1]: sshd@19-10.0.0.36:22-10.0.0.1:50478.service: Deactivated successfully. 
Jul 9 10:05:07.904224 systemd[1]: session-20.scope: Deactivated successfully. Jul 9 10:05:07.905040 systemd-logind[1461]: Session 20 logged out. Waiting for processes to exit. Jul 9 10:05:07.906175 systemd-logind[1461]: Removed session 20. Jul 9 10:05:12.915312 systemd[1]: Started sshd@20-10.0.0.36:22-10.0.0.1:54806.service - OpenSSH per-connection server daemon (10.0.0.1:54806). Jul 9 10:05:12.955598 sshd[4191]: Accepted publickey for core from 10.0.0.1 port 54806 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:12.957010 sshd-session[4191]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:12.961407 systemd-logind[1461]: New session 21 of user core. Jul 9 10:05:12.973315 systemd[1]: Started session-21.scope - Session 21 of User core. Jul 9 10:05:13.082272 sshd[4193]: Connection closed by 10.0.0.1 port 54806 Jul 9 10:05:13.082649 sshd-session[4191]: pam_unix(sshd:session): session closed for user core Jul 9 10:05:13.087546 systemd[1]: sshd@20-10.0.0.36:22-10.0.0.1:54806.service: Deactivated successfully. Jul 9 10:05:13.089799 systemd[1]: session-21.scope: Deactivated successfully. Jul 9 10:05:13.090669 systemd-logind[1461]: Session 21 logged out. Waiting for processes to exit. Jul 9 10:05:13.091600 systemd-logind[1461]: Removed session 21. Jul 9 10:05:18.095342 systemd[1]: Started sshd@21-10.0.0.36:22-10.0.0.1:48502.service - OpenSSH per-connection server daemon (10.0.0.1:48502). Jul 9 10:05:18.135085 sshd[4210]: Accepted publickey for core from 10.0.0.1 port 48502 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:18.136727 sshd-session[4210]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:18.141192 systemd-logind[1461]: New session 22 of user core. Jul 9 10:05:18.149313 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jul 9 10:05:18.258573 sshd[4212]: Connection closed by 10.0.0.1 port 48502 Jul 9 10:05:18.258969 sshd-session[4210]: pam_unix(sshd:session): session closed for user core Jul 9 10:05:18.263379 systemd[1]: sshd@21-10.0.0.36:22-10.0.0.1:48502.service: Deactivated successfully. Jul 9 10:05:18.265767 systemd[1]: session-22.scope: Deactivated successfully. Jul 9 10:05:18.266449 systemd-logind[1461]: Session 22 logged out. Waiting for processes to exit. Jul 9 10:05:18.267357 systemd-logind[1461]: Removed session 22. Jul 9 10:05:23.271535 systemd[1]: Started sshd@22-10.0.0.36:22-10.0.0.1:48504.service - OpenSSH per-connection server daemon (10.0.0.1:48504). Jul 9 10:05:23.312082 sshd[4225]: Accepted publickey for core from 10.0.0.1 port 48504 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:23.313572 sshd-session[4225]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:23.318590 systemd-logind[1461]: New session 23 of user core. Jul 9 10:05:23.326436 systemd[1]: Started session-23.scope - Session 23 of User core. Jul 9 10:05:23.440660 sshd[4227]: Connection closed by 10.0.0.1 port 48504 Jul 9 10:05:23.441120 sshd-session[4225]: pam_unix(sshd:session): session closed for user core Jul 9 10:05:23.445470 systemd[1]: sshd@22-10.0.0.36:22-10.0.0.1:48504.service: Deactivated successfully. Jul 9 10:05:23.447624 systemd[1]: session-23.scope: Deactivated successfully. Jul 9 10:05:23.448543 systemd-logind[1461]: Session 23 logged out. Waiting for processes to exit. Jul 9 10:05:23.449544 systemd-logind[1461]: Removed session 23. Jul 9 10:05:28.457586 systemd[1]: Started sshd@23-10.0.0.36:22-10.0.0.1:40364.service - OpenSSH per-connection server daemon (10.0.0.1:40364). 
Jul 9 10:05:28.502021 sshd[4240]: Accepted publickey for core from 10.0.0.1 port 40364 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:28.503696 sshd-session[4240]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:28.508180 systemd-logind[1461]: New session 24 of user core. Jul 9 10:05:28.519316 systemd[1]: Started session-24.scope - Session 24 of User core. Jul 9 10:05:28.643390 sshd[4242]: Connection closed by 10.0.0.1 port 40364 Jul 9 10:05:28.643830 sshd-session[4240]: pam_unix(sshd:session): session closed for user core Jul 9 10:05:28.653707 systemd[1]: sshd@23-10.0.0.36:22-10.0.0.1:40364.service: Deactivated successfully. Jul 9 10:05:28.656056 systemd[1]: session-24.scope: Deactivated successfully. Jul 9 10:05:28.657851 systemd-logind[1461]: Session 24 logged out. Waiting for processes to exit. Jul 9 10:05:28.669540 systemd[1]: Started sshd@24-10.0.0.36:22-10.0.0.1:40378.service - OpenSSH per-connection server daemon (10.0.0.1:40378). Jul 9 10:05:28.670554 systemd-logind[1461]: Removed session 24. Jul 9 10:05:28.707173 sshd[4255]: Accepted publickey for core from 10.0.0.1 port 40378 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM Jul 9 10:05:28.708839 sshd-session[4255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 9 10:05:28.713504 systemd-logind[1461]: New session 25 of user core. Jul 9 10:05:28.720306 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jul 9 10:05:30.074201 containerd[1488]: time="2025-07-09T10:05:30.074047229Z" level=info msg="StopContainer for \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\" with timeout 30 (s)" Jul 9 10:05:30.086802 containerd[1488]: time="2025-07-09T10:05:30.086750275Z" level=info msg="Stop container \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\" with signal terminated" Jul 9 10:05:30.101183 systemd[1]: cri-containerd-4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838.scope: Deactivated successfully. Jul 9 10:05:30.106854 containerd[1488]: time="2025-07-09T10:05:30.106299895Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 9 10:05:30.108361 containerd[1488]: time="2025-07-09T10:05:30.108302811Z" level=info msg="StopContainer for \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\" with timeout 2 (s)" Jul 9 10:05:30.108626 containerd[1488]: time="2025-07-09T10:05:30.108604054Z" level=info msg="Stop container \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\" with signal terminated" Jul 9 10:05:30.115395 systemd-networkd[1418]: lxc_health: Link DOWN Jul 9 10:05:30.115405 systemd-networkd[1418]: lxc_health: Lost carrier Jul 9 10:05:30.127389 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838-rootfs.mount: Deactivated successfully. Jul 9 10:05:30.136665 systemd[1]: cri-containerd-dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39.scope: Deactivated successfully. Jul 9 10:05:30.137031 systemd[1]: cri-containerd-dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39.scope: Consumed 6.996s CPU time, 124.8M memory peak, 136K read from disk, 13.3M written to disk. 
Jul 9 10:05:30.137673 containerd[1488]: time="2025-07-09T10:05:30.137592746Z" level=info msg="shim disconnected" id=4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838 namespace=k8s.io Jul 9 10:05:30.137673 containerd[1488]: time="2025-07-09T10:05:30.137666745Z" level=warning msg="cleaning up after shim disconnected" id=4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838 namespace=k8s.io Jul 9 10:05:30.137856 containerd[1488]: time="2025-07-09T10:05:30.137679649Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:05:30.157861 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39-rootfs.mount: Deactivated successfully. Jul 9 10:05:30.158932 containerd[1488]: time="2025-07-09T10:05:30.158888381Z" level=info msg="StopContainer for \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\" returns successfully" Jul 9 10:05:30.161061 containerd[1488]: time="2025-07-09T10:05:30.161007004Z" level=info msg="shim disconnected" id=dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39 namespace=k8s.io Jul 9 10:05:30.161061 containerd[1488]: time="2025-07-09T10:05:30.161051556Z" level=warning msg="cleaning up after shim disconnected" id=dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39 namespace=k8s.io Jul 9 10:05:30.161061 containerd[1488]: time="2025-07-09T10:05:30.161060182Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:05:30.162908 containerd[1488]: time="2025-07-09T10:05:30.162872322Z" level=info msg="StopPodSandbox for \"9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5\"" Jul 9 10:05:30.178039 containerd[1488]: time="2025-07-09T10:05:30.177982420Z" level=info msg="StopContainer for \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\" returns successfully" Jul 9 10:05:30.178778 containerd[1488]: time="2025-07-09T10:05:30.178555662Z" level=info msg="StopPodSandbox for 
\"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\"" Jul 9 10:05:30.178778 containerd[1488]: time="2025-07-09T10:05:30.178590557Z" level=info msg="Container to stop \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:05:30.178778 containerd[1488]: time="2025-07-09T10:05:30.178640139Z" level=info msg="Container to stop \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:05:30.178778 containerd[1488]: time="2025-07-09T10:05:30.178653775Z" level=info msg="Container to stop \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:05:30.178778 containerd[1488]: time="2025-07-09T10:05:30.178667260Z" level=info msg="Container to stop \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:05:30.178778 containerd[1488]: time="2025-07-09T10:05:30.178677449Z" level=info msg="Container to stop \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:05:30.180533 containerd[1488]: time="2025-07-09T10:05:30.162909631Z" level=info msg="Container to stop \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 9 10:05:30.180839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5-shm.mount: Deactivated successfully. Jul 9 10:05:30.183853 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5-shm.mount: Deactivated successfully. 
Jul 9 10:05:30.187474 systemd[1]: cri-containerd-bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5.scope: Deactivated successfully. Jul 9 10:05:30.189479 systemd[1]: cri-containerd-9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5.scope: Deactivated successfully. Jul 9 10:05:30.212818 containerd[1488]: time="2025-07-09T10:05:30.212748065Z" level=info msg="shim disconnected" id=9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5 namespace=k8s.io Jul 9 10:05:30.212818 containerd[1488]: time="2025-07-09T10:05:30.212812676Z" level=warning msg="cleaning up after shim disconnected" id=9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5 namespace=k8s.io Jul 9 10:05:30.212818 containerd[1488]: time="2025-07-09T10:05:30.212822184Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:05:30.213227 containerd[1488]: time="2025-07-09T10:05:30.212896713Z" level=info msg="shim disconnected" id=bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5 namespace=k8s.io Jul 9 10:05:30.213227 containerd[1488]: time="2025-07-09T10:05:30.212954942Z" level=warning msg="cleaning up after shim disconnected" id=bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5 namespace=k8s.io Jul 9 10:05:30.213227 containerd[1488]: time="2025-07-09T10:05:30.212963788Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 9 10:05:30.229748 containerd[1488]: time="2025-07-09T10:05:30.229701099Z" level=info msg="TearDown network for sandbox \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" successfully" Jul 9 10:05:30.229748 containerd[1488]: time="2025-07-09T10:05:30.229735764Z" level=info msg="StopPodSandbox for \"bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5\" returns successfully" Jul 9 10:05:30.231501 containerd[1488]: time="2025-07-09T10:05:30.231474105Z" level=info msg="TearDown network for sandbox \"9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5\" 
successfully" Jul 9 10:05:30.231501 containerd[1488]: time="2025-07-09T10:05:30.231499161Z" level=info msg="StopPodSandbox for \"9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5\" returns successfully" Jul 9 10:05:30.337740 kubelet[2571]: I0709 10:05:30.337576 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-lib-modules\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.337740 kubelet[2571]: I0709 10:05:30.337641 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-etc-cni-netd\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.337740 kubelet[2571]: I0709 10:05:30.337662 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-bpf-maps\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.337740 kubelet[2571]: I0709 10:05:30.337689 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7eed657-f6f5-4b6e-a39d-3c671510523a-cilium-config-path\") pod \"d7eed657-f6f5-4b6e-a39d-3c671510523a\" (UID: \"d7eed657-f6f5-4b6e-a39d-3c671510523a\") " Jul 9 10:05:30.337740 kubelet[2571]: I0709 10:05:30.337714 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-xtables-lock\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.337740 kubelet[2571]: I0709 
10:05:30.337712 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.338371 kubelet[2571]: I0709 10:05:30.337740 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-config-path\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338371 kubelet[2571]: I0709 10:05:30.337761 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-hostproc\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338371 kubelet[2571]: I0709 10:05:30.337770 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.338371 kubelet[2571]: I0709 10:05:30.337783 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0427ac22-349b-4e16-8131-7c94ffd506d2-hubble-tls\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338371 kubelet[2571]: I0709 10:05:30.337788 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.338371 kubelet[2571]: I0709 10:05:30.337803 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-run\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338520 kubelet[2571]: I0709 10:05:30.337806 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-hostproc" (OuterVolumeSpecName: "hostproc") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.338520 kubelet[2571]: I0709 10:05:30.337823 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cni-path\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338520 kubelet[2571]: I0709 10:05:30.337843 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7q5fp\" (UniqueName: \"kubernetes.io/projected/d7eed657-f6f5-4b6e-a39d-3c671510523a-kube-api-access-7q5fp\") pod \"d7eed657-f6f5-4b6e-a39d-3c671510523a\" (UID: \"d7eed657-f6f5-4b6e-a39d-3c671510523a\") " Jul 9 10:05:30.338520 kubelet[2571]: I0709 10:05:30.337863 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0427ac22-349b-4e16-8131-7c94ffd506d2-clustermesh-secrets\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338520 kubelet[2571]: I0709 10:05:30.337882 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-host-proc-sys-net\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338520 kubelet[2571]: I0709 10:05:30.337899 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-cgroup\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338656 kubelet[2571]: I0709 10:05:30.337919 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gdkqx\" (UniqueName: 
\"kubernetes.io/projected/0427ac22-349b-4e16-8131-7c94ffd506d2-kube-api-access-gdkqx\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338656 kubelet[2571]: I0709 10:05:30.337939 2571 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-host-proc-sys-kernel\") pod \"0427ac22-349b-4e16-8131-7c94ffd506d2\" (UID: \"0427ac22-349b-4e16-8131-7c94ffd506d2\") " Jul 9 10:05:30.338656 kubelet[2571]: I0709 10:05:30.337980 2571 reconciler_common.go:299] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-hostproc\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.338656 kubelet[2571]: I0709 10:05:30.337995 2571 reconciler_common.go:299] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.338656 kubelet[2571]: I0709 10:05:30.338007 2571 reconciler_common.go:299] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-lib-modules\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.338656 kubelet[2571]: I0709 10:05:30.338019 2571 reconciler_common.go:299] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.338656 kubelet[2571]: I0709 10:05:30.338049 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.338808 kubelet[2571]: I0709 10:05:30.338672 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.342523 kubelet[2571]: I0709 10:05:30.341252 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.342523 kubelet[2571]: I0709 10:05:30.341282 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.342523 kubelet[2571]: I0709 10:05:30.341380 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cni-path" (OuterVolumeSpecName: "cni-path") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.342523 kubelet[2571]: I0709 10:05:30.341410 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jul 9 10:05:30.342523 kubelet[2571]: I0709 10:05:30.341835 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7eed657-f6f5-4b6e-a39d-3c671510523a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d7eed657-f6f5-4b6e-a39d-3c671510523a" (UID: "d7eed657-f6f5-4b6e-a39d-3c671510523a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 10:05:30.342804 kubelet[2571]: I0709 10:05:30.342477 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 9 10:05:30.343892 kubelet[2571]: I0709 10:05:30.343800 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0427ac22-349b-4e16-8131-7c94ffd506d2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 10:05:30.344104 kubelet[2571]: I0709 10:05:30.344065 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0427ac22-349b-4e16-8131-7c94ffd506d2-kube-api-access-gdkqx" (OuterVolumeSpecName: "kube-api-access-gdkqx") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "kube-api-access-gdkqx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 10:05:30.344238 kubelet[2571]: I0709 10:05:30.344206 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0427ac22-349b-4e16-8131-7c94ffd506d2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0427ac22-349b-4e16-8131-7c94ffd506d2" (UID: "0427ac22-349b-4e16-8131-7c94ffd506d2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 9 10:05:30.344578 kubelet[2571]: I0709 10:05:30.344547 2571 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7eed657-f6f5-4b6e-a39d-3c671510523a-kube-api-access-7q5fp" (OuterVolumeSpecName: "kube-api-access-7q5fp") pod "d7eed657-f6f5-4b6e-a39d-3c671510523a" (UID: "d7eed657-f6f5-4b6e-a39d-3c671510523a"). InnerVolumeSpecName "kube-api-access-7q5fp". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 9 10:05:30.439247 kubelet[2571]: I0709 10:05:30.439190 2571 reconciler_common.go:299] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cni-path\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439247 kubelet[2571]: I0709 10:05:30.439228 2571 reconciler_common.go:299] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0427ac22-349b-4e16-8131-7c94ffd506d2-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439247 kubelet[2571]: I0709 10:05:30.439238 2571 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439247 kubelet[2571]: I0709 10:05:30.439247 2571 reconciler_common.go:299] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-cgroup\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439247 kubelet[2571]: I0709 10:05:30.439256 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gdkqx\" (UniqueName: \"kubernetes.io/projected/0427ac22-349b-4e16-8131-7c94ffd506d2-kube-api-access-gdkqx\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439534 kubelet[2571]: I0709 10:05:30.439269 2571 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7q5fp\" (UniqueName: \"kubernetes.io/projected/d7eed657-f6f5-4b6e-a39d-3c671510523a-kube-api-access-7q5fp\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439534 kubelet[2571]: I0709 10:05:30.439279 2571 reconciler_common.go:299] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\"" Jul 9 
10:05:30.439534 kubelet[2571]: I0709 10:05:30.439287 2571 reconciler_common.go:299] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-etc-cni-netd\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439534 kubelet[2571]: I0709 10:05:30.439297 2571 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439534 kubelet[2571]: I0709 10:05:30.439306 2571 reconciler_common.go:299] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d7eed657-f6f5-4b6e-a39d-3c671510523a-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439534 kubelet[2571]: I0709 10:05:30.439313 2571 reconciler_common.go:299] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0427ac22-349b-4e16-8131-7c94ffd506d2-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.439534 kubelet[2571]: I0709 10:05:30.439321 2571 reconciler_common.go:299] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0427ac22-349b-4e16-8131-7c94ffd506d2-cilium-run\") on node \"localhost\" DevicePath \"\"" Jul 9 10:05:30.961414 kubelet[2571]: I0709 10:05:30.961374 2571 scope.go:117] "RemoveContainer" containerID="4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838" Jul 9 10:05:30.968004 systemd[1]: Removed slice kubepods-besteffort-podd7eed657_f6f5_4b6e_a39d_3c671510523a.slice - libcontainer container kubepods-besteffort-podd7eed657_f6f5_4b6e_a39d_3c671510523a.slice. 
Jul 9 10:05:30.969439 containerd[1488]: time="2025-07-09T10:05:30.969247031Z" level=info msg="RemoveContainer for \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\"" Jul 9 10:05:30.978076 systemd[1]: Removed slice kubepods-burstable-pod0427ac22_349b_4e16_8131_7c94ffd506d2.slice - libcontainer container kubepods-burstable-pod0427ac22_349b_4e16_8131_7c94ffd506d2.slice. Jul 9 10:05:30.978242 systemd[1]: kubepods-burstable-pod0427ac22_349b_4e16_8131_7c94ffd506d2.slice: Consumed 7.115s CPU time, 125.1M memory peak, 160K read from disk, 13.3M written to disk. Jul 9 10:05:30.991363 containerd[1488]: time="2025-07-09T10:05:30.991261631Z" level=info msg="RemoveContainer for \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\" returns successfully" Jul 9 10:05:30.991823 kubelet[2571]: I0709 10:05:30.991619 2571 scope.go:117] "RemoveContainer" containerID="4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838" Jul 9 10:05:30.991944 containerd[1488]: time="2025-07-09T10:05:30.991901938Z" level=error msg="ContainerStatus for \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\": not found" Jul 9 10:05:30.999297 kubelet[2571]: E0709 10:05:30.999261 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\": not found" containerID="4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838" Jul 9 10:05:30.999440 kubelet[2571]: I0709 10:05:30.999302 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838"} err="failed to get container status 
\"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\": rpc error: code = NotFound desc = an error occurred when try to find container \"4554e335b82f96e95faeb82da526110125c14c8b3f9c79529421d413086c7838\": not found" Jul 9 10:05:30.999440 kubelet[2571]: I0709 10:05:30.999399 2571 scope.go:117] "RemoveContainer" containerID="dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39" Jul 9 10:05:31.001144 containerd[1488]: time="2025-07-09T10:05:31.000801913Z" level=info msg="RemoveContainer for \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\"" Jul 9 10:05:31.005399 containerd[1488]: time="2025-07-09T10:05:31.005318913Z" level=info msg="RemoveContainer for \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\" returns successfully" Jul 9 10:05:31.005698 kubelet[2571]: I0709 10:05:31.005644 2571 scope.go:117] "RemoveContainer" containerID="a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6" Jul 9 10:05:31.007097 containerd[1488]: time="2025-07-09T10:05:31.007071186Z" level=info msg="RemoveContainer for \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\"" Jul 9 10:05:31.010929 containerd[1488]: time="2025-07-09T10:05:31.010879339Z" level=info msg="RemoveContainer for \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\" returns successfully" Jul 9 10:05:31.011125 kubelet[2571]: I0709 10:05:31.011068 2571 scope.go:117] "RemoveContainer" containerID="145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5" Jul 9 10:05:31.012453 containerd[1488]: time="2025-07-09T10:05:31.012422260Z" level=info msg="RemoveContainer for \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\"" Jul 9 10:05:31.015881 containerd[1488]: time="2025-07-09T10:05:31.015828741Z" level=info msg="RemoveContainer for \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\" returns successfully" Jul 9 10:05:31.016068 kubelet[2571]: I0709 10:05:31.016001 2571 scope.go:117] 
"RemoveContainer" containerID="a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c" Jul 9 10:05:31.017194 containerd[1488]: time="2025-07-09T10:05:31.017127223Z" level=info msg="RemoveContainer for \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\"" Jul 9 10:05:31.020526 containerd[1488]: time="2025-07-09T10:05:31.020495422Z" level=info msg="RemoveContainer for \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\" returns successfully" Jul 9 10:05:31.020975 kubelet[2571]: I0709 10:05:31.020711 2571 scope.go:117] "RemoveContainer" containerID="5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e" Jul 9 10:05:31.021853 containerd[1488]: time="2025-07-09T10:05:31.021804354Z" level=info msg="RemoveContainer for \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\"" Jul 9 10:05:31.025305 containerd[1488]: time="2025-07-09T10:05:31.025257723Z" level=info msg="RemoveContainer for \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\" returns successfully" Jul 9 10:05:31.025474 kubelet[2571]: I0709 10:05:31.025427 2571 scope.go:117] "RemoveContainer" containerID="dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39" Jul 9 10:05:31.025699 containerd[1488]: time="2025-07-09T10:05:31.025657873Z" level=error msg="ContainerStatus for \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\": not found" Jul 9 10:05:31.025825 kubelet[2571]: E0709 10:05:31.025792 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\": not found" containerID="dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39" Jul 9 10:05:31.025878 kubelet[2571]: I0709 
10:05:31.025828 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39"} err="failed to get container status \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\": rpc error: code = NotFound desc = an error occurred when try to find container \"dd376dd6871e585dedfcb3dbc7288a7b8c8f8e1a3991d0956566388bfdcafd39\": not found" Jul 9 10:05:31.025878 kubelet[2571]: I0709 10:05:31.025857 2571 scope.go:117] "RemoveContainer" containerID="a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6" Jul 9 10:05:31.026116 containerd[1488]: time="2025-07-09T10:05:31.026057110Z" level=error msg="ContainerStatus for \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\": not found" Jul 9 10:05:31.026260 kubelet[2571]: E0709 10:05:31.026238 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\": not found" containerID="a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6" Jul 9 10:05:31.026314 kubelet[2571]: I0709 10:05:31.026263 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6"} err="failed to get container status \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\": rpc error: code = NotFound desc = an error occurred when try to find container \"a4725d4a3ccef7e2a049d1fca438ae257c5d9a0169fc239ed624d0365c2ca8d6\": not found" Jul 9 10:05:31.026314 kubelet[2571]: I0709 10:05:31.026280 2571 scope.go:117] "RemoveContainer" 
containerID="145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5"
Jul 9 10:05:31.026478 containerd[1488]: time="2025-07-09T10:05:31.026439828Z" level=error msg="ContainerStatus for \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\": not found"
Jul 9 10:05:31.026585 kubelet[2571]: E0709 10:05:31.026557 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\": not found" containerID="145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5"
Jul 9 10:05:31.026628 kubelet[2571]: I0709 10:05:31.026580 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5"} err="failed to get container status \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\": rpc error: code = NotFound desc = an error occurred when try to find container \"145a437b4007092f39c9e9ac7a08ad76b2e9644ef68a25796ba2e9a0bfa8f4c5\": not found"
Jul 9 10:05:31.026628 kubelet[2571]: I0709 10:05:31.026595 2571 scope.go:117] "RemoveContainer" containerID="a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c"
Jul 9 10:05:31.026816 containerd[1488]: time="2025-07-09T10:05:31.026777360Z" level=error msg="ContainerStatus for \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\": not found"
Jul 9 10:05:31.026962 kubelet[2571]: E0709 10:05:31.026933 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\": not found" containerID="a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c"
Jul 9 10:05:31.027011 kubelet[2571]: I0709 10:05:31.026959 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c"} err="failed to get container status \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\": rpc error: code = NotFound desc = an error occurred when try to find container \"a03947aae8fcd42e3e3212866e608d6fa924d9466b75db94d8ebed0a8d80794c\": not found"
Jul 9 10:05:31.027011 kubelet[2571]: I0709 10:05:31.026978 2571 scope.go:117] "RemoveContainer" containerID="5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e"
Jul 9 10:05:31.027210 containerd[1488]: time="2025-07-09T10:05:31.027172760Z" level=error msg="ContainerStatus for \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\": not found"
Jul 9 10:05:31.027335 kubelet[2571]: E0709 10:05:31.027295 2571 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\": not found" containerID="5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e"
Jul 9 10:05:31.027383 kubelet[2571]: I0709 10:05:31.027336 2571 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e"} err="failed to get container status \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\": rpc error: code = NotFound desc = an error occurred when try to find container \"5f8a35f5b33bb5e9c181dc07ec43972276cfb91f2f9036031c205e30259dd72e\": not found"
Jul 9 10:05:31.082697 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d1f5e02d069ca91a66505ffef543d13c4374fc2a03a088d89e1fa937cec26f5-rootfs.mount: Deactivated successfully.
Jul 9 10:05:31.082841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bbc3be2d2f0728ff017b7bc4863e1e464f1a76b1d1d41f7297546a7f63eb9bf5-rootfs.mount: Deactivated successfully.
Jul 9 10:05:31.082946 systemd[1]: var-lib-kubelet-pods-d7eed657\x2df6f5\x2d4b6e\x2da39d\x2d3c671510523a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d7q5fp.mount: Deactivated successfully.
Jul 9 10:05:31.083074 systemd[1]: var-lib-kubelet-pods-0427ac22\x2d349b\x2d4e16\x2d8131\x2d7c94ffd506d2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dgdkqx.mount: Deactivated successfully.
Jul 9 10:05:31.083216 systemd[1]: var-lib-kubelet-pods-0427ac22\x2d349b\x2d4e16\x2d8131\x2d7c94ffd506d2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 9 10:05:31.083344 systemd[1]: var-lib-kubelet-pods-0427ac22\x2d349b\x2d4e16\x2d8131\x2d7c94ffd506d2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 9 10:05:31.736727 kubelet[2571]: I0709 10:05:31.736636 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0427ac22-349b-4e16-8131-7c94ffd506d2" path="/var/lib/kubelet/pods/0427ac22-349b-4e16-8131-7c94ffd506d2/volumes"
Jul 9 10:05:31.737849 kubelet[2571]: I0709 10:05:31.737821 2571 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7eed657-f6f5-4b6e-a39d-3c671510523a" path="/var/lib/kubelet/pods/d7eed657-f6f5-4b6e-a39d-3c671510523a/volumes"
Jul 9 10:05:32.028091 sshd[4258]: Connection closed by 10.0.0.1 port 40378
Jul 9 10:05:32.028969 sshd-session[4255]: pam_unix(sshd:session): session closed for user core
Jul 9 10:05:32.037926 systemd[1]: sshd@24-10.0.0.36:22-10.0.0.1:40378.service: Deactivated successfully.
Jul 9 10:05:32.040092 systemd[1]: session-25.scope: Deactivated successfully.
Jul 9 10:05:32.041951 systemd-logind[1461]: Session 25 logged out. Waiting for processes to exit.
Jul 9 10:05:32.048439 systemd[1]: Started sshd@25-10.0.0.36:22-10.0.0.1:40394.service - OpenSSH per-connection server daemon (10.0.0.1:40394).
Jul 9 10:05:32.049442 systemd-logind[1461]: Removed session 25.
Jul 9 10:05:32.085516 sshd[4416]: Accepted publickey for core from 10.0.0.1 port 40394 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM
Jul 9 10:05:32.086967 sshd-session[4416]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:05:32.091547 systemd-logind[1461]: New session 26 of user core.
Jul 9 10:05:32.097273 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 9 10:05:32.656361 sshd[4419]: Connection closed by 10.0.0.1 port 40394
Jul 9 10:05:32.658207 sshd-session[4416]: pam_unix(sshd:session): session closed for user core
Jul 9 10:05:32.674356 systemd[1]: sshd@25-10.0.0.36:22-10.0.0.1:40394.service: Deactivated successfully.
Jul 9 10:05:32.679399 systemd[1]: session-26.scope: Deactivated successfully.
Jul 9 10:05:32.681851 kubelet[2571]: I0709 10:05:32.681623 2571 memory_manager.go:355] "RemoveStaleState removing state" podUID="d7eed657-f6f5-4b6e-a39d-3c671510523a" containerName="cilium-operator"
Jul 9 10:05:32.681851 kubelet[2571]: I0709 10:05:32.681659 2571 memory_manager.go:355] "RemoveStaleState removing state" podUID="0427ac22-349b-4e16-8131-7c94ffd506d2" containerName="cilium-agent"
Jul 9 10:05:32.685540 systemd-logind[1461]: Session 26 logged out. Waiting for processes to exit.
Jul 9 10:05:32.692480 systemd[1]: Started sshd@26-10.0.0.36:22-10.0.0.1:40404.service - OpenSSH per-connection server daemon (10.0.0.1:40404).
Jul 9 10:05:32.703102 systemd-logind[1461]: Removed session 26.
Jul 9 10:05:32.715114 systemd[1]: Created slice kubepods-burstable-podfd2d5bc2_48cc_452d_ae5d_f1e9f12afb6e.slice - libcontainer container kubepods-burstable-podfd2d5bc2_48cc_452d_ae5d_f1e9f12afb6e.slice.
Jul 9 10:05:32.737863 sshd[4430]: Accepted publickey for core from 10.0.0.1 port 40404 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM
Jul 9 10:05:32.739630 sshd-session[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:05:32.747198 systemd-logind[1461]: New session 27 of user core.
Jul 9 10:05:32.754188 kubelet[2571]: I0709 10:05:32.754081 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-hostproc\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.754385 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 9 10:05:32.754601 kubelet[2571]: I0709 10:05:32.754144 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-lib-modules\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.754673 kubelet[2571]: I0709 10:05:32.754622 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-cilium-run\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.754673 kubelet[2571]: I0709 10:05:32.754662 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-host-proc-sys-net\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755057 kubelet[2571]: I0709 10:05:32.754688 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjcb9\" (UniqueName: \"kubernetes.io/projected/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-kube-api-access-wjcb9\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755057 kubelet[2571]: I0709 10:05:32.754714 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-host-proc-sys-kernel\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755057 kubelet[2571]: I0709 10:05:32.754734 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-bpf-maps\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755057 kubelet[2571]: I0709 10:05:32.754798 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-xtables-lock\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755057 kubelet[2571]: I0709 10:05:32.754839 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-cilium-config-path\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755194 kubelet[2571]: I0709 10:05:32.754881 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-cilium-ipsec-secrets\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755194 kubelet[2571]: I0709 10:05:32.754898 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-hubble-tls\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755194 kubelet[2571]: I0709 10:05:32.754913 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-cni-path\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755194 kubelet[2571]: I0709 10:05:32.754932 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-cilium-cgroup\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755194 kubelet[2571]: I0709 10:05:32.754946 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-etc-cni-netd\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.755194 kubelet[2571]: I0709 10:05:32.754964 2571 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e-clustermesh-secrets\") pod \"cilium-k69l4\" (UID: \"fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e\") " pod="kube-system/cilium-k69l4"
Jul 9 10:05:32.808144 sshd[4433]: Connection closed by 10.0.0.1 port 40404
Jul 9 10:05:32.808594 sshd-session[4430]: pam_unix(sshd:session): session closed for user core
Jul 9 10:05:32.810447 kubelet[2571]: E0709 10:05:32.810376 2571 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 9 10:05:32.823570 systemd[1]: sshd@26-10.0.0.36:22-10.0.0.1:40404.service: Deactivated successfully.
Jul 9 10:05:32.826800 systemd[1]: session-27.scope: Deactivated successfully.
Jul 9 10:05:32.829112 systemd-logind[1461]: Session 27 logged out. Waiting for processes to exit.
Jul 9 10:05:32.838763 systemd[1]: Started sshd@27-10.0.0.36:22-10.0.0.1:40418.service - OpenSSH per-connection server daemon (10.0.0.1:40418).
Jul 9 10:05:32.840200 systemd-logind[1461]: Removed session 27.
Jul 9 10:05:32.888685 sshd[4439]: Accepted publickey for core from 10.0.0.1 port 40418 ssh2: RSA SHA256:reH7imFaG11BTrq9feyiNgkDqTgB0gigrS0G7NIp5EM
Jul 9 10:05:32.890712 sshd-session[4439]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 9 10:05:32.895244 systemd-logind[1461]: New session 28 of user core.
Jul 9 10:05:32.905436 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 9 10:05:33.020199 containerd[1488]: time="2025-07-09T10:05:33.020018841Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k69l4,Uid:fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e,Namespace:kube-system,Attempt:0,}"
Jul 9 10:05:33.054237 containerd[1488]: time="2025-07-09T10:05:33.053997493Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 9 10:05:33.054832 containerd[1488]: time="2025-07-09T10:05:33.054767971Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 9 10:05:33.054832 containerd[1488]: time="2025-07-09T10:05:33.054791375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 10:05:33.054958 containerd[1488]: time="2025-07-09T10:05:33.054888527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 9 10:05:33.076425 systemd[1]: Started cri-containerd-d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec.scope - libcontainer container d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec.
Jul 9 10:05:33.103224 containerd[1488]: time="2025-07-09T10:05:33.103174125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-k69l4,Uid:fd2d5bc2-48cc-452d-ae5d-f1e9f12afb6e,Namespace:kube-system,Attempt:0,} returns sandbox id \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\""
Jul 9 10:05:33.106263 containerd[1488]: time="2025-07-09T10:05:33.106210962Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 9 10:05:33.119771 containerd[1488]: time="2025-07-09T10:05:33.119723265Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"52c23ceea7da1bceece8664df17130affb7149364ce13cffe7f12418edfa01ca\""
Jul 9 10:05:33.120217 containerd[1488]: time="2025-07-09T10:05:33.120183760Z" level=info msg="StartContainer for \"52c23ceea7da1bceece8664df17130affb7149364ce13cffe7f12418edfa01ca\""
Jul 9 10:05:33.148418 systemd[1]: Started cri-containerd-52c23ceea7da1bceece8664df17130affb7149364ce13cffe7f12418edfa01ca.scope - libcontainer container 52c23ceea7da1bceece8664df17130affb7149364ce13cffe7f12418edfa01ca.
Jul 9 10:05:33.179066 containerd[1488]: time="2025-07-09T10:05:33.179022870Z" level=info msg="StartContainer for \"52c23ceea7da1bceece8664df17130affb7149364ce13cffe7f12418edfa01ca\" returns successfully"
Jul 9 10:05:33.189904 systemd[1]: cri-containerd-52c23ceea7da1bceece8664df17130affb7149364ce13cffe7f12418edfa01ca.scope: Deactivated successfully.
Jul 9 10:05:33.223385 containerd[1488]: time="2025-07-09T10:05:33.223313280Z" level=info msg="shim disconnected" id=52c23ceea7da1bceece8664df17130affb7149364ce13cffe7f12418edfa01ca namespace=k8s.io
Jul 9 10:05:33.223385 containerd[1488]: time="2025-07-09T10:05:33.223373584Z" level=warning msg="cleaning up after shim disconnected" id=52c23ceea7da1bceece8664df17130affb7149364ce13cffe7f12418edfa01ca namespace=k8s.io
Jul 9 10:05:33.223385 containerd[1488]: time="2025-07-09T10:05:33.223382140Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 10:05:33.978973 containerd[1488]: time="2025-07-09T10:05:33.978912879Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 9 10:05:33.991988 containerd[1488]: time="2025-07-09T10:05:33.991918209Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"85ef4af6565e2683b5d8c298d8c394d66d09fe5cdbfb4932a203284497334a3b\""
Jul 9 10:05:33.992673 containerd[1488]: time="2025-07-09T10:05:33.992529779Z" level=info msg="StartContainer for \"85ef4af6565e2683b5d8c298d8c394d66d09fe5cdbfb4932a203284497334a3b\""
Jul 9 10:05:34.029318 systemd[1]: Started cri-containerd-85ef4af6565e2683b5d8c298d8c394d66d09fe5cdbfb4932a203284497334a3b.scope - libcontainer container 85ef4af6565e2683b5d8c298d8c394d66d09fe5cdbfb4932a203284497334a3b.
Jul 9 10:05:34.054698 containerd[1488]: time="2025-07-09T10:05:34.054637463Z" level=info msg="StartContainer for \"85ef4af6565e2683b5d8c298d8c394d66d09fe5cdbfb4932a203284497334a3b\" returns successfully"
Jul 9 10:05:34.062680 systemd[1]: cri-containerd-85ef4af6565e2683b5d8c298d8c394d66d09fe5cdbfb4932a203284497334a3b.scope: Deactivated successfully.
Jul 9 10:05:34.088108 containerd[1488]: time="2025-07-09T10:05:34.088040608Z" level=info msg="shim disconnected" id=85ef4af6565e2683b5d8c298d8c394d66d09fe5cdbfb4932a203284497334a3b namespace=k8s.io
Jul 9 10:05:34.088108 containerd[1488]: time="2025-07-09T10:05:34.088097787Z" level=warning msg="cleaning up after shim disconnected" id=85ef4af6565e2683b5d8c298d8c394d66d09fe5cdbfb4932a203284497334a3b namespace=k8s.io
Jul 9 10:05:34.088108 containerd[1488]: time="2025-07-09T10:05:34.088106092Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 10:05:34.864048 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-85ef4af6565e2683b5d8c298d8c394d66d09fe5cdbfb4932a203284497334a3b-rootfs.mount: Deactivated successfully.
Jul 9 10:05:34.982579 containerd[1488]: time="2025-07-09T10:05:34.982496607Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 9 10:05:35.014284 containerd[1488]: time="2025-07-09T10:05:35.014216652Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"d697245fc0ac21f1d2ca9d5755855f5152bdd936b09a0b7ace67c7525de96033\""
Jul 9 10:05:35.014774 containerd[1488]: time="2025-07-09T10:05:35.014751319Z" level=info msg="StartContainer for \"d697245fc0ac21f1d2ca9d5755855f5152bdd936b09a0b7ace67c7525de96033\""
Jul 9 10:05:35.056442 systemd[1]: Started cri-containerd-d697245fc0ac21f1d2ca9d5755855f5152bdd936b09a0b7ace67c7525de96033.scope - libcontainer container d697245fc0ac21f1d2ca9d5755855f5152bdd936b09a0b7ace67c7525de96033.
Jul 9 10:05:35.090731 systemd[1]: cri-containerd-d697245fc0ac21f1d2ca9d5755855f5152bdd936b09a0b7ace67c7525de96033.scope: Deactivated successfully.
Jul 9 10:05:35.120442 containerd[1488]: time="2025-07-09T10:05:35.120292682Z" level=info msg="StartContainer for \"d697245fc0ac21f1d2ca9d5755855f5152bdd936b09a0b7ace67c7525de96033\" returns successfully"
Jul 9 10:05:35.157496 containerd[1488]: time="2025-07-09T10:05:35.157401234Z" level=info msg="shim disconnected" id=d697245fc0ac21f1d2ca9d5755855f5152bdd936b09a0b7ace67c7525de96033 namespace=k8s.io
Jul 9 10:05:35.157496 containerd[1488]: time="2025-07-09T10:05:35.157461658Z" level=warning msg="cleaning up after shim disconnected" id=d697245fc0ac21f1d2ca9d5755855f5152bdd936b09a0b7ace67c7525de96033 namespace=k8s.io
Jul 9 10:05:35.157496 containerd[1488]: time="2025-07-09T10:05:35.157470855Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 10:05:35.864125 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d697245fc0ac21f1d2ca9d5755855f5152bdd936b09a0b7ace67c7525de96033-rootfs.mount: Deactivated successfully.
Jul 9 10:05:35.986056 containerd[1488]: time="2025-07-09T10:05:35.986007592Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 9 10:05:36.002738 containerd[1488]: time="2025-07-09T10:05:36.002683616Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"02981ffc89559adf05917cbacb6dcdb5af3c16a554c77b2b60ab456eba49a41c\""
Jul 9 10:05:36.005195 containerd[1488]: time="2025-07-09T10:05:36.003391001Z" level=info msg="StartContainer for \"02981ffc89559adf05917cbacb6dcdb5af3c16a554c77b2b60ab456eba49a41c\""
Jul 9 10:05:36.056288 systemd[1]: Started cri-containerd-02981ffc89559adf05917cbacb6dcdb5af3c16a554c77b2b60ab456eba49a41c.scope - libcontainer container 02981ffc89559adf05917cbacb6dcdb5af3c16a554c77b2b60ab456eba49a41c.
Jul 9 10:05:36.080937 systemd[1]: cri-containerd-02981ffc89559adf05917cbacb6dcdb5af3c16a554c77b2b60ab456eba49a41c.scope: Deactivated successfully.
Jul 9 10:05:36.082941 containerd[1488]: time="2025-07-09T10:05:36.082905130Z" level=info msg="StartContainer for \"02981ffc89559adf05917cbacb6dcdb5af3c16a554c77b2b60ab456eba49a41c\" returns successfully"
Jul 9 10:05:36.108055 containerd[1488]: time="2025-07-09T10:05:36.107978014Z" level=info msg="shim disconnected" id=02981ffc89559adf05917cbacb6dcdb5af3c16a554c77b2b60ab456eba49a41c namespace=k8s.io
Jul 9 10:05:36.108055 containerd[1488]: time="2025-07-09T10:05:36.108039020Z" level=warning msg="cleaning up after shim disconnected" id=02981ffc89559adf05917cbacb6dcdb5af3c16a554c77b2b60ab456eba49a41c namespace=k8s.io
Jul 9 10:05:36.108055 containerd[1488]: time="2025-07-09T10:05:36.108048287Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 9 10:05:36.864165 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02981ffc89559adf05917cbacb6dcdb5af3c16a554c77b2b60ab456eba49a41c-rootfs.mount: Deactivated successfully.
Jul 9 10:05:36.990190 containerd[1488]: time="2025-07-09T10:05:36.990029736Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 9 10:05:37.005500 containerd[1488]: time="2025-07-09T10:05:37.005444211Z" level=info msg="CreateContainer within sandbox \"d409612b957c9680c0236cfb56604f64d8ea5a50b2a95e421b24dd8b1060c0ec\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"54db87989fa1e3ad590b0f98c480f5534154b9f872b2581045ba9fd3d7e59c44\""
Jul 9 10:05:37.006068 containerd[1488]: time="2025-07-09T10:05:37.006016924Z" level=info msg="StartContainer for \"54db87989fa1e3ad590b0f98c480f5534154b9f872b2581045ba9fd3d7e59c44\""
Jul 9 10:05:37.037300 systemd[1]: Started cri-containerd-54db87989fa1e3ad590b0f98c480f5534154b9f872b2581045ba9fd3d7e59c44.scope - libcontainer container 54db87989fa1e3ad590b0f98c480f5534154b9f872b2581045ba9fd3d7e59c44.
Jul 9 10:05:37.073542 containerd[1488]: time="2025-07-09T10:05:37.073478919Z" level=info msg="StartContainer for \"54db87989fa1e3ad590b0f98c480f5534154b9f872b2581045ba9fd3d7e59c44\" returns successfully"
Jul 9 10:05:37.492195 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 9 10:05:37.527174 kernel: jitterentropy: Initialization failed with host not compliant with requirements: 9
Jul 9 10:05:37.556178 kernel: DRBG: Continuing without Jitter RNG
Jul 9 10:05:38.007568 kubelet[2571]: I0709 10:05:38.007477 2571 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-k69l4" podStartSLOduration=6.007461016 podStartE2EDuration="6.007461016s" podCreationTimestamp="2025-07-09 10:05:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-09 10:05:38.007079103 +0000 UTC m=+90.373134855" watchObservedRunningTime="2025-07-09 10:05:38.007461016 +0000 UTC m=+90.373516778"
Jul 9 10:05:40.621685 systemd-networkd[1418]: lxc_health: Link UP
Jul 9 10:05:40.622032 systemd-networkd[1418]: lxc_health: Gained carrier
Jul 9 10:05:42.253293 systemd-networkd[1418]: lxc_health: Gained IPv6LL
Jul 9 10:05:47.636446 sshd[4446]: Connection closed by 10.0.0.1 port 40418
Jul 9 10:05:47.636904 sshd-session[4439]: pam_unix(sshd:session): session closed for user core
Jul 9 10:05:47.641564 systemd[1]: sshd@27-10.0.0.36:22-10.0.0.1:40418.service: Deactivated successfully.
Jul 9 10:05:47.643889 systemd[1]: session-28.scope: Deactivated successfully.
Jul 9 10:05:47.644615 systemd-logind[1461]: Session 28 logged out. Waiting for processes to exit.
Jul 9 10:05:47.645458 systemd-logind[1461]: Removed session 28.