Jul 6 23:56:56.930729 kernel: Linux version 6.6.95-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 22:23:50 -00 2025 Jul 6 23:56:56.930765 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:56:56.930779 kernel: BIOS-provided physical RAM map: Jul 6 23:56:56.930788 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 6 23:56:56.930795 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable Jul 6 23:56:56.930804 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS Jul 6 23:56:56.930813 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable Jul 6 23:56:56.930822 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS Jul 6 23:56:56.930830 kernel: BIOS-e820: [mem 0x000000000080c000-0x000000000080ffff] usable Jul 6 23:56:56.930838 kernel: BIOS-e820: [mem 0x0000000000810000-0x00000000008fffff] ACPI NVS Jul 6 23:56:56.930849 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009c8eefff] usable Jul 6 23:56:56.930857 kernel: BIOS-e820: [mem 0x000000009c8ef000-0x000000009c9eefff] reserved Jul 6 23:56:56.930865 kernel: BIOS-e820: [mem 0x000000009c9ef000-0x000000009caeefff] type 20 Jul 6 23:56:56.930874 kernel: BIOS-e820: [mem 0x000000009caef000-0x000000009cb6efff] reserved Jul 6 23:56:56.930909 kernel: BIOS-e820: [mem 0x000000009cb6f000-0x000000009cb7efff] ACPI data Jul 6 23:56:56.930919 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS Jul 6 23:56:56.930931 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009cf3ffff] usable Jul 6 23:56:56.930940 
kernel: BIOS-e820: [mem 0x000000009cf40000-0x000000009cf5ffff] reserved Jul 6 23:56:56.930949 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS Jul 6 23:56:56.930957 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Jul 6 23:56:56.930967 kernel: NX (Execute Disable) protection: active Jul 6 23:56:56.930977 kernel: APIC: Static calls initialized Jul 6 23:56:56.930987 kernel: efi: EFI v2.7 by EDK II Jul 6 23:56:56.930998 kernel: efi: SMBIOS=0x9c9ab000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9b674118 Jul 6 23:56:56.931006 kernel: SMBIOS 2.8 present. Jul 6 23:56:56.931015 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 0.0.0 02/06/2015 Jul 6 23:56:56.931024 kernel: Hypervisor detected: KVM Jul 6 23:56:56.931036 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jul 6 23:56:56.931045 kernel: kvm-clock: using sched offset of 4604854576 cycles Jul 6 23:56:56.931054 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jul 6 23:56:56.931063 kernel: tsc: Detected 2794.750 MHz processor Jul 6 23:56:56.931073 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 6 23:56:56.931082 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 6 23:56:56.931091 kernel: last_pfn = 0x9cf40 max_arch_pfn = 0x400000000 Jul 6 23:56:56.931101 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs Jul 6 23:56:56.931110 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 6 23:56:56.931122 kernel: Using GB pages for direct mapping Jul 6 23:56:56.931131 kernel: Secure boot disabled Jul 6 23:56:56.931140 kernel: ACPI: Early table checksum verification disabled Jul 6 23:56:56.931149 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS ) Jul 6 23:56:56.931163 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013) Jul 6 23:56:56.931173 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 
BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:56:56.931183 kernel: ACPI: DSDT 0x000000009CB7A000 0021BA (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:56:56.931195 kernel: ACPI: FACS 0x000000009CBDD000 000040 Jul 6 23:56:56.931204 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:56:56.931214 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:56:56.931224 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:56:56.931233 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jul 6 23:56:56.931243 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013) Jul 6 23:56:56.931252 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3] Jul 6 23:56:56.931265 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1b9] Jul 6 23:56:56.931274 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f] Jul 6 23:56:56.931284 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f] Jul 6 23:56:56.931293 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037] Jul 6 23:56:56.931303 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b] Jul 6 23:56:56.931312 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027] Jul 6 23:56:56.931322 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037] Jul 6 23:56:56.931331 kernel: No NUMA configuration found Jul 6 23:56:56.931340 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cf3ffff] Jul 6 23:56:56.931350 kernel: NODE_DATA(0) allocated [mem 0x9cea6000-0x9ceabfff] Jul 6 23:56:56.931362 kernel: Zone ranges: Jul 6 23:56:56.931381 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 6 23:56:56.931392 kernel: DMA32 [mem 0x0000000001000000-0x000000009cf3ffff] Jul 6 23:56:56.931401 kernel: Normal empty Jul 6 23:56:56.931410 
kernel: Movable zone start for each node Jul 6 23:56:56.931418 kernel: Early memory node ranges Jul 6 23:56:56.931427 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 6 23:56:56.931436 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff] Jul 6 23:56:56.931444 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff] Jul 6 23:56:56.931456 kernel: node 0: [mem 0x000000000080c000-0x000000000080ffff] Jul 6 23:56:56.931464 kernel: node 0: [mem 0x0000000000900000-0x000000009c8eefff] Jul 6 23:56:56.931473 kernel: node 0: [mem 0x000000009cbff000-0x000000009cf3ffff] Jul 6 23:56:56.931484 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cf3ffff] Jul 6 23:56:56.931493 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 6 23:56:56.931502 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 6 23:56:56.931510 kernel: On node 0, zone DMA: 8 pages in unavailable ranges Jul 6 23:56:56.931519 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 6 23:56:56.931528 kernel: On node 0, zone DMA: 240 pages in unavailable ranges Jul 6 23:56:56.931537 kernel: On node 0, zone DMA32: 784 pages in unavailable ranges Jul 6 23:56:56.931548 kernel: On node 0, zone DMA32: 12480 pages in unavailable ranges Jul 6 23:56:56.931557 kernel: ACPI: PM-Timer IO Port: 0x608 Jul 6 23:56:56.931565 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jul 6 23:56:56.931574 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jul 6 23:56:56.931583 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jul 6 23:56:56.931592 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jul 6 23:56:56.931601 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 6 23:56:56.931609 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jul 6 23:56:56.931618 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jul 6 23:56:56.931629 kernel: ACPI: Using ACPI 
(MADT) for SMP configuration information Jul 6 23:56:56.931638 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jul 6 23:56:56.931647 kernel: TSC deadline timer available Jul 6 23:56:56.931656 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs Jul 6 23:56:56.931664 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jul 6 23:56:56.931673 kernel: kvm-guest: KVM setup pv remote TLB flush Jul 6 23:56:56.931690 kernel: kvm-guest: setup PV sched yield Jul 6 23:56:56.931699 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Jul 6 23:56:56.931707 kernel: Booting paravirtualized kernel on KVM Jul 6 23:56:56.931719 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 6 23:56:56.931728 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1 Jul 6 23:56:56.931737 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u524288 Jul 6 23:56:56.931746 kernel: pcpu-alloc: s197096 r8192 d32280 u524288 alloc=1*2097152 Jul 6 23:56:56.931754 kernel: pcpu-alloc: [0] 0 1 2 3 Jul 6 23:56:56.931762 kernel: kvm-guest: PV spinlocks enabled Jul 6 23:56:56.931771 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 6 23:56:56.931781 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876 Jul 6 23:56:56.931793 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Jul 6 23:56:56.931802 kernel: random: crng init done Jul 6 23:56:56.931811 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 6 23:56:56.931820 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jul 6 23:56:56.931828 kernel: Fallback order for Node 0: 0 Jul 6 23:56:56.931837 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629759 Jul 6 23:56:56.931845 kernel: Policy zone: DMA32 Jul 6 23:56:56.931854 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 6 23:56:56.931863 kernel: Memory: 2395616K/2567000K available (12288K kernel code, 2295K rwdata, 22748K rodata, 42868K init, 2324K bss, 171124K reserved, 0K cma-reserved) Jul 6 23:56:56.931875 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jul 6 23:56:56.931906 kernel: ftrace: allocating 37966 entries in 149 pages Jul 6 23:56:56.931916 kernel: ftrace: allocated 149 pages with 4 groups Jul 6 23:56:56.931925 kernel: Dynamic Preempt: voluntary Jul 6 23:56:56.931954 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 6 23:56:56.931983 kernel: rcu: RCU event tracing is enabled. Jul 6 23:56:56.932001 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jul 6 23:56:56.932019 kernel: Trampoline variant of Tasks RCU enabled. Jul 6 23:56:56.932047 kernel: Rude variant of Tasks RCU enabled. Jul 6 23:56:56.932065 kernel: Tracing variant of Tasks RCU enabled. Jul 6 23:56:56.932083 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 6 23:56:56.932094 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jul 6 23:56:56.932122 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16 Jul 6 23:56:56.932136 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 6 23:56:56.932147 kernel: Console: colour dummy device 80x25 Jul 6 23:56:56.932158 kernel: printk: console [ttyS0] enabled Jul 6 23:56:56.932168 kernel: ACPI: Core revision 20230628 Jul 6 23:56:56.932182 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jul 6 23:56:56.932193 kernel: APIC: Switch to symmetric I/O mode setup Jul 6 23:56:56.932204 kernel: x2apic enabled Jul 6 23:56:56.932214 kernel: APIC: Switched APIC routing to: physical x2apic Jul 6 23:56:56.932225 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Jul 6 23:56:56.932236 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Jul 6 23:56:56.932247 kernel: kvm-guest: setup PV IPIs Jul 6 23:56:56.932257 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jul 6 23:56:56.932268 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized Jul 6 23:56:56.932282 kernel: Calibrating delay loop (skipped) preset value.. 
5589.50 BogoMIPS (lpj=2794750) Jul 6 23:56:56.932293 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Jul 6 23:56:56.932303 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Jul 6 23:56:56.932314 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Jul 6 23:56:56.932325 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 6 23:56:56.932335 kernel: Spectre V2 : Mitigation: Retpolines Jul 6 23:56:56.932346 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Jul 6 23:56:56.932357 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls Jul 6 23:56:56.932368 kernel: RETBleed: Mitigation: untrained return thunk Jul 6 23:56:56.932382 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jul 6 23:56:56.932393 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jul 6 23:56:56.932403 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Jul 6 23:56:56.932415 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Jul 6 23:56:56.932426 kernel: x86/bugs: return thunk changed Jul 6 23:56:56.932436 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Jul 6 23:56:56.932447 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 6 23:56:56.932458 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 6 23:56:56.932471 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 6 23:56:56.932482 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 6 23:56:56.932492 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. 
Jul 6 23:56:56.932503 kernel: Freeing SMP alternatives memory: 32K Jul 6 23:56:56.932514 kernel: pid_max: default: 32768 minimum: 301 Jul 6 23:56:56.932524 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jul 6 23:56:56.932535 kernel: landlock: Up and running. Jul 6 23:56:56.932545 kernel: SELinux: Initializing. Jul 6 23:56:56.932556 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:56:56.932569 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Jul 6 23:56:56.932580 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0) Jul 6 23:56:56.932591 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 6 23:56:56.932602 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 6 23:56:56.932612 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jul 6 23:56:56.932623 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Jul 6 23:56:56.932633 kernel: ... version: 0 Jul 6 23:56:56.932644 kernel: ... bit width: 48 Jul 6 23:56:56.932654 kernel: ... generic registers: 6 Jul 6 23:56:56.932668 kernel: ... value mask: 0000ffffffffffff Jul 6 23:56:56.932678 kernel: ... max period: 00007fffffffffff Jul 6 23:56:56.932698 kernel: ... fixed-purpose events: 0 Jul 6 23:56:56.932708 kernel: ... event mask: 000000000000003f Jul 6 23:56:56.932719 kernel: signal: max sigframe size: 1776 Jul 6 23:56:56.932729 kernel: rcu: Hierarchical SRCU implementation. Jul 6 23:56:56.932740 kernel: rcu: Max phase no-delay instances is 400. Jul 6 23:56:56.932751 kernel: smp: Bringing up secondary CPUs ... Jul 6 23:56:56.932761 kernel: smpboot: x86: Booting SMP configuration: Jul 6 23:56:56.932775 kernel: .... 
node #0, CPUs: #1 #2 #3 Jul 6 23:56:56.932786 kernel: smp: Brought up 1 node, 4 CPUs Jul 6 23:56:56.932796 kernel: smpboot: Max logical packages: 1 Jul 6 23:56:56.932807 kernel: smpboot: Total of 4 processors activated (22358.00 BogoMIPS) Jul 6 23:56:56.932818 kernel: devtmpfs: initialized Jul 6 23:56:56.932828 kernel: x86/mm: Memory block size: 128MB Jul 6 23:56:56.932839 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes) Jul 6 23:56:56.932850 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes) Jul 6 23:56:56.932861 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00810000-0x008fffff] (983040 bytes) Jul 6 23:56:56.932874 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes) Jul 6 23:56:56.932922 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes) Jul 6 23:56:56.932934 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 6 23:56:56.932945 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jul 6 23:56:56.932955 kernel: pinctrl core: initialized pinctrl subsystem Jul 6 23:56:56.932966 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 6 23:56:56.932979 kernel: audit: initializing netlink subsys (disabled) Jul 6 23:56:56.932991 kernel: audit: type=2000 audit(1751846216.243:1): state=initialized audit_enabled=0 res=1 Jul 6 23:56:56.933003 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 6 23:56:56.933018 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 6 23:56:56.933028 kernel: cpuidle: using governor menu Jul 6 23:56:56.933039 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 6 23:56:56.933050 kernel: dca service started, version 1.12.1 Jul 6 23:56:56.933060 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Jul 6 23:56:56.933071 kernel: 
PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Jul 6 23:56:56.933082 kernel: PCI: Using configuration type 1 for base access Jul 6 23:56:56.933093 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Jul 6 23:56:56.933103 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 6 23:56:56.933117 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 6 23:56:56.933128 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 6 23:56:56.933138 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 6 23:56:56.933149 kernel: ACPI: Added _OSI(Module Device) Jul 6 23:56:56.933159 kernel: ACPI: Added _OSI(Processor Device) Jul 6 23:56:56.933170 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 6 23:56:56.933181 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 6 23:56:56.933192 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jul 6 23:56:56.933202 kernel: ACPI: Interpreter enabled Jul 6 23:56:56.933215 kernel: ACPI: PM: (supports S0 S3 S5) Jul 6 23:56:56.933226 kernel: ACPI: Using IOAPIC for interrupt routing Jul 6 23:56:56.933237 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 6 23:56:56.933248 kernel: PCI: Using E820 reservations for host bridge windows Jul 6 23:56:56.933258 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Jul 6 23:56:56.933269 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jul 6 23:56:56.933650 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jul 6 23:56:56.933919 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Jul 6 23:56:56.934228 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Jul 6 23:56:56.934246 kernel: PCI host bridge to bus 0000:00 Jul 6 23:56:56.935151 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] 
Jul 6 23:56:56.935286 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Jul 6 23:56:56.935400 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jul 6 23:56:56.935510 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xafffffff window] Jul 6 23:56:56.936693 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jul 6 23:56:56.936820 kernel: pci_bus 0000:00: root bus resource [mem 0x800000000-0xfffffffff window] Jul 6 23:56:56.936944 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jul 6 23:56:56.937107 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Jul 6 23:56:56.937261 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 Jul 6 23:56:56.937392 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref] Jul 6 23:56:56.937521 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff] Jul 6 23:56:56.937652 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref] Jul 6 23:56:56.937783 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb Jul 6 23:56:56.937930 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jul 6 23:56:56.938103 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 Jul 6 23:56:56.938237 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f] Jul 6 23:56:56.939592 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff] Jul 6 23:56:56.939750 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x800000000-0x800003fff 64bit pref] Jul 6 23:56:56.939915 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 Jul 6 23:56:56.940041 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f] Jul 6 23:56:56.940162 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff] Jul 6 23:56:56.940283 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x800004000-0x800007fff 64bit pref] Jul 6 23:56:56.940429 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jul 6 23:56:56.940560 kernel: pci 
0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff] Jul 6 23:56:56.940690 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff] Jul 6 23:56:56.940818 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x800008000-0x80000bfff 64bit pref] Jul 6 23:56:56.941033 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref] Jul 6 23:56:56.941196 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Jul 6 23:56:56.941372 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Jul 6 23:56:56.941519 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Jul 6 23:56:56.941638 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df] Jul 6 23:56:56.941766 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff] Jul 6 23:56:56.941934 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Jul 6 23:56:56.943168 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf] Jul 6 23:56:56.943182 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jul 6 23:56:56.943190 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jul 6 23:56:56.943198 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jul 6 23:56:56.943206 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jul 6 23:56:56.943213 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Jul 6 23:56:56.943221 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Jul 6 23:56:56.943233 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Jul 6 23:56:56.943240 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Jul 6 23:56:56.943248 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Jul 6 23:56:56.943256 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Jul 6 23:56:56.943263 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Jul 6 23:56:56.943271 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Jul 6 23:56:56.943279 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Jul 6 
23:56:56.943286 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Jul 6 23:56:56.943294 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Jul 6 23:56:56.943304 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Jul 6 23:56:56.943311 kernel: iommu: Default domain type: Translated Jul 6 23:56:56.943319 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 6 23:56:56.943327 kernel: efivars: Registered efivars operations Jul 6 23:56:56.943334 kernel: PCI: Using ACPI for IRQ routing Jul 6 23:56:56.943342 kernel: PCI: pci_cache_line_size set to 64 bytes Jul 6 23:56:56.943349 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff] Jul 6 23:56:56.943357 kernel: e820: reserve RAM buffer [mem 0x00810000-0x008fffff] Jul 6 23:56:56.943370 kernel: e820: reserve RAM buffer [mem 0x9c8ef000-0x9fffffff] Jul 6 23:56:56.943380 kernel: e820: reserve RAM buffer [mem 0x9cf40000-0x9fffffff] Jul 6 23:56:56.943515 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Jul 6 23:56:56.943652 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Jul 6 23:56:56.943794 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jul 6 23:56:56.943806 kernel: vgaarb: loaded Jul 6 23:56:56.943814 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jul 6 23:56:56.943822 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jul 6 23:56:56.943830 kernel: clocksource: Switched to clocksource kvm-clock Jul 6 23:56:56.943842 kernel: VFS: Disk quotas dquot_6.6.0 Jul 6 23:56:56.943850 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 6 23:56:56.943858 kernel: pnp: PnP ACPI init Jul 6 23:56:56.944028 kernel: system 00:05: [mem 0xb0000000-0xbfffffff window] has been reserved Jul 6 23:56:56.944042 kernel: pnp: PnP ACPI: found 6 devices Jul 6 23:56:56.944052 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 6 23:56:56.944061 kernel: NET: Registered 
PF_INET protocol family Jul 6 23:56:56.944068 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 6 23:56:56.944081 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jul 6 23:56:56.944088 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 6 23:56:56.944096 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jul 6 23:56:56.944104 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jul 6 23:56:56.944111 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jul 6 23:56:56.944119 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:56:56.944126 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jul 6 23:56:56.944134 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 6 23:56:56.944141 kernel: NET: Registered PF_XDP protocol family Jul 6 23:56:56.944267 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window Jul 6 23:56:56.944388 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref] Jul 6 23:56:56.944500 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jul 6 23:56:56.944609 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jul 6 23:56:56.944729 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jul 6 23:56:56.944839 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xafffffff window] Jul 6 23:56:56.944963 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Jul 6 23:56:56.945074 kernel: pci_bus 0000:00: resource 9 [mem 0x800000000-0xfffffffff window] Jul 6 23:56:56.945088 kernel: PCI: CLS 0 bytes, default 64 Jul 6 23:56:56.945096 kernel: Initialise system trusted keyrings Jul 6 23:56:56.945104 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jul 6 23:56:56.945112 kernel: Key type asymmetric 
registered Jul 6 23:56:56.945119 kernel: Asymmetric key parser 'x509' registered Jul 6 23:56:56.945127 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jul 6 23:56:56.945134 kernel: io scheduler mq-deadline registered Jul 6 23:56:56.945142 kernel: io scheduler kyber registered Jul 6 23:56:56.945150 kernel: io scheduler bfq registered Jul 6 23:56:56.945160 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 6 23:56:56.945168 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Jul 6 23:56:56.945176 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Jul 6 23:56:56.945184 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Jul 6 23:56:56.945192 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 6 23:56:56.945199 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 6 23:56:56.945207 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jul 6 23:56:56.945215 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jul 6 23:56:56.945222 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jul 6 23:56:56.945357 kernel: rtc_cmos 00:04: RTC can wake from S4 Jul 6 23:56:56.945369 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jul 6 23:56:56.945481 kernel: rtc_cmos 00:04: registered as rtc0 Jul 6 23:56:56.945593 kernel: rtc_cmos 00:04: setting system clock to 2025-07-06T23:56:56 UTC (1751846216) Jul 6 23:56:56.945714 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Jul 6 23:56:56.945725 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Jul 6 23:56:56.945733 kernel: efifb: probing for efifb Jul 6 23:56:56.945741 kernel: efifb: framebuffer at 0xc0000000, using 1408k, total 1408k Jul 6 23:56:56.945752 kernel: efifb: mode is 800x600x24, linelength=2400, pages=1 Jul 6 23:56:56.945760 kernel: efifb: scrolling: redraw Jul 6 23:56:56.945768 kernel: efifb: Truecolor: size=0:8:8:8, shift=0:16:8:0 Jul 6 
23:56:56.945775 kernel: Console: switching to colour frame buffer device 100x37 Jul 6 23:56:56.945783 kernel: fb0: EFI VGA frame buffer device Jul 6 23:56:56.945810 kernel: pstore: Using crash dump compression: deflate Jul 6 23:56:56.945820 kernel: pstore: Registered efi_pstore as persistent store backend Jul 6 23:56:56.945828 kernel: NET: Registered PF_INET6 protocol family Jul 6 23:56:56.945836 kernel: Segment Routing with IPv6 Jul 6 23:56:56.945846 kernel: In-situ OAM (IOAM) with IPv6 Jul 6 23:56:56.945854 kernel: NET: Registered PF_PACKET protocol family Jul 6 23:56:56.945862 kernel: Key type dns_resolver registered Jul 6 23:56:56.945872 kernel: IPI shorthand broadcast: enabled Jul 6 23:56:56.945941 kernel: sched_clock: Marking stable (867002170, 111112035)->(1028508794, -50394589) Jul 6 23:56:56.945951 kernel: registered taskstats version 1 Jul 6 23:56:56.945959 kernel: Loading compiled-in X.509 certificates Jul 6 23:56:56.945967 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.95-flatcar: 6372c48ca52cc7f7bbee5675b604584c1c68ec5b' Jul 6 23:56:56.945975 kernel: Key type .fscrypt registered Jul 6 23:56:56.945987 kernel: Key type fscrypt-provisioning registered Jul 6 23:56:56.945995 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 6 23:56:56.946002 kernel: ima: Allocated hash algorithm: sha1 Jul 6 23:56:56.946010 kernel: ima: No architecture policies found Jul 6 23:56:56.946018 kernel: clk: Disabling unused clocks Jul 6 23:56:56.946026 kernel: Freeing unused kernel image (initmem) memory: 42868K Jul 6 23:56:56.946034 kernel: Write protecting the kernel read-only data: 36864k Jul 6 23:56:56.946042 kernel: Freeing unused kernel image (rodata/data gap) memory: 1828K Jul 6 23:56:56.946052 kernel: Run /init as init process Jul 6 23:56:56.946060 kernel: with arguments: Jul 6 23:56:56.946068 kernel: /init Jul 6 23:56:56.946076 kernel: with environment: Jul 6 23:56:56.946084 kernel: HOME=/ Jul 6 23:56:56.946092 kernel: TERM=linux Jul 6 23:56:56.946100 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 6 23:56:56.946109 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jul 6 23:56:56.946123 systemd[1]: Detected virtualization kvm. Jul 6 23:56:56.946131 systemd[1]: Detected architecture x86-64. Jul 6 23:56:56.946139 systemd[1]: Running in initrd. Jul 6 23:56:56.946147 systemd[1]: No hostname configured, using default hostname. Jul 6 23:56:56.946155 systemd[1]: Hostname set to . Jul 6 23:56:56.946169 systemd[1]: Initializing machine ID from VM UUID. Jul 6 23:56:56.946177 systemd[1]: Queued start job for default target initrd.target. Jul 6 23:56:56.946186 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 6 23:56:56.946194 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 6 23:56:56.946203 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Jul 6 23:56:56.946212 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:56:56.946220 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 6 23:56:56.946229 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 6 23:56:56.946241 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 6 23:56:56.946250 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 6 23:56:56.946258 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:56:56.946266 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:56:56.946274 systemd[1]: Reached target paths.target - Path Units.
Jul 6 23:56:56.946283 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:56:56.946291 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:56:56.946301 systemd[1]: Reached target timers.target - Timer Units.
Jul 6 23:56:56.946310 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:56:56.946321 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:56:56.946329 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 6 23:56:56.946337 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jul 6 23:56:56.946346 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:56:56.946354 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:56:56.946362 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:56:56.946371 systemd[1]: Reached target sockets.target - Socket Units.
Jul 6 23:56:56.946381 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 6 23:56:56.946389 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:56:56.946398 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 6 23:56:56.946407 systemd[1]: Starting systemd-fsck-usr.service...
Jul 6 23:56:56.946415 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:56:56.946423 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:56:56.946431 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:56:56.946440 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 6 23:56:56.946450 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:56:56.946459 systemd[1]: Finished systemd-fsck-usr.service.
Jul 6 23:56:56.946468 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:56:56.946498 systemd-journald[192]: Collecting audit messages is disabled.
Jul 6 23:56:56.946521 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:56:56.946530 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:56:56.946538 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:56:56.946547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:56:56.946558 systemd-journald[192]: Journal started
Jul 6 23:56:56.946576 systemd-journald[192]: Runtime Journal (/run/log/journal/a0ebef15d948401c8cf38a4bf0e942f4) is 6.0M, max 48.3M, 42.2M free.
Jul 6 23:56:56.935593 systemd-modules-load[193]: Inserted module 'overlay'
Jul 6 23:56:56.949151 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:56:56.955031 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:56:56.959069 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:56:56.961642 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:56:56.965563 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 6 23:56:56.972578 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:56:56.974184 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 6 23:56:56.976496 systemd-modules-load[193]: Inserted module 'br_netfilter'
Jul 6 23:56:56.977461 kernel: Bridge firewalling registered
Jul 6 23:56:56.978107 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:56:56.981461 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:56:56.988549 dracut-cmdline[221]: dracut-dracut-053
Jul 6 23:56:56.991488 dracut-cmdline[221]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=65c65ff9d50198f0ae5c37458dc3ff85c6a690e7aa124bb306a2f4c63a54d876
Jul 6 23:56:57.001310 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:56:57.006136 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:56:57.041565 systemd-resolved[248]: Positive Trust Anchors:
Jul 6 23:56:57.041600 systemd-resolved[248]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 6 23:56:57.041637 systemd-resolved[248]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 6 23:56:57.045449 systemd-resolved[248]: Defaulting to hostname 'linux'.
Jul 6 23:56:57.047115 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 6 23:56:57.052718 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:56:57.083926 kernel: SCSI subsystem initialized
Jul 6 23:56:57.093909 kernel: Loading iSCSI transport class v2.0-870.
Jul 6 23:56:57.103909 kernel: iscsi: registered transport (tcp)
Jul 6 23:56:57.126920 kernel: iscsi: registered transport (qla4xxx)
Jul 6 23:56:57.126950 kernel: QLogic iSCSI HBA Driver
Jul 6 23:56:57.178011 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:56:57.189071 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 6 23:56:57.214181 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 6 23:56:57.214263 kernel: device-mapper: uevent: version 1.0.3
Jul 6 23:56:57.215585 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jul 6 23:56:57.257942 kernel: raid6: avx2x4 gen() 24851 MB/s
Jul 6 23:56:57.274926 kernel: raid6: avx2x2 gen() 26791 MB/s
Jul 6 23:56:57.292035 kernel: raid6: avx2x1 gen() 22164 MB/s
Jul 6 23:56:57.292111 kernel: raid6: using algorithm avx2x2 gen() 26791 MB/s
Jul 6 23:56:57.310176 kernel: raid6: .... xor() 15659 MB/s, rmw enabled
Jul 6 23:56:57.310233 kernel: raid6: using avx2x2 recovery algorithm
Jul 6 23:56:57.331926 kernel: xor: automatically using best checksumming function avx
Jul 6 23:56:57.487919 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 6 23:56:57.501781 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:56:57.518166 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:56:57.530032 systemd-udevd[412]: Using default interface naming scheme 'v255'.
Jul 6 23:56:57.534761 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:56:57.546071 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 6 23:56:57.559301 dracut-pre-trigger[420]: rd.md=0: removing MD RAID activation
Jul 6 23:56:57.594616 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:56:57.610191 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:56:57.674741 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:56:57.684125 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 6 23:56:57.699515 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:56:57.702268 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:56:57.704120 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:56:57.705777 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:56:57.712042 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 6 23:56:57.719100 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 6 23:56:57.723909 kernel: cryptd: max_cpu_qlen set to 1000
Jul 6 23:56:57.733279 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 6 23:56:57.738089 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:56:57.747775 kernel: libata version 3.00 loaded.
Jul 6 23:56:57.747835 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 6 23:56:57.747859 kernel: GPT:9289727 != 19775487
Jul 6 23:56:57.747872 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 6 23:56:57.747903 kernel: GPT:9289727 != 19775487
Jul 6 23:56:57.747916 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 6 23:56:57.747930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:56:57.745225 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:56:57.745387 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:56:57.747556 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:56:57.748632 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:56:57.750978 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:56:57.755162 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:56:57.758917 kernel: AVX2 version of gcm_enc/dec engaged.
Jul 6 23:56:57.758947 kernel: AES CTR mode by8 optimization enabled
Jul 6 23:56:57.766341 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 6 23:56:57.776852 kernel: ahci 0000:00:1f.2: version 3.0
Jul 6 23:56:57.777080 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 6 23:56:57.777093 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Jul 6 23:56:57.777236 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 6 23:56:57.777376 kernel: BTRFS: device fsid 01287863-c21f-4cbb-820d-bbae8208f32f devid 1 transid 34 /dev/vda3 scanned by (udev-worker) (460)
Jul 6 23:56:57.777387 kernel: scsi host0: ahci
Jul 6 23:56:57.779914 kernel: scsi host1: ahci
Jul 6 23:56:57.782447 kernel: scsi host2: ahci
Jul 6 23:56:57.783911 kernel: scsi host3: ahci
Jul 6 23:56:57.786956 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/vda6 scanned by (udev-worker) (456)
Jul 6 23:56:57.786980 kernel: scsi host4: ahci
Jul 6 23:56:57.788749 kernel: scsi host5: ahci
Jul 6 23:56:57.788964 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Jul 6 23:56:57.791903 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Jul 6 23:56:57.791934 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Jul 6 23:56:57.791944 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Jul 6 23:56:57.791955 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Jul 6 23:56:57.792817 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 6 23:56:57.795221 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Jul 6 23:56:57.796938 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:56:57.802207 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 6 23:56:57.802678 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 6 23:56:57.812956 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 6 23:56:57.820254 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 6 23:56:57.838174 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 6 23:56:57.839527 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 6 23:56:57.850259 disk-uuid[555]: Primary Header is updated.
Jul 6 23:56:57.850259 disk-uuid[555]: Secondary Entries is updated.
Jul 6 23:56:57.850259 disk-uuid[555]: Secondary Header is updated.
Jul 6 23:56:57.854954 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:56:57.859931 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:56:57.861552 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:56:58.103663 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 6 23:56:58.103739 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 6 23:56:58.103750 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 6 23:56:58.103760 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 6 23:56:58.104915 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 6 23:56:58.105913 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 6 23:56:58.105928 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 6 23:56:58.106909 kernel: ata3.00: applying bridge limits
Jul 6 23:56:58.106924 kernel: ata3.00: configured for UDMA/100
Jul 6 23:56:58.107907 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 6 23:56:58.158926 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 6 23:56:58.159318 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 6 23:56:58.178050 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 6 23:56:58.951910 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 6 23:56:58.952119 disk-uuid[558]: The operation has completed successfully.
Jul 6 23:56:58.977176 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 6 23:56:58.977292 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 6 23:56:59.009055 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 6 23:56:59.012311 sh[592]: Success
Jul 6 23:56:59.023909 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Jul 6 23:56:59.055607 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 6 23:56:59.069271 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 6 23:56:59.073440 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 6 23:56:59.082903 kernel: BTRFS info (device dm-0): first mount of filesystem 01287863-c21f-4cbb-820d-bbae8208f32f
Jul 6 23:56:59.082932 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:56:59.082943 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jul 6 23:56:59.084561 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jul 6 23:56:59.084585 kernel: BTRFS info (device dm-0): using free space tree
Jul 6 23:56:59.089391 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 6 23:56:59.090170 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 6 23:56:59.090939 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 6 23:56:59.092425 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 6 23:56:59.106439 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:56:59.106476 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:56:59.106491 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:56:59.109909 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:56:59.117970 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jul 6 23:56:59.119609 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:56:59.128790 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 6 23:56:59.135020 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 6 23:56:59.189097 ignition[698]: Ignition 2.19.0
Jul 6 23:56:59.189108 ignition[698]: Stage: fetch-offline
Jul 6 23:56:59.189147 ignition[698]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:56:59.189157 ignition[698]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:56:59.189246 ignition[698]: parsed url from cmdline: ""
Jul 6 23:56:59.189250 ignition[698]: no config URL provided
Jul 6 23:56:59.189255 ignition[698]: reading system config file "/usr/lib/ignition/user.ign"
Jul 6 23:56:59.189264 ignition[698]: no config at "/usr/lib/ignition/user.ign"
Jul 6 23:56:59.189289 ignition[698]: op(1): [started] loading QEMU firmware config module
Jul 6 23:56:59.189294 ignition[698]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 6 23:56:59.198783 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:56:59.198952 ignition[698]: op(1): [finished] loading QEMU firmware config module
Jul 6 23:56:59.202440 ignition[698]: parsing config with SHA512: 7f929b92e178482238b9a17f337585f0234d27e93c0f4b9bb3386f543d97514d342f9b90795384a7c47659eee729f59a94b0318858f746c06366116f5a0f7078
Jul 6 23:56:59.204822 unknown[698]: fetched base config from "system"
Jul 6 23:56:59.204834 unknown[698]: fetched user config from "qemu"
Jul 6 23:56:59.205107 ignition[698]: fetch-offline: fetch-offline passed
Jul 6 23:56:59.205168 ignition[698]: Ignition finished successfully
Jul 6 23:56:59.207392 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 6 23:56:59.211384 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:56:59.244360 systemd-networkd[779]: lo: Link UP
Jul 6 23:56:59.244368 systemd-networkd[779]: lo: Gained carrier
Jul 6 23:56:59.246105 systemd-networkd[779]: Enumeration completed
Jul 6 23:56:59.246204 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 6 23:56:59.246563 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:56:59.246567 systemd-networkd[779]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 6 23:56:59.248315 systemd[1]: Reached target network.target - Network.
Jul 6 23:56:59.248386 systemd-networkd[779]: eth0: Link UP
Jul 6 23:56:59.248390 systemd-networkd[779]: eth0: Gained carrier
Jul 6 23:56:59.248398 systemd-networkd[779]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 6 23:56:59.250209 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 6 23:56:59.259021 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 6 23:56:59.266952 systemd-networkd[779]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 6 23:56:59.273805 ignition[782]: Ignition 2.19.0
Jul 6 23:56:59.273815 ignition[782]: Stage: kargs
Jul 6 23:56:59.274001 ignition[782]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:56:59.274012 ignition[782]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:56:59.277317 ignition[782]: kargs: kargs passed
Jul 6 23:56:59.277359 ignition[782]: Ignition finished successfully
Jul 6 23:56:59.281460 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 6 23:56:59.290124 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 6 23:56:59.303583 ignition[791]: Ignition 2.19.0
Jul 6 23:56:59.303593 ignition[791]: Stage: disks
Jul 6 23:56:59.303752 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Jul 6 23:56:59.303763 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:56:59.307101 ignition[791]: disks: disks passed
Jul 6 23:56:59.307144 ignition[791]: Ignition finished successfully
Jul 6 23:56:59.310219 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jul 6 23:56:59.310699 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jul 6 23:56:59.312272 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jul 6 23:56:59.314378 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:56:59.314697 systemd[1]: Reached target sysinit.target - System Initialization.
Jul 6 23:56:59.315163 systemd[1]: Reached target basic.target - Basic System.
Jul 6 23:56:59.331015 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jul 6 23:56:59.344858 systemd-fsck[801]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jul 6 23:56:59.351089 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jul 6 23:56:59.364001 systemd[1]: Mounting sysroot.mount - /sysroot...
Jul 6 23:56:59.448910 kernel: EXT4-fs (vda9): mounted filesystem c3eefe20-4a42-420d-8034-4d5498275b2f r/w with ordered data mode. Quota mode: none.
Jul 6 23:56:59.449138 systemd[1]: Mounted sysroot.mount - /sysroot.
Jul 6 23:56:59.450081 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:56:59.462962 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:56:59.464554 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jul 6 23:56:59.465664 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jul 6 23:56:59.465699 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jul 6 23:56:59.476343 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/vda6 scanned by mount (809)
Jul 6 23:56:59.476368 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:56:59.476379 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:56:59.476389 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:56:59.465720 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:56:59.479539 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:56:59.472605 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jul 6 23:56:59.477112 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jul 6 23:56:59.481371 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:56:59.511934 initrd-setup-root[833]: cut: /sysroot/etc/passwd: No such file or directory
Jul 6 23:56:59.518139 initrd-setup-root[840]: cut: /sysroot/etc/group: No such file or directory
Jul 6 23:56:59.522112 initrd-setup-root[847]: cut: /sysroot/etc/shadow: No such file or directory
Jul 6 23:56:59.525919 initrd-setup-root[854]: cut: /sysroot/etc/gshadow: No such file or directory
Jul 6 23:56:59.603410 systemd-resolved[248]: Detected conflict on linux IN A 10.0.0.116
Jul 6 23:56:59.603425 systemd-resolved[248]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Jul 6 23:56:59.608513 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jul 6 23:56:59.620981 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jul 6 23:56:59.622550 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jul 6 23:56:59.628911 kernel: BTRFS info (device vda6): last unmount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:56:59.646412 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jul 6 23:56:59.657502 ignition[923]: INFO : Ignition 2.19.0
Jul 6 23:56:59.657502 ignition[923]: INFO : Stage: mount
Jul 6 23:56:59.659672 ignition[923]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:56:59.659672 ignition[923]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:56:59.659672 ignition[923]: INFO : mount: mount passed
Jul 6 23:56:59.659672 ignition[923]: INFO : Ignition finished successfully
Jul 6 23:56:59.660692 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jul 6 23:56:59.669112 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jul 6 23:57:00.082454 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jul 6 23:57:00.093135 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jul 6 23:57:00.100901 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 scanned by mount (935)
Jul 6 23:57:00.103456 kernel: BTRFS info (device vda6): first mount of filesystem 11f56a79-b29d-47db-ad8e-56effe5ac41b
Jul 6 23:57:00.103478 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 6 23:57:00.103489 kernel: BTRFS info (device vda6): using free space tree
Jul 6 23:57:00.105904 kernel: BTRFS info (device vda6): auto enabling async discard
Jul 6 23:57:00.107233 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jul 6 23:57:00.136875 ignition[952]: INFO : Ignition 2.19.0
Jul 6 23:57:00.136875 ignition[952]: INFO : Stage: files
Jul 6 23:57:00.138453 ignition[952]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:57:00.138453 ignition[952]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:57:00.140896 ignition[952]: DEBUG : files: compiled without relabeling support, skipping
Jul 6 23:57:00.142112 ignition[952]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jul 6 23:57:00.142112 ignition[952]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jul 6 23:57:00.145203 ignition[952]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jul 6 23:57:00.146545 ignition[952]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jul 6 23:57:00.148155 unknown[952]: wrote ssh authorized keys file for user: core
Jul 6 23:57:00.149182 ignition[952]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jul 6 23:57:00.151452 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Jul 6 23:57:00.153213 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Jul 6 23:57:00.154988 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:57:00.156726 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jul 6 23:57:00.158387 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:57:00.160807 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:57:00.163325 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:57:00.165371 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Jul 6 23:57:00.747225 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Jul 6 23:57:00.856092 systemd-networkd[779]: eth0: Gained IPv6LL
Jul 6 23:57:01.380753 ignition[952]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Jul 6 23:57:01.380753 ignition[952]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Jul 6 23:57:01.384670 ignition[952]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:57:01.384670 ignition[952]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Jul 6 23:57:01.384670 ignition[952]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Jul 6 23:57:01.384670 ignition[952]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:57:01.405175 ignition[952]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:57:01.410477 ignition[952]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Jul 6 23:57:01.412112 ignition[952]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Jul 6 23:57:01.412112 ignition[952]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:57:01.412112 ignition[952]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jul 6 23:57:01.412112 ignition[952]: INFO : files: files passed
Jul 6 23:57:01.412112 ignition[952]: INFO : Ignition finished successfully
Jul 6 23:57:01.413836 systemd[1]: Finished ignition-files.service - Ignition (files).
Jul 6 23:57:01.427017 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jul 6 23:57:01.429540 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jul 6 23:57:01.431318 systemd[1]: ignition-quench.service: Deactivated successfully.
Jul 6 23:57:01.431421 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jul 6 23:57:01.440201 initrd-setup-root-after-ignition[980]: grep: /sysroot/oem/oem-release: No such file or directory
Jul 6 23:57:01.443225 initrd-setup-root-after-ignition[982]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:57:01.443225 initrd-setup-root-after-ignition[982]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:57:01.447325 initrd-setup-root-after-ignition[986]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jul 6 23:57:01.445947 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:57:01.447516 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jul 6 23:57:01.454146 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jul 6 23:57:01.480573 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jul 6 23:57:01.480719 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jul 6 23:57:01.482968 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jul 6 23:57:01.484901 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jul 6 23:57:01.486811 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jul 6 23:57:01.487805 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jul 6 23:57:01.507925 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:57:01.526235 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jul 6 23:57:01.537109 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jul 6 23:57:01.537513 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:57:01.539739 systemd[1]: Stopped target timers.target - Timer Units.
Jul 6 23:57:01.540182 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jul 6 23:57:01.540309 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jul 6 23:57:01.544941 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jul 6 23:57:01.545415 systemd[1]: Stopped target basic.target - Basic System.
Jul 6 23:57:01.545736 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jul 6 23:57:01.549518 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jul 6 23:57:01.551442 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jul 6 23:57:01.553482 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jul 6 23:57:01.553792 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 6 23:57:01.554280 systemd[1]: Stopped target sysinit.target - System Initialization.
Jul 6 23:57:01.559331 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jul 6 23:57:01.561241 systemd[1]: Stopped target swap.target - Swaps.
Jul 6 23:57:01.562824 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jul 6 23:57:01.563001 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jul 6 23:57:01.565739 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jul 6 23:57:01.566393 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:57:01.566672 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jul 6 23:57:01.571324 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:57:01.573709 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jul 6 23:57:01.573844 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jul 6 23:57:01.576442 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jul 6 23:57:01.576570 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 6 23:57:01.577309 systemd[1]: Stopped target paths.target - Path Units.
Jul 6 23:57:01.579826 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jul 6 23:57:01.584972 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:57:01.585338 systemd[1]: Stopped target slices.target - Slice Units.
Jul 6 23:57:01.588261 systemd[1]: Stopped target sockets.target - Socket Units.
Jul 6 23:57:01.589706 systemd[1]: iscsid.socket: Deactivated successfully.
Jul 6 23:57:01.589813 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jul 6 23:57:01.591330 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jul 6 23:57:01.591419 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 6 23:57:01.593202 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jul 6 23:57:01.593322 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jul 6 23:57:01.594765 systemd[1]: ignition-files.service: Deactivated successfully.
Jul 6 23:57:01.594868 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jul 6 23:57:01.607078 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jul 6 23:57:01.608195 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jul 6 23:57:01.609301 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jul 6 23:57:01.609443 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:57:01.611312 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jul 6 23:57:01.611502 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 6 23:57:01.616568 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jul 6 23:57:01.616682 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jul 6 23:57:01.631360 ignition[1006]: INFO : Ignition 2.19.0
Jul 6 23:57:01.631360 ignition[1006]: INFO : Stage: umount
Jul 6 23:57:01.633433 ignition[1006]: INFO : no configs at "/usr/lib/ignition/base.d"
Jul 6 23:57:01.633433 ignition[1006]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 6 23:57:01.633433 ignition[1006]: INFO : umount: umount passed
Jul 6 23:57:01.633433 ignition[1006]: INFO : Ignition finished successfully
Jul 6 23:57:01.636038 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jul 6 23:57:01.636600 systemd[1]: ignition-mount.service: Deactivated successfully.
Jul 6 23:57:01.636725 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jul 6 23:57:01.638683 systemd[1]: Stopped target network.target - Network.
Jul 6 23:57:01.640231 systemd[1]: ignition-disks.service: Deactivated successfully.
Jul 6 23:57:01.640287 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jul 6 23:57:01.642107 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jul 6 23:57:01.642161 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jul 6 23:57:01.643974 systemd[1]: ignition-setup.service: Deactivated successfully.
Jul 6 23:57:01.644022 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jul 6 23:57:01.645763 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jul 6 23:57:01.645811 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jul 6 23:57:01.647844 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jul 6 23:57:01.649626 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jul 6 23:57:01.655964 systemd-networkd[779]: eth0: DHCPv6 lease lost
Jul 6 23:57:01.658587 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jul 6 23:57:01.658756 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jul 6 23:57:01.660874 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jul 6 23:57:01.660932 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:57:01.677047 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jul 6 23:57:01.678917 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jul 6 23:57:01.678999 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 6 23:57:01.681353 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:57:01.684355 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jul 6 23:57:01.684517 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jul 6 23:57:01.691197 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 6 23:57:01.691268 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:57:01.693227 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jul 6 23:57:01.693280 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:57:01.695142 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jul 6 23:57:01.695196 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:57:01.697575 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jul 6 23:57:01.697753 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:57:01.699770 systemd[1]: network-cleanup.service: Deactivated successfully.
Jul 6 23:57:01.699882 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jul 6 23:57:01.702590 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jul 6 23:57:01.702667 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:57:01.704687 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jul 6 23:57:01.704728 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:57:01.707108 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jul 6 23:57:01.707164 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jul 6 23:57:01.709292 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jul 6 23:57:01.709341 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jul 6 23:57:01.711111 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jul 6 23:57:01.711162 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 6 23:57:01.729027 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jul 6 23:57:01.729438 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jul 6 23:57:01.729491 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:57:01.729793 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Jul 6 23:57:01.729837 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:57:01.730109 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jul 6 23:57:01.730151 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:57:01.730436 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 6 23:57:01.730488 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 6 23:57:01.737263 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jul 6 23:57:01.737371 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jul 6 23:57:01.813971 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jul 6 23:57:01.814121 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jul 6 23:57:01.816553 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jul 6 23:57:01.817756 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jul 6 23:57:01.817819 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jul 6 23:57:01.830011 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jul 6 23:57:01.839327 systemd[1]: Switching root.
Jul 6 23:57:01.866153 systemd-journald[192]: Journal stopped
Jul 6 23:57:03.009298 systemd-journald[192]: Received SIGTERM from PID 1 (systemd).
Jul 6 23:57:03.009379 kernel: SELinux: policy capability network_peer_controls=1
Jul 6 23:57:03.009393 kernel: SELinux: policy capability open_perms=1
Jul 6 23:57:03.009405 kernel: SELinux: policy capability extended_socket_class=1
Jul 6 23:57:03.009422 kernel: SELinux: policy capability always_check_network=0
Jul 6 23:57:03.009434 kernel: SELinux: policy capability cgroup_seclabel=1
Jul 6 23:57:03.009445 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jul 6 23:57:03.009456 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jul 6 23:57:03.009468 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jul 6 23:57:03.009479 kernel: audit: type=1403 audit(1751846222.252:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jul 6 23:57:03.009492 systemd[1]: Successfully loaded SELinux policy in 42.466ms.
Jul 6 23:57:03.009520 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.933ms.
Jul 6 23:57:03.009533 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jul 6 23:57:03.009549 systemd[1]: Detected virtualization kvm.
Jul 6 23:57:03.009561 systemd[1]: Detected architecture x86-64.
Jul 6 23:57:03.009576 systemd[1]: Detected first boot.
Jul 6 23:57:03.009588 systemd[1]: Initializing machine ID from VM UUID.
Jul 6 23:57:03.009600 zram_generator::config[1050]: No configuration found.
Jul 6 23:57:03.009620 systemd[1]: Populated /etc with preset unit settings.
Jul 6 23:57:03.009632 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jul 6 23:57:03.009644 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jul 6 23:57:03.009656 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jul 6 23:57:03.009668 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jul 6 23:57:03.009681 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jul 6 23:57:03.009693 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jul 6 23:57:03.009705 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jul 6 23:57:03.009719 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jul 6 23:57:03.009732 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jul 6 23:57:03.009748 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jul 6 23:57:03.009760 systemd[1]: Created slice user.slice - User and Session Slice.
Jul 6 23:57:03.009772 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 6 23:57:03.009785 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 6 23:57:03.009798 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jul 6 23:57:03.009810 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jul 6 23:57:03.009822 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jul 6 23:57:03.009837 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 6 23:57:03.009849 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jul 6 23:57:03.009860 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 6 23:57:03.009872 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jul 6 23:57:03.009897 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jul 6 23:57:03.009909 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jul 6 23:57:03.009921 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jul 6 23:57:03.009932 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 6 23:57:03.009947 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 6 23:57:03.009959 systemd[1]: Reached target slices.target - Slice Units.
Jul 6 23:57:03.009971 systemd[1]: Reached target swap.target - Swaps.
Jul 6 23:57:03.009983 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jul 6 23:57:03.009995 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jul 6 23:57:03.010006 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 6 23:57:03.010018 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 6 23:57:03.010030 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 6 23:57:03.010041 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jul 6 23:57:03.010056 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jul 6 23:57:03.010068 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jul 6 23:57:03.010080 systemd[1]: Mounting media.mount - External Media Directory...
Jul 6 23:57:03.010092 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:57:03.010104 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jul 6 23:57:03.010116 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jul 6 23:57:03.010128 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jul 6 23:57:03.010140 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jul 6 23:57:03.010160 systemd[1]: Reached target machines.target - Containers.
Jul 6 23:57:03.010171 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jul 6 23:57:03.010184 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:57:03.010195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 6 23:57:03.010207 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jul 6 23:57:03.010219 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:57:03.010231 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jul 6 23:57:03.010243 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:57:03.010254 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jul 6 23:57:03.010269 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:57:03.010281 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jul 6 23:57:03.010293 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jul 6 23:57:03.010306 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jul 6 23:57:03.010317 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jul 6 23:57:03.010331 systemd[1]: Stopped systemd-fsck-usr.service.
Jul 6 23:57:03.010342 kernel: loop: module loaded
Jul 6 23:57:03.010354 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 6 23:57:03.010365 kernel: fuse: init (API version 7.39)
Jul 6 23:57:03.010379 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 6 23:57:03.010391 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 6 23:57:03.010403 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jul 6 23:57:03.010416 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 6 23:57:03.010428 systemd[1]: verity-setup.service: Deactivated successfully.
Jul 6 23:57:03.010442 systemd[1]: Stopped verity-setup.service.
Jul 6 23:57:03.010456 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:57:03.010469 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jul 6 23:57:03.010483 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jul 6 23:57:03.010495 systemd[1]: Mounted media.mount - External Media Directory.
Jul 6 23:57:03.010507 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jul 6 23:57:03.010528 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jul 6 23:57:03.010557 systemd-journald[1124]: Collecting audit messages is disabled.
Jul 6 23:57:03.010583 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jul 6 23:57:03.010595 systemd-journald[1124]: Journal started
Jul 6 23:57:03.010617 systemd-journald[1124]: Runtime Journal (/run/log/journal/a0ebef15d948401c8cf38a4bf0e942f4) is 6.0M, max 48.3M, 42.2M free.
Jul 6 23:57:02.791272 systemd[1]: Queued start job for default target multi-user.target.
Jul 6 23:57:02.810346 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jul 6 23:57:02.810851 systemd[1]: systemd-journald.service: Deactivated successfully.
Jul 6 23:57:03.013145 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 6 23:57:03.014064 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jul 6 23:57:03.015464 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 6 23:57:03.016980 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jul 6 23:57:03.017158 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jul 6 23:57:03.018597 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:57:03.018766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:57:03.020395 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:57:03.020578 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:57:03.021908 kernel: ACPI: bus type drm_connector registered
Jul 6 23:57:03.022654 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jul 6 23:57:03.022831 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jul 6 23:57:03.024290 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 6 23:57:03.024470 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jul 6 23:57:03.026298 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:57:03.026471 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:57:03.027989 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 6 23:57:03.029346 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 6 23:57:03.030839 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jul 6 23:57:03.046025 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 6 23:57:03.055965 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jul 6 23:57:03.058332 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jul 6 23:57:03.059542 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jul 6 23:57:03.059636 systemd[1]: Reached target local-fs.target - Local File Systems.
Jul 6 23:57:03.061675 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jul 6 23:57:03.064053 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jul 6 23:57:03.066292 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jul 6 23:57:03.067478 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:57:03.070002 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jul 6 23:57:03.072175 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jul 6 23:57:03.073457 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 6 23:57:03.078456 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jul 6 23:57:03.079778 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jul 6 23:57:03.085029 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 6 23:57:03.087185 systemd-journald[1124]: Time spent on flushing to /var/log/journal/a0ebef15d948401c8cf38a4bf0e942f4 is 13.816ms for 975 entries.
Jul 6 23:57:03.087185 systemd-journald[1124]: System Journal (/var/log/journal/a0ebef15d948401c8cf38a4bf0e942f4) is 8.0M, max 195.6M, 187.6M free.
Jul 6 23:57:03.108944 systemd-journald[1124]: Received client request to flush runtime journal.
Jul 6 23:57:03.089022 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jul 6 23:57:03.095006 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 6 23:57:03.098962 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jul 6 23:57:03.101576 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jul 6 23:57:03.104362 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jul 6 23:57:03.114775 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jul 6 23:57:03.117484 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jul 6 23:57:03.125431 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 6 23:57:03.127837 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jul 6 23:57:03.128936 kernel: loop0: detected capacity change from 0 to 140768
Jul 6 23:57:03.139585 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jul 6 23:57:03.141616 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 6 23:57:03.146452 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Jul 6 23:57:03.146469 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
Jul 6 23:57:03.148375 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jul 6 23:57:03.152269 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 6 23:57:03.158280 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jul 6 23:57:03.156679 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jul 6 23:57:03.166646 udevadm[1180]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Jul 6 23:57:03.171257 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jul 6 23:57:03.172760 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jul 6 23:57:03.186631 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jul 6 23:57:03.189926 kernel: loop1: detected capacity change from 0 to 142488
Jul 6 23:57:03.196022 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 6 23:57:03.213622 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jul 6 23:57:03.214126 systemd-tmpfiles[1186]: ACLs are not supported, ignoring.
Jul 6 23:57:03.220413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 6 23:57:03.232905 kernel: loop2: detected capacity change from 0 to 221472
Jul 6 23:57:03.270922 kernel: loop3: detected capacity change from 0 to 140768
Jul 6 23:57:03.284924 kernel: loop4: detected capacity change from 0 to 142488
Jul 6 23:57:03.296904 kernel: loop5: detected capacity change from 0 to 221472
Jul 6 23:57:03.302718 (sd-merge)[1191]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Jul 6 23:57:03.305305 (sd-merge)[1191]: Merged extensions into '/usr'.
Jul 6 23:57:03.310252 systemd[1]: Reloading requested from client PID 1164 ('systemd-sysext') (unit systemd-sysext.service)...
Jul 6 23:57:03.310266 systemd[1]: Reloading...
Jul 6 23:57:03.364260 zram_generator::config[1223]: No configuration found.
Jul 6 23:57:03.400339 ldconfig[1159]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jul 6 23:57:03.479986 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:57:03.530171 systemd[1]: Reloading finished in 219 ms.
Jul 6 23:57:03.564721 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jul 6 23:57:03.566212 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jul 6 23:57:03.581093 systemd[1]: Starting ensure-sysext.service...
Jul 6 23:57:03.583408 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 6 23:57:03.588201 systemd[1]: Reloading requested from client PID 1254 ('systemctl') (unit ensure-sysext.service)...
Jul 6 23:57:03.588216 systemd[1]: Reloading...
Jul 6 23:57:03.616258 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 6 23:57:03.616638 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jul 6 23:57:03.617631 systemd-tmpfiles[1255]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 6 23:57:03.617983 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Jul 6 23:57:03.618066 systemd-tmpfiles[1255]: ACLs are not supported, ignoring.
Jul 6 23:57:03.621371 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:57:03.621383 systemd-tmpfiles[1255]: Skipping /boot
Jul 6 23:57:03.634977 systemd-tmpfiles[1255]: Detected autofs mount point /boot during canonicalization of boot.
Jul 6 23:57:03.634993 systemd-tmpfiles[1255]: Skipping /boot
Jul 6 23:57:03.661012 zram_generator::config[1284]: No configuration found.
Jul 6 23:57:03.764725 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:57:03.813438 systemd[1]: Reloading finished in 224 ms.
Jul 6 23:57:03.831228 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jul 6 23:57:03.844328 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 6 23:57:03.852746 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:57:03.855160 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jul 6 23:57:03.857547 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jul 6 23:57:03.860705 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 6 23:57:03.866271 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 6 23:57:03.869440 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jul 6 23:57:03.874852 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:57:03.875082 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:57:03.881075 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:57:03.886421 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jul 6 23:57:03.891451 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jul 6 23:57:03.895141 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jul 6 23:57:03.900967 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jul 6 23:57:03.902036 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:57:03.903183 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 6 23:57:03.903445 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jul 6 23:57:03.904213 systemd-udevd[1325]: Using default interface naming scheme 'v255'.
Jul 6 23:57:03.905914 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jul 6 23:57:03.908532 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 6 23:57:03.908705 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jul 6 23:57:03.919400 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 6 23:57:03.919657 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jul 6 23:57:03.925615 augenrules[1345]: No rules
Jul 6 23:57:03.926956 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:57:03.932018 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 6 23:57:03.934553 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jul 6 23:57:03.946376 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 6 23:57:03.946600 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jul 6 23:57:03.954109 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jul 6 23:57:03.957848 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 6 23:57:03.963091 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 6 23:57:03.967104 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 6 23:57:03.968308 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 6 23:57:03.972098 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 6 23:57:03.979001 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 6 23:57:03.979255 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 6 23:57:03.980314 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 6 23:57:03.983602 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 6 23:57:03.986219 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 6 23:57:03.986424 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 6 23:57:03.988416 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 6 23:57:03.988621 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 6 23:57:03.990122 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 6 23:57:03.990321 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 6 23:57:04.000560 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 6 23:57:04.001362 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 6 23:57:04.002628 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 6 23:57:04.009939 systemd[1]: Finished ensure-sysext.service. 
Jul 6 23:57:04.017552 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 6 23:57:04.017626 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 6 23:57:04.031156 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 6 23:57:04.032676 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 6 23:57:04.033111 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 6 23:57:04.043030 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1357) Jul 6 23:57:04.043080 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 6 23:57:04.055211 kernel: ACPI: button: Power Button [PWRF] Jul 6 23:57:04.066430 systemd-resolved[1324]: Positive Trust Anchors: Jul 6 23:57:04.066453 systemd-resolved[1324]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 6 23:57:04.066496 systemd-resolved[1324]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 6 23:57:04.075529 systemd-resolved[1324]: Defaulting to hostname 'linux'. 
Jul 6 23:57:04.077383 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 6 23:57:04.079376 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 6 23:57:04.094987 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 6 23:57:04.098308 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 6 23:57:04.098471 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Jul 6 23:57:04.098649 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 6 23:57:04.097416 systemd-networkd[1383]: lo: Link UP Jul 6 23:57:04.097421 systemd-networkd[1383]: lo: Gained carrier Jul 6 23:57:04.100591 systemd-networkd[1383]: Enumeration completed Jul 6 23:57:04.102683 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:57:04.102930 systemd-networkd[1383]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 6 23:57:04.103053 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 6 23:57:04.104689 systemd[1]: Reached target network.target - Network. Jul 6 23:57:04.106838 systemd-networkd[1383]: eth0: Link UP Jul 6 23:57:04.107068 systemd-networkd[1383]: eth0: Gained carrier Jul 6 23:57:04.107193 systemd-networkd[1383]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 6 23:57:04.109972 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Jul 6 23:57:04.117048 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 6 23:57:04.120588 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 6 23:57:04.123238 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jul 6 23:57:04.124506 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 6 23:57:04.124934 systemd-networkd[1383]: eth0: DHCPv4 address 10.0.0.116/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 6 23:57:04.125564 systemd-timesyncd[1395]: Network configuration changed, trying to establish connection. Jul 6 23:57:04.125728 systemd[1]: Reached target time-set.target - System Time Set. Jul 6 23:57:04.744429 systemd-resolved[1324]: Clock change detected. Flushing caches. Jul 6 23:57:04.744615 systemd-timesyncd[1395]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 6 23:57:04.744673 systemd-timesyncd[1395]: Initial clock synchronization to Sun 2025-07-06 23:57:04.743398 UTC. Jul 6 23:57:04.765881 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 6 23:57:04.783613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:57:04.798537 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 6 23:57:04.798773 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:57:04.848378 kernel: mousedev: PS/2 mouse device common for all mice Jul 6 23:57:04.848686 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 6 23:57:04.859805 kernel: kvm_amd: TSC scaling supported Jul 6 23:57:04.859835 kernel: kvm_amd: Nested Virtualization enabled Jul 6 23:57:04.859859 kernel: kvm_amd: Nested Paging enabled Jul 6 23:57:04.860748 kernel: kvm_amd: LBR virtualization supported Jul 6 23:57:04.860765 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 6 23:57:04.861705 kernel: kvm_amd: Virtual GIF supported Jul 6 23:57:04.883655 kernel: EDAC MC: Ver: 3.0.0 Jul 6 23:57:04.912558 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jul 6 23:57:04.914114 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. 
Jul 6 23:57:04.939476 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 6 23:57:04.948583 lvm[1422]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:57:04.981614 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 6 23:57:04.983082 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 6 23:57:04.984171 systemd[1]: Reached target sysinit.target - System Initialization. Jul 6 23:57:04.985303 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 6 23:57:04.986566 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 6 23:57:04.987960 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 6 23:57:04.989082 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 6 23:57:04.990294 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 6 23:57:04.991496 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 6 23:57:04.991525 systemd[1]: Reached target paths.target - Path Units. Jul 6 23:57:04.992406 systemd[1]: Reached target timers.target - Timer Units. Jul 6 23:57:04.994161 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 6 23:57:04.996722 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 6 23:57:05.003852 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 6 23:57:05.006220 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 6 23:57:05.007741 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 6 23:57:05.008857 systemd[1]: Reached target sockets.target - Socket Units. 
Jul 6 23:57:05.009786 systemd[1]: Reached target basic.target - Basic System. Jul 6 23:57:05.010731 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:57:05.010757 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 6 23:57:05.011767 systemd[1]: Starting containerd.service - containerd container runtime... Jul 6 23:57:05.013817 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 6 23:57:05.016526 lvm[1426]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 6 23:57:05.018413 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 6 23:57:05.020650 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 6 23:57:05.022421 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 6 23:57:05.026471 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 6 23:57:05.027472 jq[1429]: false Jul 6 23:57:05.031615 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 6 23:57:05.034451 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 6 23:57:05.042502 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 6 23:57:05.046170 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 6 23:57:05.046850 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 6 23:57:05.050491 systemd[1]: Starting update-engine.service - Update Engine... 
Jul 6 23:57:05.052566 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 6 23:57:05.053820 extend-filesystems[1430]: Found loop3 Jul 6 23:57:05.054879 extend-filesystems[1430]: Found loop4 Jul 6 23:57:05.055693 extend-filesystems[1430]: Found loop5 Jul 6 23:57:05.057767 extend-filesystems[1430]: Found sr0 Jul 6 23:57:05.057767 extend-filesystems[1430]: Found vda Jul 6 23:57:05.057767 extend-filesystems[1430]: Found vda1 Jul 6 23:57:05.057767 extend-filesystems[1430]: Found vda2 Jul 6 23:57:05.057767 extend-filesystems[1430]: Found vda3 Jul 6 23:57:05.057767 extend-filesystems[1430]: Found usr Jul 6 23:57:05.057767 extend-filesystems[1430]: Found vda4 Jul 6 23:57:05.057767 extend-filesystems[1430]: Found vda6 Jul 6 23:57:05.057767 extend-filesystems[1430]: Found vda7 Jul 6 23:57:05.057767 extend-filesystems[1430]: Found vda9 Jul 6 23:57:05.057767 extend-filesystems[1430]: Checking size of /dev/vda9 Jul 6 23:57:05.057039 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 6 23:57:05.064551 dbus-daemon[1428]: [system] SELinux support is enabled Jul 6 23:57:05.061464 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 6 23:57:05.081827 update_engine[1443]: I20250706 23:57:05.080667 1443 main.cc:92] Flatcar Update Engine starting Jul 6 23:57:05.082014 jq[1445]: true Jul 6 23:57:05.061709 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 6 23:57:05.062073 systemd[1]: motdgen.service: Deactivated successfully. Jul 6 23:57:05.062269 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 6 23:57:05.063809 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 6 23:57:05.064031 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 6 23:57:05.068514 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jul 6 23:57:05.083102 update_engine[1443]: I20250706 23:57:05.082995 1443 update_check_scheduler.cc:74] Next update check in 3m11s Jul 6 23:57:05.089479 jq[1450]: true Jul 6 23:57:05.092681 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 6 23:57:05.092740 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 6 23:57:05.094452 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 6 23:57:05.094478 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 6 23:57:05.097625 systemd[1]: Started update-engine.service - Update Engine. Jul 6 23:57:05.097838 extend-filesystems[1430]: Resized partition /dev/vda9 Jul 6 23:57:05.100303 extend-filesystems[1462]: resize2fs 1.47.1 (20-May-2024) Jul 6 23:57:05.108561 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (1356) Jul 6 23:57:05.108584 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 6 23:57:05.101711 (ntainerd)[1455]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 6 23:57:05.107473 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 6 23:57:05.108443 systemd-logind[1440]: Watching system buttons on /dev/input/event1 (Power Button) Jul 6 23:57:05.108464 systemd-logind[1440]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 6 23:57:05.109779 systemd-logind[1440]: New seat seat0. Jul 6 23:57:05.119483 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 6 23:57:05.152369 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 6 23:57:05.175269 locksmithd[1463]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 6 23:57:05.180661 extend-filesystems[1462]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 6 23:57:05.180661 extend-filesystems[1462]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 6 23:57:05.180661 extend-filesystems[1462]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 6 23:57:05.184499 extend-filesystems[1430]: Resized filesystem in /dev/vda9 Jul 6 23:57:05.185899 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 6 23:57:05.186164 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 6 23:57:05.189191 bash[1478]: Updated "/home/core/.ssh/authorized_keys" Jul 6 23:57:05.190515 sshd_keygen[1444]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 6 23:57:05.190728 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 6 23:57:05.192831 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 6 23:57:05.225029 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 6 23:57:05.237536 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 6 23:57:05.244632 systemd[1]: issuegen.service: Deactivated successfully. Jul 6 23:57:05.244855 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 6 23:57:05.247448 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 6 23:57:05.267533 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 6 23:57:05.275724 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 6 23:57:05.278210 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 6 23:57:05.279445 systemd[1]: Reached target getty.target - Login Prompts. 
Jul 6 23:57:05.390664 containerd[1455]: time="2025-07-06T23:57:05.390513976Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jul 6 23:57:05.415703 containerd[1455]: time="2025-07-06T23:57:05.415648653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:05.417578 containerd[1455]: time="2025-07-06T23:57:05.417540921Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.95-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:05.417578 containerd[1455]: time="2025-07-06T23:57:05.417568543Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 6 23:57:05.417628 containerd[1455]: time="2025-07-06T23:57:05.417582529Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 6 23:57:05.417864 containerd[1455]: time="2025-07-06T23:57:05.417837918Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 6 23:57:05.417864 containerd[1455]: time="2025-07-06T23:57:05.417858497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:05.417991 containerd[1455]: time="2025-07-06T23:57:05.417970777Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:05.417991 containerd[1455]: time="2025-07-06T23:57:05.417988300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Jul 6 23:57:05.418238 containerd[1455]: time="2025-07-06T23:57:05.418208663Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:05.418238 containerd[1455]: time="2025-07-06T23:57:05.418229653Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:05.418288 containerd[1455]: time="2025-07-06T23:57:05.418244170Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:05.418288 containerd[1455]: time="2025-07-06T23:57:05.418256122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:05.418396 containerd[1455]: time="2025-07-06T23:57:05.418377540Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:05.418645 containerd[1455]: time="2025-07-06T23:57:05.418625966Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 6 23:57:05.418775 containerd[1455]: time="2025-07-06T23:57:05.418756911Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 6 23:57:05.418775 containerd[1455]: time="2025-07-06T23:57:05.418773082Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." 
type=io.containerd.content.v1 Jul 6 23:57:05.418923 containerd[1455]: time="2025-07-06T23:57:05.418906251Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 6 23:57:05.418988 containerd[1455]: time="2025-07-06T23:57:05.418972335Z" level=info msg="metadata content store policy set" policy=shared Jul 6 23:57:05.424970 containerd[1455]: time="2025-07-06T23:57:05.424929304Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 6 23:57:05.425026 containerd[1455]: time="2025-07-06T23:57:05.424978667Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 6 23:57:05.425026 containerd[1455]: time="2025-07-06T23:57:05.424997583Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 6 23:57:05.425026 containerd[1455]: time="2025-07-06T23:57:05.425014274Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 6 23:57:05.425083 containerd[1455]: time="2025-07-06T23:57:05.425029152Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 6 23:57:05.425196 containerd[1455]: time="2025-07-06T23:57:05.425169545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 6 23:57:05.425471 containerd[1455]: time="2025-07-06T23:57:05.425434532Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 6 23:57:05.425635 containerd[1455]: time="2025-07-06T23:57:05.425616713Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 6 23:57:05.425665 containerd[1455]: time="2025-07-06T23:57:05.425637562Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 6 23:57:05.425665 containerd[1455]: time="2025-07-06T23:57:05.425651619Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 6 23:57:05.425702 containerd[1455]: time="2025-07-06T23:57:05.425666417Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 6 23:57:05.425702 containerd[1455]: time="2025-07-06T23:57:05.425682407Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 6 23:57:05.425702 containerd[1455]: time="2025-07-06T23:57:05.425695561Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 6 23:57:05.425757 containerd[1455]: time="2025-07-06T23:57:05.425711972Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 6 23:57:05.425757 containerd[1455]: time="2025-07-06T23:57:05.425727752Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 6 23:57:05.425757 containerd[1455]: time="2025-07-06T23:57:05.425741217Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 6 23:57:05.425757 containerd[1455]: time="2025-07-06T23:57:05.425754522Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 6 23:57:05.425828 containerd[1455]: time="2025-07-06T23:57:05.425767426Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 6 23:57:05.425828 containerd[1455]: time="2025-07-06T23:57:05.425788746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Jul 6 23:57:05.425828 containerd[1455]: time="2025-07-06T23:57:05.425804105Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.425828 containerd[1455]: time="2025-07-06T23:57:05.425824172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.425917 containerd[1455]: time="2025-07-06T23:57:05.425856303Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.425917 containerd[1455]: time="2025-07-06T23:57:05.425870750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.425917 containerd[1455]: time="2025-07-06T23:57:05.425884816Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.425917 containerd[1455]: time="2025-07-06T23:57:05.425909562Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426072518Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426106912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426140996Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426163408Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426184538Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." 
type=io.containerd.grpc.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426206219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426264718Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426296308Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426334810Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426354847Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426411874Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426440117Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jul 6 23:57:05.426554 containerd[1455]: time="2025-07-06T23:57:05.426460616Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 6 23:57:05.426798 containerd[1455]: time="2025-07-06T23:57:05.426485212Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jul 6 23:57:05.426798 containerd[1455]: time="2025-07-06T23:57:05.426499469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 Jul 6 23:57:05.426798 containerd[1455]: time="2025-07-06T23:57:05.426515318Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 6 23:57:05.426798 containerd[1455]: time="2025-07-06T23:57:05.426533112Z" level=info msg="NRI interface is disabled by configuration." Jul 6 23:57:05.426798 containerd[1455]: time="2025-07-06T23:57:05.426550234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jul 6 23:57:05.427313 containerd[1455]: time="2025-07-06T23:57:05.426976904Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 6 23:57:05.427528 containerd[1455]: time="2025-07-06T23:57:05.427343130Z" level=info msg="Connect containerd service"
Jul 6 23:57:05.427528 containerd[1455]: time="2025-07-06T23:57:05.427429442Z" level=info msg="using legacy CRI server"
Jul 6 23:57:05.427528 containerd[1455]: time="2025-07-06T23:57:05.427463696Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jul 6 23:57:05.427689 containerd[1455]: time="2025-07-06T23:57:05.427648122Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 6 23:57:05.428855 containerd[1455]: time="2025-07-06T23:57:05.428817294Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:57:05.429078 containerd[1455]: time="2025-07-06T23:57:05.429012370Z" level=info msg="Start subscribing containerd event"
Jul 6 23:57:05.429121 containerd[1455]: time="2025-07-06T23:57:05.429111456Z" level=info msg="Start recovering state"
Jul 6 23:57:05.429295 containerd[1455]: time="2025-07-06T23:57:05.429263862Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 6 23:57:05.429348 containerd[1455]: time="2025-07-06T23:57:05.429264603Z" level=info msg="Start event monitor"
Jul 6 23:57:05.429348 containerd[1455]: time="2025-07-06T23:57:05.429338031Z" level=info msg="Start snapshots syncer"
Jul 6 23:57:05.429385 containerd[1455]: time="2025-07-06T23:57:05.429350033Z" level=info msg="Start cni network conf syncer for default"
Jul 6 23:57:05.429385 containerd[1455]: time="2025-07-06T23:57:05.429359701Z" level=info msg="Start streaming server"
Jul 6 23:57:05.429421 containerd[1455]: time="2025-07-06T23:57:05.429354442Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 6 23:57:05.429571 systemd[1]: Started containerd.service - containerd container runtime.
Jul 6 23:57:05.429923 containerd[1455]: time="2025-07-06T23:57:05.429890346Z" level=info msg="containerd successfully booted in 0.040428s"
Jul 6 23:57:06.656605 systemd-networkd[1383]: eth0: Gained IPv6LL
Jul 6 23:57:06.660686 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jul 6 23:57:06.662473 systemd[1]: Reached target network-online.target - Network is Online.
Jul 6 23:57:06.674544 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 6 23:57:06.677251 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:57:06.679587 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jul 6 23:57:06.752159 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 6 23:57:06.752427 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 6 23:57:06.754075 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jul 6 23:57:06.758213 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jul 6 23:57:08.343589 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:08.345532 systemd[1]: Reached target multi-user.target - Multi-User System.
Jul 6 23:57:08.348281 systemd[1]: Startup finished in 1.003s (kernel) + 5.537s (initrd) + 5.519s (userspace) = 12.060s.
Jul 6 23:57:08.349565 (kubelet)[1533]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jul 6 23:57:09.089017 kubelet[1533]: E0706 23:57:09.088929 1533 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 6 23:57:09.093505 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 6 23:57:09.093741 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 6 23:57:09.094137 systemd[1]: kubelet.service: Consumed 2.225s CPU time.
Jul 6 23:57:10.749635 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jul 6 23:57:10.750869 systemd[1]: Started sshd@0-10.0.0.116:22-10.0.0.1:47864.service - OpenSSH per-connection server daemon (10.0.0.1:47864).
Jul 6 23:57:10.788299 sshd[1546]: Accepted publickey for core from 10.0.0.1 port 47864 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:57:10.790362 sshd[1546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:10.799262 systemd-logind[1440]: New session 1 of user core.
Jul 6 23:57:10.800576 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jul 6 23:57:10.817632 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jul 6 23:57:10.829642 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jul 6 23:57:10.832799 systemd[1]: Starting user@500.service - User Manager for UID 500...
Jul 6 23:57:10.841137 (systemd)[1550]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 6 23:57:10.944229 systemd[1550]: Queued start job for default target default.target.
Jul 6 23:57:10.955607 systemd[1550]: Created slice app.slice - User Application Slice.
Jul 6 23:57:10.955632 systemd[1550]: Reached target paths.target - Paths.
Jul 6 23:57:10.955646 systemd[1550]: Reached target timers.target - Timers.
Jul 6 23:57:10.957237 systemd[1550]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jul 6 23:57:10.968630 systemd[1550]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jul 6 23:57:10.968767 systemd[1550]: Reached target sockets.target - Sockets.
Jul 6 23:57:10.968785 systemd[1550]: Reached target basic.target - Basic System.
Jul 6 23:57:10.968821 systemd[1550]: Reached target default.target - Main User Target.
Jul 6 23:57:10.968852 systemd[1550]: Startup finished in 121ms.
Jul 6 23:57:10.969271 systemd[1]: Started user@500.service - User Manager for UID 500.
Jul 6 23:57:10.970782 systemd[1]: Started session-1.scope - Session 1 of User core.
Jul 6 23:57:11.031671 systemd[1]: Started sshd@1-10.0.0.116:22-10.0.0.1:47878.service - OpenSSH per-connection server daemon (10.0.0.1:47878).
Jul 6 23:57:11.061859 sshd[1561]: Accepted publickey for core from 10.0.0.1 port 47878 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:57:11.063353 sshd[1561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:11.067103 systemd-logind[1440]: New session 2 of user core.
Jul 6 23:57:11.076431 systemd[1]: Started session-2.scope - Session 2 of User core.
Jul 6 23:57:11.129733 sshd[1561]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:11.146883 systemd[1]: sshd@1-10.0.0.116:22-10.0.0.1:47878.service: Deactivated successfully.
Jul 6 23:57:11.148575 systemd[1]: session-2.scope: Deactivated successfully.
Jul 6 23:57:11.149774 systemd-logind[1440]: Session 2 logged out. Waiting for processes to exit.
Jul 6 23:57:11.158615 systemd[1]: Started sshd@2-10.0.0.116:22-10.0.0.1:47882.service - OpenSSH per-connection server daemon (10.0.0.1:47882).
Jul 6 23:57:11.159412 systemd-logind[1440]: Removed session 2.
Jul 6 23:57:11.184520 sshd[1568]: Accepted publickey for core from 10.0.0.1 port 47882 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:57:11.185992 sshd[1568]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:11.189639 systemd-logind[1440]: New session 3 of user core.
Jul 6 23:57:11.201415 systemd[1]: Started session-3.scope - Session 3 of User core.
Jul 6 23:57:11.250680 sshd[1568]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:11.266886 systemd[1]: sshd@2-10.0.0.116:22-10.0.0.1:47882.service: Deactivated successfully.
Jul 6 23:57:11.268378 systemd[1]: session-3.scope: Deactivated successfully.
Jul 6 23:57:11.269626 systemd-logind[1440]: Session 3 logged out. Waiting for processes to exit.
Jul 6 23:57:11.270786 systemd[1]: Started sshd@3-10.0.0.116:22-10.0.0.1:47894.service - OpenSSH per-connection server daemon (10.0.0.1:47894).
Jul 6 23:57:11.271625 systemd-logind[1440]: Removed session 3.
Jul 6 23:57:11.300749 sshd[1575]: Accepted publickey for core from 10.0.0.1 port 47894 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:57:11.302001 sshd[1575]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:11.305621 systemd-logind[1440]: New session 4 of user core.
Jul 6 23:57:11.314451 systemd[1]: Started session-4.scope - Session 4 of User core.
Jul 6 23:57:11.370102 sshd[1575]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:11.394165 systemd[1]: sshd@3-10.0.0.116:22-10.0.0.1:47894.service: Deactivated successfully.
Jul 6 23:57:11.395891 systemd[1]: session-4.scope: Deactivated successfully.
Jul 6 23:57:11.397499 systemd-logind[1440]: Session 4 logged out. Waiting for processes to exit.
Jul 6 23:57:11.407839 systemd[1]: Started sshd@4-10.0.0.116:22-10.0.0.1:47904.service - OpenSSH per-connection server daemon (10.0.0.1:47904).
Jul 6 23:57:11.409069 systemd-logind[1440]: Removed session 4.
Jul 6 23:57:11.433780 sshd[1582]: Accepted publickey for core from 10.0.0.1 port 47904 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:57:11.435253 sshd[1582]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:11.439068 systemd-logind[1440]: New session 5 of user core.
Jul 6 23:57:11.448443 systemd[1]: Started session-5.scope - Session 5 of User core.
Jul 6 23:57:11.505973 sudo[1585]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jul 6 23:57:11.506302 sudo[1585]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:57:11.529779 sudo[1585]: pam_unix(sudo:session): session closed for user root
Jul 6 23:57:11.531798 sshd[1582]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:11.539095 systemd[1]: sshd@4-10.0.0.116:22-10.0.0.1:47904.service: Deactivated successfully.
Jul 6 23:57:11.540817 systemd[1]: session-5.scope: Deactivated successfully.
Jul 6 23:57:11.542313 systemd-logind[1440]: Session 5 logged out. Waiting for processes to exit.
Jul 6 23:57:11.543776 systemd[1]: Started sshd@5-10.0.0.116:22-10.0.0.1:47912.service - OpenSSH per-connection server daemon (10.0.0.1:47912).
Jul 6 23:57:11.544599 systemd-logind[1440]: Removed session 5.
Jul 6 23:57:11.574523 sshd[1590]: Accepted publickey for core from 10.0.0.1 port 47912 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:57:11.576131 sshd[1590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:11.579947 systemd-logind[1440]: New session 6 of user core.
Jul 6 23:57:11.591436 systemd[1]: Started session-6.scope - Session 6 of User core.
Jul 6 23:57:11.644449 sudo[1594]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jul 6 23:57:11.644790 sudo[1594]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:57:11.648430 sudo[1594]: pam_unix(sudo:session): session closed for user root
Jul 6 23:57:11.654604 sudo[1593]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Jul 6 23:57:11.654947 sudo[1593]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:57:11.676581 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Jul 6 23:57:11.678211 auditctl[1597]: No rules
Jul 6 23:57:11.679455 systemd[1]: audit-rules.service: Deactivated successfully.
Jul 6 23:57:11.679707 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Jul 6 23:57:11.681523 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jul 6 23:57:11.711414 augenrules[1615]: No rules
Jul 6 23:57:11.713263 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jul 6 23:57:11.714560 sudo[1593]: pam_unix(sudo:session): session closed for user root
Jul 6 23:57:11.716716 sshd[1590]: pam_unix(sshd:session): session closed for user core
Jul 6 23:57:11.731086 systemd[1]: sshd@5-10.0.0.116:22-10.0.0.1:47912.service: Deactivated successfully.
Jul 6 23:57:11.732778 systemd[1]: session-6.scope: Deactivated successfully.
Jul 6 23:57:11.734298 systemd-logind[1440]: Session 6 logged out. Waiting for processes to exit.
Jul 6 23:57:11.748665 systemd[1]: Started sshd@6-10.0.0.116:22-10.0.0.1:47918.service - OpenSSH per-connection server daemon (10.0.0.1:47918).
Jul 6 23:57:11.749707 systemd-logind[1440]: Removed session 6.
Jul 6 23:57:11.775024 sshd[1623]: Accepted publickey for core from 10.0.0.1 port 47918 ssh2: RSA SHA256:Lb9W8z7TDUhiZk7PaXs7DOgToeXIbwhAkjEsqIc7XbQ
Jul 6 23:57:11.776628 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 6 23:57:11.781037 systemd-logind[1440]: New session 7 of user core.
Jul 6 23:57:11.797475 systemd[1]: Started session-7.scope - Session 7 of User core.
Jul 6 23:57:11.852999 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 6 23:57:11.853366 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jul 6 23:57:11.878683 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jul 6 23:57:11.901083 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jul 6 23:57:11.901375 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jul 6 23:57:12.532863 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:12.533352 systemd[1]: kubelet.service: Consumed 2.225s CPU time.
Jul 6 23:57:12.551690 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:57:12.579785 systemd[1]: Reloading requested from client PID 1668 ('systemctl') (unit session-7.scope)...
Jul 6 23:57:12.579805 systemd[1]: Reloading...
Jul 6 23:57:12.673466 zram_generator::config[1709]: No configuration found.
Jul 6 23:57:14.190578 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 6 23:57:14.266905 systemd[1]: Reloading finished in 1686 ms.
Jul 6 23:57:14.318397 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jul 6 23:57:14.318494 systemd[1]: kubelet.service: Failed with result 'signal'.
Jul 6 23:57:14.318758 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:14.320300 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jul 6 23:57:14.486310 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jul 6 23:57:14.491805 (kubelet)[1754]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jul 6 23:57:14.604146 kubelet[1754]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:57:14.604146 kubelet[1754]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Jul 6 23:57:14.604146 kubelet[1754]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jul 6 23:57:14.604603 kubelet[1754]: I0706 23:57:14.604332 1754 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jul 6 23:57:15.004462 kubelet[1754]: I0706 23:57:15.004416 1754 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Jul 6 23:57:15.004462 kubelet[1754]: I0706 23:57:15.004449 1754 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jul 6 23:57:15.004838 kubelet[1754]: I0706 23:57:15.004818 1754 server.go:934] "Client rotation is on, will bootstrap in background"
Jul 6 23:57:15.026520 kubelet[1754]: I0706 23:57:15.026446 1754 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jul 6 23:57:15.037479 kubelet[1754]: E0706 23:57:15.037413 1754 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Jul 6 23:57:15.037479 kubelet[1754]: I0706 23:57:15.037471 1754 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Jul 6 23:57:15.043829 kubelet[1754]: I0706 23:57:15.043801 1754 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jul 6 23:57:15.044597 kubelet[1754]: I0706 23:57:15.044565 1754 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jul 6 23:57:15.044796 kubelet[1754]: I0706 23:57:15.044746 1754 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jul 6 23:57:15.044974 kubelet[1754]: I0706 23:57:15.044784 1754 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.0.0.116","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jul 6 23:57:15.045088 kubelet[1754]: I0706 23:57:15.044991 1754 topology_manager.go:138] "Creating topology manager with none policy"
Jul 6 23:57:15.045088 kubelet[1754]: I0706 23:57:15.045004 1754 container_manager_linux.go:300] "Creating device plugin manager"
Jul 6 23:57:15.045205 kubelet[1754]: I0706 23:57:15.045178 1754 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:57:15.048380 kubelet[1754]: I0706 23:57:15.047679 1754 kubelet.go:408] "Attempting to sync node with API server"
Jul 6 23:57:15.048380 kubelet[1754]: I0706 23:57:15.047705 1754 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Jul 6 23:57:15.048380 kubelet[1754]: I0706 23:57:15.047754 1754 kubelet.go:314] "Adding apiserver pod source"
Jul 6 23:57:15.048380 kubelet[1754]: I0706 23:57:15.047786 1754 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jul 6 23:57:15.048380 kubelet[1754]: E0706 23:57:15.048268 1754 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:57:15.048380 kubelet[1754]: E0706 23:57:15.048280 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:57:15.052201 kubelet[1754]: I0706 23:57:15.052168 1754 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Jul 6 23:57:15.052741 kubelet[1754]: I0706 23:57:15.052702 1754 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jul 6 23:57:15.053404 kubelet[1754]: W0706 23:57:15.053377 1754 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jul 6 23:57:15.056021 kubelet[1754]: I0706 23:57:15.055658 1754 server.go:1274] "Started kubelet"
Jul 6 23:57:15.056552 kubelet[1754]: I0706 23:57:15.055999 1754 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jul 6 23:57:15.056552 kubelet[1754]: I0706 23:57:15.056527 1754 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jul 6 23:57:15.057814 kubelet[1754]: I0706 23:57:15.056896 1754 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jul 6 23:57:15.057814 kubelet[1754]: I0706 23:57:15.057687 1754 server.go:449] "Adding debug handlers to kubelet server"
Jul 6 23:57:15.057814 kubelet[1754]: I0706 23:57:15.057725 1754 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jul 6 23:57:15.057814 kubelet[1754]: W0706 23:57:15.057413 1754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.0.0.116" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Jul 6 23:57:15.057814 kubelet[1754]: E0706 23:57:15.057790 1754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.0.0.116\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jul 6 23:57:15.058005 kubelet[1754]: W0706 23:57:15.057955 1754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Jul 6 23:57:15.058041 kubelet[1754]: E0706 23:57:15.058003 1754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Jul 6 23:57:15.058856 kubelet[1754]: I0706 23:57:15.058563 1754 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jul 6 23:57:15.060990 kubelet[1754]: I0706 23:57:15.060967 1754 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jul 6 23:57:15.061833 kubelet[1754]: I0706 23:57:15.061075 1754 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Jul 6 23:57:15.061833 kubelet[1754]: I0706 23:57:15.061168 1754 reconciler.go:26] "Reconciler: start to sync state"
Jul 6 23:57:15.061833 kubelet[1754]: E0706 23:57:15.061394 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found"
Jul 6 23:57:15.062205 kubelet[1754]: I0706 23:57:15.062167 1754 factory.go:221] Registration of the systemd container factory successfully
Jul 6 23:57:15.062265 kubelet[1754]: I0706 23:57:15.062248 1754 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jul 6 23:57:15.062311 kubelet[1754]: W0706 23:57:15.062276 1754 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Jul 6 23:57:15.063585 kubelet[1754]: E0706 23:57:15.062522 1754 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Jul 6 23:57:15.063585 kubelet[1754]: E0706 23:57:15.063221 1754 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jul 6 23:57:15.063754 kubelet[1754]: I0706 23:57:15.063646 1754 factory.go:221] Registration of the containerd container factory successfully
Jul 6 23:57:15.067438 kubelet[1754]: E0706 23:57:15.067377 1754 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.116\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Jul 6 23:57:15.076944 kubelet[1754]: E0706 23:57:15.075734 1754 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.116.184fcede7d985b9c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.116,UID:10.0.0.116,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.116,},FirstTimestamp:2025-07-06 23:57:15.05561078 +0000 UTC m=+0.557589271,LastTimestamp:2025-07-06 23:57:15.05561078 +0000 UTC m=+0.557589271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.116,}"
Jul 6 23:57:15.078581 kubelet[1754]: I0706 23:57:15.078562 1754 cpu_manager.go:214] "Starting CPU manager" policy="none"
Jul 6 23:57:15.078581 kubelet[1754]: I0706 23:57:15.078577 1754 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Jul 6 23:57:15.078663 kubelet[1754]: I0706 23:57:15.078597 1754 state_mem.go:36] "Initialized new in-memory state store"
Jul 6 23:57:15.080543 kubelet[1754]: E0706 23:57:15.080472 1754 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.116.184fcede7e0c37fd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.116,UID:10.0.0.116,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.116,},FirstTimestamp:2025-07-06 23:57:15.063203837 +0000 UTC m=+0.565182328,LastTimestamp:2025-07-06 23:57:15.063203837 +0000 UTC m=+0.565182328,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.116,}"
Jul 6 23:57:15.083620 kubelet[1754]: E0706 23:57:15.083562 1754 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.116.184fcede7eec323d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.116,UID:10.0.0.116,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.116 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.116,},FirstTimestamp:2025-07-06 23:57:15.077882429 +0000 UTC m=+0.579860920,LastTimestamp:2025-07-06 23:57:15.077882429 +0000 UTC m=+0.579860920,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.116,}"
Jul 6 23:57:15.086936 kubelet[1754]: E0706 23:57:15.086858 1754 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.116.184fcede7eec5ab4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.116,UID:10.0.0.116,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.116 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.116,},FirstTimestamp:2025-07-06 23:57:15.077892788 +0000 UTC m=+0.579871280,LastTimestamp:2025-07-06 23:57:15.077892788 +0000 UTC m=+0.579871280,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.116,}"
Jul 6 23:57:15.092389 kubelet[1754]: E0706 23:57:15.090548 1754 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.116.184fcede7eec653c default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.116,UID:10.0.0.116,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.116 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.116,},FirstTimestamp:2025-07-06 23:57:15.077895484 +0000 UTC m=+0.579873975,LastTimestamp:2025-07-06 23:57:15.077895484 +0000 UTC m=+0.579873975,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.116,}"
Jul 6 23:57:15.161486 kubelet[1754]: E0706 23:57:15.161443 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found"
Jul 6 23:57:15.262007 kubelet[1754]: E0706 23:57:15.261866 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found"
Jul 6 23:57:15.272488 kubelet[1754]: E0706 23:57:15.272431 1754 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.116\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms"
Jul 6 23:57:15.362875 kubelet[1754]: E0706 23:57:15.362825 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found"
Jul 6 23:57:15.463387 kubelet[1754]: E0706 23:57:15.463335 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found"
Jul 6 23:57:15.564459 kubelet[1754]: E0706 23:57:15.564253 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found"
Jul 6 23:57:15.665483 kubelet[1754]: E0706 23:57:15.665415 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found"
Jul 6 23:57:15.766465 kubelet[1754]: E0706 23:57:15.766386 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found"
Jul 6 23:57:15.866981 kubelet[1754]: E0706 23:57:15.866843 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found"
Jul 6 23:57:15.885994 kubelet[1754]: E0706 23:57:15.885943 1754 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.116\" not found" node="10.0.0.116"
Jul 6 23:57:15.895524 kubelet[1754]: I0706 23:57:15.895468 1754 policy_none.go:49] "None policy: Start"
Jul 6 23:57:15.896524 kubelet[1754]: I0706 23:57:15.896497 1754 memory_manager.go:170] "Starting memorymanager" policy="None"
Jul 6 23:57:15.896591 kubelet[1754]: I0706 23:57:15.896529 1754 state_mem.go:35] "Initializing new in-memory state store"
Jul 6 23:57:15.905958 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jul 6 23:57:15.917354 kubelet[1754]: I0706 23:57:15.917265 1754 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 6 23:57:15.919000 kubelet[1754]: I0706 23:57:15.918938 1754 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 6 23:57:15.919000 kubelet[1754]: I0706 23:57:15.918998 1754 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 6 23:57:15.919302 kubelet[1754]: I0706 23:57:15.919035 1754 kubelet.go:2321] "Starting kubelet main sync loop" Jul 6 23:57:15.919302 kubelet[1754]: E0706 23:57:15.919175 1754 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 6 23:57:15.920965 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 6 23:57:15.925292 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jul 6 23:57:15.941408 kubelet[1754]: I0706 23:57:15.941361 1754 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 6 23:57:15.941595 kubelet[1754]: I0706 23:57:15.941576 1754 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 6 23:57:15.941654 kubelet[1754]: I0706 23:57:15.941598 1754 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 6 23:57:15.942419 kubelet[1754]: I0706 23:57:15.941828 1754 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 6 23:57:15.942822 kubelet[1754]: E0706 23:57:15.942788 1754 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.116\" not found" Jul 6 23:57:16.007006 kubelet[1754]: I0706 23:57:16.006957 1754 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jul 6 23:57:16.007160 kubelet[1754]: W0706 23:57:16.007137 1754 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jul 6 23:57:16.043309 kubelet[1754]: I0706 23:57:16.043292 1754 kubelet_node_status.go:72] "Attempting to register node" node="10.0.0.116" Jul 6 23:57:16.048522 kubelet[1754]: E0706 23:57:16.048475 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:16.050939 kubelet[1754]: I0706 23:57:16.050897 1754 kubelet_node_status.go:75] "Successfully registered node" node="10.0.0.116" Jul 6 23:57:16.050939 kubelet[1754]: E0706 23:57:16.050929 1754 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.0.0.116\": node \"10.0.0.116\" not found" Jul 6 23:57:16.058793 kubelet[1754]: E0706 
23:57:16.058746 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found" Jul 6 23:57:16.159432 kubelet[1754]: E0706 23:57:16.159228 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found" Jul 6 23:57:16.260687 kubelet[1754]: E0706 23:57:16.260633 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found" Jul 6 23:57:16.361612 kubelet[1754]: E0706 23:57:16.361541 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found" Jul 6 23:57:16.461807 kubelet[1754]: E0706 23:57:16.461674 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found" Jul 6 23:57:16.517131 sudo[1626]: pam_unix(sudo:session): session closed for user root Jul 6 23:57:16.518824 sshd[1623]: pam_unix(sshd:session): session closed for user core Jul 6 23:57:16.522998 systemd[1]: sshd@6-10.0.0.116:22-10.0.0.1:47918.service: Deactivated successfully. Jul 6 23:57:16.524966 systemd[1]: session-7.scope: Deactivated successfully. Jul 6 23:57:16.525732 systemd-logind[1440]: Session 7 logged out. Waiting for processes to exit. Jul 6 23:57:16.526613 systemd-logind[1440]: Removed session 7. Jul 6 23:57:16.561894 kubelet[1754]: E0706 23:57:16.561854 1754 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.0.0.116\" not found" Jul 6 23:57:16.663079 kubelet[1754]: I0706 23:57:16.663043 1754 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jul 6 23:57:16.663497 containerd[1455]: time="2025-07-06T23:57:16.663442404Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jul 6 23:57:16.663867 kubelet[1754]: I0706 23:57:16.663688 1754 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jul 6 23:57:17.048931 kubelet[1754]: E0706 23:57:17.048833 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:17.048931 kubelet[1754]: I0706 23:57:17.048867 1754 apiserver.go:52] "Watching apiserver" Jul 6 23:57:17.060078 systemd[1]: Created slice kubepods-burstable-pod0b29535f_cbd4_4196_a137_0c48216fd9b6.slice - libcontainer container kubepods-burstable-pod0b29535f_cbd4_4196_a137_0c48216fd9b6.slice. Jul 6 23:57:17.061617 kubelet[1754]: I0706 23:57:17.061580 1754 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jul 6 23:57:17.073023 kubelet[1754]: I0706 23:57:17.072980 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/57a6315a-95ad-47ac-8c9e-031fcffb36f2-xtables-lock\") pod \"kube-proxy-kjttg\" (UID: \"57a6315a-95ad-47ac-8c9e-031fcffb36f2\") " pod="kube-system/kube-proxy-kjttg" Jul 6 23:57:17.073023 kubelet[1754]: I0706 23:57:17.073019 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-run\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073171 kubelet[1754]: I0706 23:57:17.073046 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-lib-modules\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073171 kubelet[1754]: I0706 23:57:17.073067 1754 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-xtables-lock\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073171 kubelet[1754]: I0706 23:57:17.073089 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-config-path\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073171 kubelet[1754]: I0706 23:57:17.073151 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-host-proc-sys-net\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073291 kubelet[1754]: I0706 23:57:17.073180 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kkrk\" (UniqueName: \"kubernetes.io/projected/0b29535f-cbd4-4196-a137-0c48216fd9b6-kube-api-access-2kkrk\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073291 kubelet[1754]: I0706 23:57:17.073200 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/57a6315a-95ad-47ac-8c9e-031fcffb36f2-kube-proxy\") pod \"kube-proxy-kjttg\" (UID: \"57a6315a-95ad-47ac-8c9e-031fcffb36f2\") " pod="kube-system/kube-proxy-kjttg" Jul 6 23:57:17.073291 kubelet[1754]: I0706 23:57:17.073237 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-hostproc\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073291 kubelet[1754]: I0706 23:57:17.073251 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-cgroup\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073291 kubelet[1754]: I0706 23:57:17.073266 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cni-path\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073291 kubelet[1754]: I0706 23:57:17.073282 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-etc-cni-netd\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073477 kubelet[1754]: I0706 23:57:17.073333 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-host-proc-sys-kernel\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073477 kubelet[1754]: I0706 23:57:17.073362 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b29535f-cbd4-4196-a137-0c48216fd9b6-hubble-tls\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") 
" pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073477 kubelet[1754]: I0706 23:57:17.073377 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/57a6315a-95ad-47ac-8c9e-031fcffb36f2-lib-modules\") pod \"kube-proxy-kjttg\" (UID: \"57a6315a-95ad-47ac-8c9e-031fcffb36f2\") " pod="kube-system/kube-proxy-kjttg" Jul 6 23:57:17.073477 kubelet[1754]: I0706 23:57:17.073394 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-bpf-maps\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.073477 kubelet[1754]: I0706 23:57:17.073409 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25gft\" (UniqueName: \"kubernetes.io/projected/57a6315a-95ad-47ac-8c9e-031fcffb36f2-kube-api-access-25gft\") pod \"kube-proxy-kjttg\" (UID: \"57a6315a-95ad-47ac-8c9e-031fcffb36f2\") " pod="kube-system/kube-proxy-kjttg" Jul 6 23:57:17.073643 kubelet[1754]: I0706 23:57:17.073429 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b29535f-cbd4-4196-a137-0c48216fd9b6-clustermesh-secrets\") pod \"cilium-t8smz\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") " pod="kube-system/cilium-t8smz" Jul 6 23:57:17.079847 systemd[1]: Created slice kubepods-besteffort-pod57a6315a_95ad_47ac_8c9e_031fcffb36f2.slice - libcontainer container kubepods-besteffort-pod57a6315a_95ad_47ac_8c9e_031fcffb36f2.slice. 
Jul 6 23:57:17.378655 kubelet[1754]: E0706 23:57:17.378491 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:17.379425 containerd[1455]: time="2025-07-06T23:57:17.379380147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8smz,Uid:0b29535f-cbd4-4196-a137-0c48216fd9b6,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:17.394896 kubelet[1754]: E0706 23:57:17.394865 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:17.395489 containerd[1455]: time="2025-07-06T23:57:17.395445670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjttg,Uid:57a6315a-95ad-47ac-8c9e-031fcffb36f2,Namespace:kube-system,Attempt:0,}" Jul 6 23:57:17.981591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1282181889.mount: Deactivated successfully. 
Jul 6 23:57:17.988931 containerd[1455]: time="2025-07-06T23:57:17.988880840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:17.989573 containerd[1455]: time="2025-07-06T23:57:17.989528364Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 6 23:57:17.990646 containerd[1455]: time="2025-07-06T23:57:17.990615493Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:17.991553 containerd[1455]: time="2025-07-06T23:57:17.991508497Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:17.993791 containerd[1455]: time="2025-07-06T23:57:17.993745251Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jul 6 23:57:17.997171 containerd[1455]: time="2025-07-06T23:57:17.997143122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 6 23:57:17.997961 containerd[1455]: time="2025-07-06T23:57:17.997927282Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 602.40052ms" Jul 6 23:57:17.998818 containerd[1455]: 
time="2025-07-06T23:57:17.998783408Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 619.270431ms" Jul 6 23:57:18.049227 kubelet[1754]: E0706 23:57:18.049155 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:18.186441 containerd[1455]: time="2025-07-06T23:57:18.186293579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:18.186441 containerd[1455]: time="2025-07-06T23:57:18.186364783Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:18.186441 containerd[1455]: time="2025-07-06T23:57:18.186378679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:18.186805 containerd[1455]: time="2025-07-06T23:57:18.186467666Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:18.191632 containerd[1455]: time="2025-07-06T23:57:18.191474413Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:18.191632 containerd[1455]: time="2025-07-06T23:57:18.191597965Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:18.192379 containerd[1455]: time="2025-07-06T23:57:18.192292697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:18.192586 containerd[1455]: time="2025-07-06T23:57:18.192514073Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:18.311624 systemd[1]: Started cri-containerd-35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae.scope - libcontainer container 35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae. Jul 6 23:57:18.318592 systemd[1]: Started cri-containerd-0ab62ed6d1c126704418a4d8934c512b66a3f2ed4316d563c73f4593d9628a78.scope - libcontainer container 0ab62ed6d1c126704418a4d8934c512b66a3f2ed4316d563c73f4593d9628a78. Jul 6 23:57:18.337403 containerd[1455]: time="2025-07-06T23:57:18.337366982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-t8smz,Uid:0b29535f-cbd4-4196-a137-0c48216fd9b6,Namespace:kube-system,Attempt:0,} returns sandbox id \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\"" Jul 6 23:57:18.338880 kubelet[1754]: E0706 23:57:18.338838 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:18.344180 containerd[1455]: time="2025-07-06T23:57:18.344014426Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jul 6 23:57:18.345637 containerd[1455]: time="2025-07-06T23:57:18.345595200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kjttg,Uid:57a6315a-95ad-47ac-8c9e-031fcffb36f2,Namespace:kube-system,Attempt:0,} returns sandbox id \"0ab62ed6d1c126704418a4d8934c512b66a3f2ed4316d563c73f4593d9628a78\"" Jul 6 23:57:18.346260 kubelet[1754]: E0706 23:57:18.346225 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 
1.0.0.1 8.8.8.8" Jul 6 23:57:19.049493 kubelet[1754]: E0706 23:57:19.049447 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:20.050496 kubelet[1754]: E0706 23:57:20.050426 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:21.050660 kubelet[1754]: E0706 23:57:21.050609 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:22.051163 kubelet[1754]: E0706 23:57:22.051079 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:23.051342 kubelet[1754]: E0706 23:57:23.051277 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:23.751879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1896797626.mount: Deactivated successfully. 
Jul 6 23:57:24.052529 kubelet[1754]: E0706 23:57:24.052344 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:25.424306 kubelet[1754]: E0706 23:57:25.424233 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:26.425226 kubelet[1754]: E0706 23:57:26.425149 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:27.092510 containerd[1455]: time="2025-07-06T23:57:27.092450521Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:27.093262 containerd[1455]: time="2025-07-06T23:57:27.093197081Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jul 6 23:57:27.094471 containerd[1455]: time="2025-07-06T23:57:27.094398263Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:27.096010 containerd[1455]: time="2025-07-06T23:57:27.095951936Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 8.751904889s" Jul 6 23:57:27.096010 containerd[1455]: time="2025-07-06T23:57:27.095983956Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" 
returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jul 6 23:57:27.097063 containerd[1455]: time="2025-07-06T23:57:27.096908179Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\"" Jul 6 23:57:27.098283 containerd[1455]: time="2025-07-06T23:57:27.098259623Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:57:27.114559 containerd[1455]: time="2025-07-06T23:57:27.114514170Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\"" Jul 6 23:57:27.115197 containerd[1455]: time="2025-07-06T23:57:27.115173577Z" level=info msg="StartContainer for \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\"" Jul 6 23:57:27.156476 systemd[1]: Started cri-containerd-95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266.scope - libcontainer container 95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266. Jul 6 23:57:27.185539 containerd[1455]: time="2025-07-06T23:57:27.185492665Z" level=info msg="StartContainer for \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\" returns successfully" Jul 6 23:57:27.196655 systemd[1]: cri-containerd-95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266.scope: Deactivated successfully. 
Jul 6 23:57:27.426415 kubelet[1754]: E0706 23:57:27.426367 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:27.659940 containerd[1455]: time="2025-07-06T23:57:27.659844154Z" level=info msg="shim disconnected" id=95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266 namespace=k8s.io Jul 6 23:57:27.659940 containerd[1455]: time="2025-07-06T23:57:27.659938882Z" level=warning msg="cleaning up after shim disconnected" id=95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266 namespace=k8s.io Jul 6 23:57:27.659940 containerd[1455]: time="2025-07-06T23:57:27.659948991Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:57:27.945549 kubelet[1754]: E0706 23:57:27.945501 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:27.948473 containerd[1455]: time="2025-07-06T23:57:27.948435964Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:57:27.964394 containerd[1455]: time="2025-07-06T23:57:27.964352116Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\"" Jul 6 23:57:27.964992 containerd[1455]: time="2025-07-06T23:57:27.964931412Z" level=info msg="StartContainer for \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\"" Jul 6 23:57:28.088462 systemd[1]: Started cri-containerd-f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b.scope - libcontainer container f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b. 
Jul 6 23:57:28.111260 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266-rootfs.mount: Deactivated successfully. Jul 6 23:57:28.124768 containerd[1455]: time="2025-07-06T23:57:28.124721018Z" level=info msg="StartContainer for \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\" returns successfully" Jul 6 23:57:28.142274 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 6 23:57:28.143240 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 6 23:57:28.143725 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:57:28.154431 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 6 23:57:28.154743 systemd[1]: cri-containerd-f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b.scope: Deactivated successfully. Jul 6 23:57:28.258088 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b-rootfs.mount: Deactivated successfully. Jul 6 23:57:28.275900 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. 
Jul 6 23:57:28.352216 containerd[1455]: time="2025-07-06T23:57:28.352125181Z" level=info msg="shim disconnected" id=f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b namespace=k8s.io Jul 6 23:57:28.352216 containerd[1455]: time="2025-07-06T23:57:28.352193940Z" level=warning msg="cleaning up after shim disconnected" id=f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b namespace=k8s.io Jul 6 23:57:28.352216 containerd[1455]: time="2025-07-06T23:57:28.352208938Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:57:28.426696 kubelet[1754]: E0706 23:57:28.426655 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:28.893653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2802309947.mount: Deactivated successfully. Jul 6 23:57:29.058806 kubelet[1754]: E0706 23:57:29.058772 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:29.060455 containerd[1455]: time="2025-07-06T23:57:29.060411437Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:57:29.078838 containerd[1455]: time="2025-07-06T23:57:29.078722400Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\"" Jul 6 23:57:29.079478 containerd[1455]: time="2025-07-06T23:57:29.079433744Z" level=info msg="StartContainer for \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\"" Jul 6 23:57:29.177572 systemd[1]: Started cri-containerd-842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174.scope 
- libcontainer container 842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174. Jul 6 23:57:29.214743 containerd[1455]: time="2025-07-06T23:57:29.214706621Z" level=info msg="StartContainer for \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\" returns successfully" Jul 6 23:57:29.215595 systemd[1]: cri-containerd-842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174.scope: Deactivated successfully. Jul 6 23:57:29.239690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174-rootfs.mount: Deactivated successfully. Jul 6 23:57:29.427875 kubelet[1754]: E0706 23:57:29.427675 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:29.561768 containerd[1455]: time="2025-07-06T23:57:29.561698160Z" level=info msg="shim disconnected" id=842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174 namespace=k8s.io Jul 6 23:57:29.561768 containerd[1455]: time="2025-07-06T23:57:29.561756890Z" level=warning msg="cleaning up after shim disconnected" id=842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174 namespace=k8s.io Jul 6 23:57:29.561768 containerd[1455]: time="2025-07-06T23:57:29.561766989Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:57:29.651798 containerd[1455]: time="2025-07-06T23:57:29.651761468Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:29.652609 containerd[1455]: time="2025-07-06T23:57:29.652550899Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jul 6 23:57:29.653663 containerd[1455]: time="2025-07-06T23:57:29.653639019Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:29.655928 containerd[1455]: time="2025-07-06T23:57:29.655876434Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:29.656351 containerd[1455]: time="2025-07-06T23:57:29.656291061Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.559350521s" Jul 6 23:57:29.656387 containerd[1455]: time="2025-07-06T23:57:29.656352216Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jul 6 23:57:29.658813 containerd[1455]: time="2025-07-06T23:57:29.658768126Z" level=info msg="CreateContainer within sandbox \"0ab62ed6d1c126704418a4d8934c512b66a3f2ed4316d563c73f4593d9628a78\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 6 23:57:29.678029 containerd[1455]: time="2025-07-06T23:57:29.677932519Z" level=info msg="CreateContainer within sandbox \"0ab62ed6d1c126704418a4d8934c512b66a3f2ed4316d563c73f4593d9628a78\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"0796b1f12dc1ab67ad49f4be59365f099062f319840dd386df90dc9d852d515a\"" Jul 6 23:57:29.678491 containerd[1455]: time="2025-07-06T23:57:29.678461962Z" level=info msg="StartContainer for \"0796b1f12dc1ab67ad49f4be59365f099062f319840dd386df90dc9d852d515a\"" Jul 6 23:57:29.715568 systemd[1]: Started cri-containerd-0796b1f12dc1ab67ad49f4be59365f099062f319840dd386df90dc9d852d515a.scope - libcontainer container 
0796b1f12dc1ab67ad49f4be59365f099062f319840dd386df90dc9d852d515a. Jul 6 23:57:29.797379 containerd[1455]: time="2025-07-06T23:57:29.797331733Z" level=info msg="StartContainer for \"0796b1f12dc1ab67ad49f4be59365f099062f319840dd386df90dc9d852d515a\" returns successfully" Jul 6 23:57:30.063222 kubelet[1754]: E0706 23:57:30.063076 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:30.064755 kubelet[1754]: E0706 23:57:30.064733 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:30.064916 containerd[1455]: time="2025-07-06T23:57:30.064874666Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:57:30.081428 containerd[1455]: time="2025-07-06T23:57:30.081361078Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\"" Jul 6 23:57:30.081858 containerd[1455]: time="2025-07-06T23:57:30.081813396Z" level=info msg="StartContainer for \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\"" Jul 6 23:57:30.083117 kubelet[1754]: I0706 23:57:30.083050 1754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kjttg" podStartSLOduration=2.772602648 podStartE2EDuration="14.083029295s" podCreationTimestamp="2025-07-06 23:57:16 +0000 UTC" firstStartedPulling="2025-07-06 23:57:18.346685504 +0000 UTC m=+3.848663985" lastFinishedPulling="2025-07-06 23:57:29.657112141 +0000 UTC m=+15.159090632" 
observedRunningTime="2025-07-06 23:57:30.082574954 +0000 UTC m=+15.584553445" watchObservedRunningTime="2025-07-06 23:57:30.083029295 +0000 UTC m=+15.585007787" Jul 6 23:57:30.131829 systemd[1]: Started cri-containerd-968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524.scope - libcontainer container 968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524. Jul 6 23:57:30.159882 systemd[1]: cri-containerd-968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524.scope: Deactivated successfully. Jul 6 23:57:30.164694 containerd[1455]: time="2025-07-06T23:57:30.164069556Z" level=info msg="StartContainer for \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\" returns successfully" Jul 6 23:57:30.182476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524-rootfs.mount: Deactivated successfully. Jul 6 23:57:30.256765 containerd[1455]: time="2025-07-06T23:57:30.256694468Z" level=info msg="shim disconnected" id=968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524 namespace=k8s.io Jul 6 23:57:30.256765 containerd[1455]: time="2025-07-06T23:57:30.256748639Z" level=warning msg="cleaning up after shim disconnected" id=968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524 namespace=k8s.io Jul 6 23:57:30.256765 containerd[1455]: time="2025-07-06T23:57:30.256756995Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:57:30.428544 kubelet[1754]: E0706 23:57:30.428495 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:31.068696 kubelet[1754]: E0706 23:57:31.068661 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:31.068910 kubelet[1754]: E0706 23:57:31.068872 1754 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:31.070851 containerd[1455]: time="2025-07-06T23:57:31.070807449Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:57:31.088891 containerd[1455]: time="2025-07-06T23:57:31.088827086Z" level=info msg="CreateContainer within sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\"" Jul 6 23:57:31.089500 containerd[1455]: time="2025-07-06T23:57:31.089457999Z" level=info msg="StartContainer for \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\"" Jul 6 23:57:31.169466 systemd[1]: Started cri-containerd-6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2.scope - libcontainer container 6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2. 
Jul 6 23:57:31.199593 containerd[1455]: time="2025-07-06T23:57:31.199548559Z" level=info msg="StartContainer for \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\" returns successfully" Jul 6 23:57:31.299944 kubelet[1754]: I0706 23:57:31.299905 1754 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jul 6 23:57:31.428879 kubelet[1754]: E0706 23:57:31.428845 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:31.882359 kernel: Initializing XFRM netlink socket Jul 6 23:57:32.073003 kubelet[1754]: E0706 23:57:32.072954 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:32.086177 kubelet[1754]: I0706 23:57:32.086117 1754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-t8smz" podStartSLOduration=7.332338412 podStartE2EDuration="16.086099562s" podCreationTimestamp="2025-07-06 23:57:16 +0000 UTC" firstStartedPulling="2025-07-06 23:57:18.342995916 +0000 UTC m=+3.844974407" lastFinishedPulling="2025-07-06 23:57:27.096757056 +0000 UTC m=+12.598735557" observedRunningTime="2025-07-06 23:57:32.085768912 +0000 UTC m=+17.587747403" watchObservedRunningTime="2025-07-06 23:57:32.086099562 +0000 UTC m=+17.588078043" Jul 6 23:57:32.429947 kubelet[1754]: E0706 23:57:32.429896 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:32.750083 systemd[1]: Created slice kubepods-besteffort-pod8edb4d72_dc37_41a3_a604_42373dfce319.slice - libcontainer container kubepods-besteffort-pod8edb4d72_dc37_41a3_a604_42373dfce319.slice. 
Jul 6 23:57:32.774804 kubelet[1754]: I0706 23:57:32.774771 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvh5s\" (UniqueName: \"kubernetes.io/projected/8edb4d72-dc37-41a3-a604-42373dfce319-kube-api-access-jvh5s\") pod \"nginx-deployment-8587fbcb89-lw2zf\" (UID: \"8edb4d72-dc37-41a3-a604-42373dfce319\") " pod="default/nginx-deployment-8587fbcb89-lw2zf" Jul 6 23:57:33.053306 containerd[1455]: time="2025-07-06T23:57:33.053210394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-lw2zf,Uid:8edb4d72-dc37-41a3-a604-42373dfce319,Namespace:default,Attempt:0,}" Jul 6 23:57:33.074895 kubelet[1754]: E0706 23:57:33.074861 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:33.430650 kubelet[1754]: E0706 23:57:33.430616 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:33.579922 systemd-networkd[1383]: cilium_host: Link UP Jul 6 23:57:33.580076 systemd-networkd[1383]: cilium_net: Link UP Jul 6 23:57:33.580260 systemd-networkd[1383]: cilium_net: Gained carrier Jul 6 23:57:33.580456 systemd-networkd[1383]: cilium_host: Gained carrier Jul 6 23:57:33.683699 systemd-networkd[1383]: cilium_vxlan: Link UP Jul 6 23:57:33.683708 systemd-networkd[1383]: cilium_vxlan: Gained carrier Jul 6 23:57:33.728524 systemd-networkd[1383]: cilium_host: Gained IPv6LL Jul 6 23:57:33.914350 kernel: NET: Registered PF_ALG protocol family Jul 6 23:57:34.077026 kubelet[1754]: E0706 23:57:34.076917 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:34.431667 kubelet[1754]: E0706 23:57:34.431616 1754 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:34.496425 systemd-networkd[1383]: cilium_net: Gained IPv6LL Jul 6 23:57:34.531381 systemd-networkd[1383]: lxc_health: Link UP Jul 6 23:57:34.542472 systemd-networkd[1383]: lxc_health: Gained carrier Jul 6 23:57:34.747687 systemd-networkd[1383]: lxc4787d9ff5ed5: Link UP Jul 6 23:57:34.756365 kernel: eth0: renamed from tmp719de Jul 6 23:57:34.762376 systemd-networkd[1383]: lxc4787d9ff5ed5: Gained carrier Jul 6 23:57:35.048340 kubelet[1754]: E0706 23:57:35.048154 1754 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:35.264746 systemd-networkd[1383]: cilium_vxlan: Gained IPv6LL Jul 6 23:57:35.381201 kubelet[1754]: E0706 23:57:35.380808 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:35.432084 kubelet[1754]: E0706 23:57:35.432036 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:35.840522 systemd-networkd[1383]: lxc4787d9ff5ed5: Gained IPv6LL Jul 6 23:57:36.033498 systemd-networkd[1383]: lxc_health: Gained IPv6LL Jul 6 23:57:36.080248 kubelet[1754]: E0706 23:57:36.080211 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:36.433648 kubelet[1754]: E0706 23:57:36.433577 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:37.081811 kubelet[1754]: E0706 23:57:37.081769 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:57:37.434406 kubelet[1754]: E0706 
23:57:37.434357 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:38.381600 containerd[1455]: time="2025-07-06T23:57:38.381471438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:38.381600 containerd[1455]: time="2025-07-06T23:57:38.381550130Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:38.381600 containerd[1455]: time="2025-07-06T23:57:38.381563306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:38.382104 containerd[1455]: time="2025-07-06T23:57:38.381665162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:38.415451 systemd[1]: Started cri-containerd-719de8efa453982d5f3921c99662150aed840e3bdeab02b6c9614b3bfef14340.scope - libcontainer container 719de8efa453982d5f3921c99662150aed840e3bdeab02b6c9614b3bfef14340. 
Jul 6 23:57:38.426179 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:57:38.435167 kubelet[1754]: E0706 23:57:38.435128 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:38.452239 containerd[1455]: time="2025-07-06T23:57:38.452197465Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-lw2zf,Uid:8edb4d72-dc37-41a3-a604-42373dfce319,Namespace:default,Attempt:0,} returns sandbox id \"719de8efa453982d5f3921c99662150aed840e3bdeab02b6c9614b3bfef14340\"" Jul 6 23:57:38.456689 containerd[1455]: time="2025-07-06T23:57:38.456654637Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jul 6 23:57:39.435687 kubelet[1754]: E0706 23:57:39.435617 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:40.435783 kubelet[1754]: E0706 23:57:40.435736 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:41.436783 kubelet[1754]: E0706 23:57:41.436725 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:41.530357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1922216895.mount: Deactivated successfully. 
Jul 6 23:57:42.437674 kubelet[1754]: E0706 23:57:42.437603 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:42.999980 containerd[1455]: time="2025-07-06T23:57:42.999923591Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:43.000608 containerd[1455]: time="2025-07-06T23:57:43.000550089Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73313230" Jul 6 23:57:43.001653 containerd[1455]: time="2025-07-06T23:57:43.001620836Z" level=info msg="ImageCreate event name:\"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:43.004188 containerd[1455]: time="2025-07-06T23:57:43.004152221Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:9a18b8b1845bf88a3388cde4ec626461965a717ac641198120979e75438b9693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:43.005089 containerd[1455]: time="2025-07-06T23:57:43.005037842Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:9a18b8b1845bf88a3388cde4ec626461965a717ac641198120979e75438b9693\", size \"73313108\" in 4.548345674s" Jul 6 23:57:43.005148 containerd[1455]: time="2025-07-06T23:57:43.005093819Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\"" Jul 6 23:57:43.007105 containerd[1455]: time="2025-07-06T23:57:43.007074412Z" level=info msg="CreateContainer within sandbox \"719de8efa453982d5f3921c99662150aed840e3bdeab02b6c9614b3bfef14340\" for container 
&ContainerMetadata{Name:nginx,Attempt:0,}" Jul 6 23:57:43.020355 containerd[1455]: time="2025-07-06T23:57:43.020296174Z" level=info msg="CreateContainer within sandbox \"719de8efa453982d5f3921c99662150aed840e3bdeab02b6c9614b3bfef14340\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"aae1cd35255a3ed9ad3132acf143beb166c048ec3efe04165d53652dae516f39\"" Jul 6 23:57:43.020874 containerd[1455]: time="2025-07-06T23:57:43.020842898Z" level=info msg="StartContainer for \"aae1cd35255a3ed9ad3132acf143beb166c048ec3efe04165d53652dae516f39\"" Jul 6 23:57:43.044215 systemd[1]: run-containerd-runc-k8s.io-aae1cd35255a3ed9ad3132acf143beb166c048ec3efe04165d53652dae516f39-runc.QESMrk.mount: Deactivated successfully. Jul 6 23:57:43.053483 systemd[1]: Started cri-containerd-aae1cd35255a3ed9ad3132acf143beb166c048ec3efe04165d53652dae516f39.scope - libcontainer container aae1cd35255a3ed9ad3132acf143beb166c048ec3efe04165d53652dae516f39. Jul 6 23:57:43.078963 containerd[1455]: time="2025-07-06T23:57:43.078917603Z" level=info msg="StartContainer for \"aae1cd35255a3ed9ad3132acf143beb166c048ec3efe04165d53652dae516f39\" returns successfully" Jul 6 23:57:43.100899 kubelet[1754]: I0706 23:57:43.100849 1754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-lw2zf" podStartSLOduration=6.5512956639999995 podStartE2EDuration="11.100833078s" podCreationTimestamp="2025-07-06 23:57:32 +0000 UTC" firstStartedPulling="2025-07-06 23:57:38.456306227 +0000 UTC m=+23.958284718" lastFinishedPulling="2025-07-06 23:57:43.005843641 +0000 UTC m=+28.507822132" observedRunningTime="2025-07-06 23:57:43.100528367 +0000 UTC m=+28.602506858" watchObservedRunningTime="2025-07-06 23:57:43.100833078 +0000 UTC m=+28.602811569" Jul 6 23:57:43.437983 kubelet[1754]: E0706 23:57:43.437931 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:44.438657 kubelet[1754]: E0706 
23:57:44.438581 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:45.056610 systemd[1]: Created slice kubepods-besteffort-pod9c9956f0_bbba_4f28_90f0_17a0549c9e6d.slice - libcontainer container kubepods-besteffort-pod9c9956f0_bbba_4f28_90f0_17a0549c9e6d.slice. Jul 6 23:57:45.198365 kubelet[1754]: I0706 23:57:45.198283 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jp8x2\" (UniqueName: \"kubernetes.io/projected/9c9956f0-bbba-4f28-90f0-17a0549c9e6d-kube-api-access-jp8x2\") pod \"nfs-server-provisioner-0\" (UID: \"9c9956f0-bbba-4f28-90f0-17a0549c9e6d\") " pod="default/nfs-server-provisioner-0" Jul 6 23:57:45.198365 kubelet[1754]: I0706 23:57:45.198349 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9c9956f0-bbba-4f28-90f0-17a0549c9e6d-data\") pod \"nfs-server-provisioner-0\" (UID: \"9c9956f0-bbba-4f28-90f0-17a0549c9e6d\") " pod="default/nfs-server-provisioner-0" Jul 6 23:57:45.360121 containerd[1455]: time="2025-07-06T23:57:45.359976548Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9c9956f0-bbba-4f28-90f0-17a0549c9e6d,Namespace:default,Attempt:0,}" Jul 6 23:57:45.389425 systemd-networkd[1383]: lxc3dcbba50564e: Link UP Jul 6 23:57:45.400360 kernel: eth0: renamed from tmp74d28 Jul 6 23:57:45.408394 systemd-networkd[1383]: lxc3dcbba50564e: Gained carrier Jul 6 23:57:45.439796 kubelet[1754]: E0706 23:57:45.439738 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:45.646887 containerd[1455]: time="2025-07-06T23:57:45.646655306Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:57:45.646887 containerd[1455]: time="2025-07-06T23:57:45.646729337Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:57:45.646887 containerd[1455]: time="2025-07-06T23:57:45.646743595Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:45.646887 containerd[1455]: time="2025-07-06T23:57:45.646814019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:57:45.673473 systemd[1]: Started cri-containerd-74d283762e8859b29f176eead3b77f880014a6f04327ef630a1e484d80f04dcc.scope - libcontainer container 74d283762e8859b29f176eead3b77f880014a6f04327ef630a1e484d80f04dcc. Jul 6 23:57:45.686023 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 6 23:57:45.714466 containerd[1455]: time="2025-07-06T23:57:45.714418509Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9c9956f0-bbba-4f28-90f0-17a0549c9e6d,Namespace:default,Attempt:0,} returns sandbox id \"74d283762e8859b29f176eead3b77f880014a6f04327ef630a1e484d80f04dcc\"" Jul 6 23:57:45.715913 containerd[1455]: time="2025-07-06T23:57:45.715884302Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jul 6 23:57:46.440464 kubelet[1754]: E0706 23:57:46.440415 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:47.232494 systemd-networkd[1383]: lxc3dcbba50564e: Gained IPv6LL Jul 6 23:57:47.440769 kubelet[1754]: E0706 23:57:47.440693 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:47.825786 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2929049088.mount: Deactivated successfully. Jul 6 23:57:48.441780 kubelet[1754]: E0706 23:57:48.441705 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:49.442199 kubelet[1754]: E0706 23:57:49.442135 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:50.292531 update_engine[1443]: I20250706 23:57:50.292440 1443 update_attempter.cc:509] Updating boot flags... Jul 6 23:57:50.443251 kubelet[1754]: E0706 23:57:50.443146 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:50.478357 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3071) Jul 6 23:57:50.552372 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3070) Jul 6 23:57:50.573339 containerd[1455]: time="2025-07-06T23:57:50.570844903Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:50.607368 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 34 scanned by (udev-worker) (3070) Jul 6 23:57:50.661112 containerd[1455]: time="2025-07-06T23:57:50.661028799Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jul 6 23:57:50.662747 containerd[1455]: time="2025-07-06T23:57:50.662683218Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:50.665268 containerd[1455]: time="2025-07-06T23:57:50.665216594Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:57:50.666500 containerd[1455]: time="2025-07-06T23:57:50.666467788Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.950546085s" Jul 6 23:57:50.666550 containerd[1455]: time="2025-07-06T23:57:50.666503756Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jul 6 23:57:50.668750 containerd[1455]: time="2025-07-06T23:57:50.668720802Z" level=info msg="CreateContainer within sandbox \"74d283762e8859b29f176eead3b77f880014a6f04327ef630a1e484d80f04dcc\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jul 6 23:57:50.684894 containerd[1455]: time="2025-07-06T23:57:50.684854591Z" level=info msg="CreateContainer within sandbox \"74d283762e8859b29f176eead3b77f880014a6f04327ef630a1e484d80f04dcc\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"b369c749997203718e842f058fa841e0abb3de3d8482047bbedf2f198c47bae0\"" Jul 6 23:57:50.685372 containerd[1455]: time="2025-07-06T23:57:50.685347476Z" level=info msg="StartContainer for \"b369c749997203718e842f058fa841e0abb3de3d8482047bbedf2f198c47bae0\"" Jul 6 23:57:50.754452 systemd[1]: Started cri-containerd-b369c749997203718e842f058fa841e0abb3de3d8482047bbedf2f198c47bae0.scope - libcontainer container b369c749997203718e842f058fa841e0abb3de3d8482047bbedf2f198c47bae0. 
Jul 6 23:57:50.781964 containerd[1455]: time="2025-07-06T23:57:50.781917638Z" level=info msg="StartContainer for \"b369c749997203718e842f058fa841e0abb3de3d8482047bbedf2f198c47bae0\" returns successfully" Jul 6 23:57:51.121172 kubelet[1754]: I0706 23:57:51.121092 1754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.169316801 podStartE2EDuration="6.121074688s" podCreationTimestamp="2025-07-06 23:57:45 +0000 UTC" firstStartedPulling="2025-07-06 23:57:45.715588689 +0000 UTC m=+31.217567180" lastFinishedPulling="2025-07-06 23:57:50.667346586 +0000 UTC m=+36.169325067" observedRunningTime="2025-07-06 23:57:51.120607002 +0000 UTC m=+36.622585493" watchObservedRunningTime="2025-07-06 23:57:51.121074688 +0000 UTC m=+36.623053179" Jul 6 23:57:51.444060 kubelet[1754]: E0706 23:57:51.444018 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:52.445101 kubelet[1754]: E0706 23:57:52.445041 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:53.446231 kubelet[1754]: E0706 23:57:53.446121 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:54.446761 kubelet[1754]: E0706 23:57:54.446687 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:55.048000 kubelet[1754]: E0706 23:57:55.047929 1754 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:55.447095 kubelet[1754]: E0706 23:57:55.447020 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:56.447994 kubelet[1754]: E0706 23:57:56.447948 1754 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:57.448721 kubelet[1754]: E0706 23:57:57.448662 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:58.449656 kubelet[1754]: E0706 23:57:58.449608 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:57:59.450274 kubelet[1754]: E0706 23:57:59.450213 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:00.450892 kubelet[1754]: E0706 23:58:00.450823 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:00.781766 systemd[1]: Created slice kubepods-besteffort-pod219fddea_d4d4_497f_8218_4a38ce2982dd.slice - libcontainer container kubepods-besteffort-pod219fddea_d4d4_497f_8218_4a38ce2982dd.slice. Jul 6 23:58:00.976070 kubelet[1754]: I0706 23:58:00.975992 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-9eccc698-adbd-4baa-bb65-559b655c40ac\" (UniqueName: \"kubernetes.io/nfs/219fddea-d4d4-497f-8218-4a38ce2982dd-pvc-9eccc698-adbd-4baa-bb65-559b655c40ac\") pod \"test-pod-1\" (UID: \"219fddea-d4d4-497f-8218-4a38ce2982dd\") " pod="default/test-pod-1" Jul 6 23:58:00.976070 kubelet[1754]: I0706 23:58:00.976040 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rbsfn\" (UniqueName: \"kubernetes.io/projected/219fddea-d4d4-497f-8218-4a38ce2982dd-kube-api-access-rbsfn\") pod \"test-pod-1\" (UID: \"219fddea-d4d4-497f-8218-4a38ce2982dd\") " pod="default/test-pod-1" Jul 6 23:58:01.102383 kernel: FS-Cache: Loaded Jul 6 23:58:01.170506 kernel: RPC: Registered named UNIX socket transport module. Jul 6 23:58:01.170624 kernel: RPC: Registered udp transport module. 
Jul 6 23:58:01.170645 kernel: RPC: Registered tcp transport module.
Jul 6 23:58:01.170663 kernel: RPC: Registered tcp-with-tls transport module.
Jul 6 23:58:01.171734 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jul 6 23:58:01.439384 kernel: NFS: Registering the id_resolver key type
Jul 6 23:58:01.439570 kernel: Key type id_resolver registered
Jul 6 23:58:01.439598 kernel: Key type id_legacy registered
Jul 6 23:58:01.451779 kubelet[1754]: E0706 23:58:01.451704 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:01.466297 nfsidmap[3164]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jul 6 23:58:01.471194 nfsidmap[3167]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain'
Jul 6 23:58:01.684746 containerd[1455]: time="2025-07-06T23:58:01.684695921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:219fddea-d4d4-497f-8218-4a38ce2982dd,Namespace:default,Attempt:0,}"
Jul 6 23:58:01.713624 systemd-networkd[1383]: lxcd97b4d8457e2: Link UP
Jul 6 23:58:01.728361 kernel: eth0: renamed from tmp49c18
Jul 6 23:58:01.734367 systemd-networkd[1383]: lxcd97b4d8457e2: Gained carrier
Jul 6 23:58:01.941998 containerd[1455]: time="2025-07-06T23:58:01.941870569Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 6 23:58:01.941998 containerd[1455]: time="2025-07-06T23:58:01.941943136Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 6 23:58:01.941998 containerd[1455]: time="2025-07-06T23:58:01.941956732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:58:01.942232 containerd[1455]: time="2025-07-06T23:58:01.942056269Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 6 23:58:01.960477 systemd[1]: Started cri-containerd-49c18cdbf4f8d473bbb5b1cb2ac59ae52b14940e67522eb0a1f9899b6c6d16e6.scope - libcontainer container 49c18cdbf4f8d473bbb5b1cb2ac59ae52b14940e67522eb0a1f9899b6c6d16e6.
Jul 6 23:58:01.971129 systemd-resolved[1324]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 6 23:58:01.994750 containerd[1455]: time="2025-07-06T23:58:01.994693028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:219fddea-d4d4-497f-8218-4a38ce2982dd,Namespace:default,Attempt:0,} returns sandbox id \"49c18cdbf4f8d473bbb5b1cb2ac59ae52b14940e67522eb0a1f9899b6c6d16e6\""
Jul 6 23:58:01.996123 containerd[1455]: time="2025-07-06T23:58:01.996093409Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Jul 6 23:58:02.384420 containerd[1455]: time="2025-07-06T23:58:02.384296384Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 6 23:58:02.385712 containerd[1455]: time="2025-07-06T23:58:02.385663451Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Jul 6 23:58:02.388341 containerd[1455]: time="2025-07-06T23:58:02.388296295Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:9a18b8b1845bf88a3388cde4ec626461965a717ac641198120979e75438b9693\", size \"73313108\" in 392.168071ms"
Jul 6 23:58:02.388392 containerd[1455]: time="2025-07-06T23:58:02.388345717Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:601c94998c5615a5f36a1babb9bcc2b1d9f112c02c19d68701b29f3fd6b2feb8\""
Jul 6 23:58:02.390048 containerd[1455]: time="2025-07-06T23:58:02.390021648Z" level=info msg="CreateContainer within sandbox \"49c18cdbf4f8d473bbb5b1cb2ac59ae52b14940e67522eb0a1f9899b6c6d16e6\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Jul 6 23:58:02.403099 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount242489034.mount: Deactivated successfully.
Jul 6 23:58:02.406182 containerd[1455]: time="2025-07-06T23:58:02.406142950Z" level=info msg="CreateContainer within sandbox \"49c18cdbf4f8d473bbb5b1cb2ac59ae52b14940e67522eb0a1f9899b6c6d16e6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"227709231a46e9d0f8f7c928a29d9570725d4756855fab4eca6cca5b00ff10c1\""
Jul 6 23:58:02.406629 containerd[1455]: time="2025-07-06T23:58:02.406603589Z" level=info msg="StartContainer for \"227709231a46e9d0f8f7c928a29d9570725d4756855fab4eca6cca5b00ff10c1\""
Jul 6 23:58:02.438450 systemd[1]: Started cri-containerd-227709231a46e9d0f8f7c928a29d9570725d4756855fab4eca6cca5b00ff10c1.scope - libcontainer container 227709231a46e9d0f8f7c928a29d9570725d4756855fab4eca6cca5b00ff10c1.
Jul 6 23:58:02.452173 kubelet[1754]: E0706 23:58:02.452134 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:02.466081 containerd[1455]: time="2025-07-06T23:58:02.466040841Z" level=info msg="StartContainer for \"227709231a46e9d0f8f7c928a29d9570725d4756855fab4eca6cca5b00ff10c1\" returns successfully"
Jul 6 23:58:02.912591 systemd-networkd[1383]: lxcd97b4d8457e2: Gained IPv6LL
Jul 6 23:58:03.145149 kubelet[1754]: I0706 23:58:03.145079 1754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.752010474 podStartE2EDuration="18.145063692s" podCreationTimestamp="2025-07-06 23:57:45 +0000 UTC" firstStartedPulling="2025-07-06 23:58:01.995850221 +0000 UTC m=+47.497828712" lastFinishedPulling="2025-07-06 23:58:02.388903439 +0000 UTC m=+47.890881930" observedRunningTime="2025-07-06 23:58:03.144984573 +0000 UTC m=+48.646963064" watchObservedRunningTime="2025-07-06 23:58:03.145063692 +0000 UTC m=+48.647042183"
Jul 6 23:58:03.452774 kubelet[1754]: E0706 23:58:03.452681 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:04.453119 kubelet[1754]: E0706 23:58:04.453053 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:05.453891 kubelet[1754]: E0706 23:58:05.453841 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:06.454735 kubelet[1754]: E0706 23:58:06.454642 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:07.455337 kubelet[1754]: E0706 23:58:07.455256 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:07.638985 containerd[1455]: time="2025-07-06T23:58:07.638925120Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 6 23:58:07.646177 containerd[1455]: time="2025-07-06T23:58:07.646133689Z" level=info msg="StopContainer for \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\" with timeout 2 (s)"
Jul 6 23:58:07.646383 containerd[1455]: time="2025-07-06T23:58:07.646359454Z" level=info msg="Stop container \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\" with signal terminated"
Jul 6 23:58:07.653005 systemd-networkd[1383]: lxc_health: Link DOWN
Jul 6 23:58:07.653017 systemd-networkd[1383]: lxc_health: Lost carrier
Jul 6 23:58:07.680780 systemd[1]: cri-containerd-6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2.scope: Deactivated successfully.
Jul 6 23:58:07.681136 systemd[1]: cri-containerd-6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2.scope: Consumed 7.737s CPU time.
Jul 6 23:58:07.699610 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2-rootfs.mount: Deactivated successfully.
Jul 6 23:58:07.709166 containerd[1455]: time="2025-07-06T23:58:07.709025978Z" level=info msg="shim disconnected" id=6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2 namespace=k8s.io
Jul 6 23:58:07.709166 containerd[1455]: time="2025-07-06T23:58:07.709087443Z" level=warning msg="cleaning up after shim disconnected" id=6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2 namespace=k8s.io
Jul 6 23:58:07.709166 containerd[1455]: time="2025-07-06T23:58:07.709097764Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:58:07.725812 containerd[1455]: time="2025-07-06T23:58:07.725760459Z" level=info msg="StopContainer for \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\" returns successfully"
Jul 6 23:58:07.726492 containerd[1455]: time="2025-07-06T23:58:07.726466319Z" level=info msg="StopPodSandbox for \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\""
Jul 6 23:58:07.726562 containerd[1455]: time="2025-07-06T23:58:07.726508939Z" level=info msg="Container to stop \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:58:07.726562 containerd[1455]: time="2025-07-06T23:58:07.726524267Z" level=info msg="Container to stop \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:58:07.726562 containerd[1455]: time="2025-07-06T23:58:07.726537692Z" level=info msg="Container to stop \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:58:07.726562 containerd[1455]: time="2025-07-06T23:58:07.726549164Z" level=info msg="Container to stop \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:58:07.726699 containerd[1455]: time="2025-07-06T23:58:07.726560766Z" level=info msg="Container to stop \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 6 23:58:07.728572 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae-shm.mount: Deactivated successfully.
Jul 6 23:58:07.733587 systemd[1]: cri-containerd-35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae.scope: Deactivated successfully.
Jul 6 23:58:07.752271 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae-rootfs.mount: Deactivated successfully.
Jul 6 23:58:07.755714 containerd[1455]: time="2025-07-06T23:58:07.755639344Z" level=info msg="shim disconnected" id=35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae namespace=k8s.io
Jul 6 23:58:07.755714 containerd[1455]: time="2025-07-06T23:58:07.755709176Z" level=warning msg="cleaning up after shim disconnected" id=35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae namespace=k8s.io
Jul 6 23:58:07.755866 containerd[1455]: time="2025-07-06T23:58:07.755720968Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jul 6 23:58:07.769918 containerd[1455]: time="2025-07-06T23:58:07.769857258Z" level=info msg="TearDown network for sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" successfully"
Jul 6 23:58:07.769918 containerd[1455]: time="2025-07-06T23:58:07.769906180Z" level=info msg="StopPodSandbox for \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" returns successfully"
Jul 6 23:58:07.914425 kubelet[1754]: I0706 23:58:07.914356 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-hostproc\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.914425 kubelet[1754]: I0706 23:58:07.914400 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-cgroup\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.914425 kubelet[1754]: I0706 23:58:07.914420 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-host-proc-sys-kernel\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.914425 kubelet[1754]: I0706 23:58:07.914437 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-run\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.914822 kubelet[1754]: I0706 23:58:07.914451 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-lib-modules\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.914822 kubelet[1754]: I0706 23:58:07.914462 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.914822 kubelet[1754]: I0706 23:58:07.914489 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.914822 kubelet[1754]: I0706 23:58:07.914470 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-config-path\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.914822 kubelet[1754]: I0706 23:58:07.914516 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.915004 kubelet[1754]: I0706 23:58:07.914459 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-hostproc" (OuterVolumeSpecName: "hostproc") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.915004 kubelet[1754]: I0706 23:58:07.914539 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.915004 kubelet[1754]: I0706 23:58:07.914555 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b29535f-cbd4-4196-a137-0c48216fd9b6-clustermesh-secrets\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.915004 kubelet[1754]: I0706 23:58:07.914589 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-etc-cni-netd\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.915004 kubelet[1754]: I0706 23:58:07.914611 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kkrk\" (UniqueName: \"kubernetes.io/projected/0b29535f-cbd4-4196-a137-0c48216fd9b6-kube-api-access-2kkrk\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.915004 kubelet[1754]: I0706 23:58:07.914634 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cni-path\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.915192 kubelet[1754]: I0706 23:58:07.914653 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b29535f-cbd4-4196-a137-0c48216fd9b6-hubble-tls\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.915192 kubelet[1754]: I0706 23:58:07.914675 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-xtables-lock\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.915192 kubelet[1754]: I0706 23:58:07.914694 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-host-proc-sys-net\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.915192 kubelet[1754]: I0706 23:58:07.914711 1754 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-bpf-maps\") pod \"0b29535f-cbd4-4196-a137-0c48216fd9b6\" (UID: \"0b29535f-cbd4-4196-a137-0c48216fd9b6\") "
Jul 6 23:58:07.915192 kubelet[1754]: I0706 23:58:07.914746 1754 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-run\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:07.915192 kubelet[1754]: I0706 23:58:07.914758 1754 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-lib-modules\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:07.915192 kubelet[1754]: I0706 23:58:07.914769 1754 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-hostproc\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:07.915463 kubelet[1754]: I0706 23:58:07.914781 1754 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-cgroup\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:07.915463 kubelet[1754]: I0706 23:58:07.914794 1754 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-host-proc-sys-kernel\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:07.915463 kubelet[1754]: I0706 23:58:07.914818 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.915463 kubelet[1754]: I0706 23:58:07.914841 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.917923 kubelet[1754]: I0706 23:58:07.917126 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.917923 kubelet[1754]: I0706 23:58:07.917164 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.917923 kubelet[1754]: I0706 23:58:07.917270 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cni-path" (OuterVolumeSpecName: "cni-path") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jul 6 23:58:07.918124 kubelet[1754]: I0706 23:58:07.918099 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jul 6 23:58:07.918349 kubelet[1754]: I0706 23:58:07.918286 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b29535f-cbd4-4196-a137-0c48216fd9b6-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jul 6 23:58:07.919915 systemd[1]: var-lib-kubelet-pods-0b29535f\x2dcbd4\x2d4196\x2da137\x2d0c48216fd9b6-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jul 6 23:58:07.920046 systemd[1]: var-lib-kubelet-pods-0b29535f\x2dcbd4\x2d4196\x2da137\x2d0c48216fd9b6-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jul 6 23:58:07.920120 kubelet[1754]: I0706 23:58:07.919986 1754 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b29535f-cbd4-4196-a137-0c48216fd9b6-kube-api-access-2kkrk" (OuterVolumeSpecName: "kube-api-access-2kkrk") pod "0b29535f-cbd4-4196-a137-0c48216fd9b6" (UID: "0b29535f-cbd4-4196-a137-0c48216fd9b6"). InnerVolumeSpecName "kube-api-access-2kkrk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jul 6 23:58:07.929118 systemd[1]: Removed slice kubepods-burstable-pod0b29535f_cbd4_4196_a137_0c48216fd9b6.slice - libcontainer container kubepods-burstable-pod0b29535f_cbd4_4196_a137_0c48216fd9b6.slice.
Jul 6 23:58:07.929232 systemd[1]: kubepods-burstable-pod0b29535f_cbd4_4196_a137_0c48216fd9b6.slice: Consumed 7.862s CPU time.
Jul 6 23:58:08.015112 kubelet[1754]: I0706 23:58:08.014975 1754 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0b29535f-cbd4-4196-a137-0c48216fd9b6-clustermesh-secrets\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:08.015112 kubelet[1754]: I0706 23:58:08.015008 1754 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-etc-cni-netd\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:08.015112 kubelet[1754]: I0706 23:58:08.015018 1754 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kkrk\" (UniqueName: \"kubernetes.io/projected/0b29535f-cbd4-4196-a137-0c48216fd9b6-kube-api-access-2kkrk\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:08.015112 kubelet[1754]: I0706 23:58:08.015027 1754 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-cni-path\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:08.015112 kubelet[1754]: I0706 23:58:08.015035 1754 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0b29535f-cbd4-4196-a137-0c48216fd9b6-hubble-tls\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:08.015112 kubelet[1754]: I0706 23:58:08.015043 1754 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-xtables-lock\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:08.015112 kubelet[1754]: I0706 23:58:08.015050 1754 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-host-proc-sys-net\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:08.015112 kubelet[1754]: I0706 23:58:08.015058 1754 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0b29535f-cbd4-4196-a137-0c48216fd9b6-bpf-maps\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:08.015424 kubelet[1754]: I0706 23:58:08.015066 1754 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0b29535f-cbd4-4196-a137-0c48216fd9b6-cilium-config-path\") on node \"10.0.0.116\" DevicePath \"\""
Jul 6 23:58:08.145904 kubelet[1754]: I0706 23:58:08.145872 1754 scope.go:117] "RemoveContainer" containerID="6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2"
Jul 6 23:58:08.146984 containerd[1455]: time="2025-07-06T23:58:08.146946175Z" level=info msg="RemoveContainer for \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\""
Jul 6 23:58:08.151160 containerd[1455]: time="2025-07-06T23:58:08.151122956Z" level=info msg="RemoveContainer for \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\" returns successfully"
Jul 6 23:58:08.151407 kubelet[1754]: I0706 23:58:08.151381 1754 scope.go:117] "RemoveContainer" containerID="968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524"
Jul 6 23:58:08.152287 containerd[1455]: time="2025-07-06T23:58:08.152266468Z" level=info msg="RemoveContainer for \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\""
Jul 6 23:58:08.155418 containerd[1455]: time="2025-07-06T23:58:08.155391920Z" level=info msg="RemoveContainer for \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\" returns successfully"
Jul 6 23:58:08.155583 kubelet[1754]: I0706 23:58:08.155520 1754 scope.go:117] "RemoveContainer" containerID="842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174"
Jul 6 23:58:08.156348 containerd[1455]: time="2025-07-06T23:58:08.156308084Z" level=info msg="RemoveContainer for \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\""
Jul 6 23:58:08.159750 containerd[1455]: time="2025-07-06T23:58:08.159718012Z" level=info msg="RemoveContainer for \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\" returns successfully"
Jul 6 23:58:08.159881 kubelet[1754]: I0706 23:58:08.159852 1754 scope.go:117] "RemoveContainer" containerID="f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b"
Jul 6 23:58:08.160694 containerd[1455]: time="2025-07-06T23:58:08.160671095Z" level=info msg="RemoveContainer for \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\""
Jul 6 23:58:08.163437 containerd[1455]: time="2025-07-06T23:58:08.163408907Z" level=info msg="RemoveContainer for \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\" returns successfully"
Jul 6 23:58:08.163560 kubelet[1754]: I0706 23:58:08.163539 1754 scope.go:117] "RemoveContainer" containerID="95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266"
Jul 6 23:58:08.164423 containerd[1455]: time="2025-07-06T23:58:08.164398870Z" level=info msg="RemoveContainer for \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\""
Jul 6 23:58:08.167001 containerd[1455]: time="2025-07-06T23:58:08.166971351Z" level=info msg="RemoveContainer for \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\" returns successfully"
Jul 6 23:58:08.167120 kubelet[1754]: I0706 23:58:08.167103 1754 scope.go:117] "RemoveContainer" containerID="6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2"
Jul 6 23:58:08.167369 containerd[1455]: time="2025-07-06T23:58:08.167295621Z" level=error msg="ContainerStatus for \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\": not found"
Jul 6 23:58:08.167473 kubelet[1754]: E0706 23:58:08.167454 1754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\": not found" containerID="6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2"
Jul 6 23:58:08.167555 kubelet[1754]: I0706 23:58:08.167484 1754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2"} err="failed to get container status \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\": rpc error: code = NotFound desc = an error occurred when try to find container \"6d8275b18d6f401c15b0a0f1c1f671c85a1fa8131c5eba2861801d2458bfc7f2\": not found"
Jul 6 23:58:08.167610 kubelet[1754]: I0706 23:58:08.167555 1754 scope.go:117] "RemoveContainer" containerID="968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524"
Jul 6 23:58:08.167729 containerd[1455]: time="2025-07-06T23:58:08.167700694Z" level=error msg="ContainerStatus for \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\": not found"
Jul 6 23:58:08.167819 kubelet[1754]: E0706 23:58:08.167801 1754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\": not found" containerID="968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524"
Jul 6 23:58:08.167873 kubelet[1754]: I0706 23:58:08.167821 1754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524"} err="failed to get container status \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\": rpc error: code = NotFound desc = an error occurred when try to find container \"968f68ddc52dbeadefbdce99848976e8050c84c71651fee5b950adcbd4a02524\": not found"
Jul 6 23:58:08.167873 kubelet[1754]: I0706 23:58:08.167837 1754 scope.go:117] "RemoveContainer" containerID="842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174"
Jul 6 23:58:08.168026 containerd[1455]: time="2025-07-06T23:58:08.167975000Z" level=error msg="ContainerStatus for \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\": not found"
Jul 6 23:58:08.168113 kubelet[1754]: E0706 23:58:08.168091 1754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\": not found" containerID="842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174"
Jul 6 23:58:08.168151 kubelet[1754]: I0706 23:58:08.168116 1754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174"} err="failed to get container status \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\": rpc error: code = NotFound desc = an error occurred when try to find container \"842a6842ef40ba919dc0c3056a4bfc6cf29ce39745f8e20c395579b2a183a174\": not found"
Jul 6 23:58:08.168151 kubelet[1754]: I0706 23:58:08.168135 1754 scope.go:117] "RemoveContainer" containerID="f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b"
Jul 6 23:58:08.168292 containerd[1455]: time="2025-07-06T23:58:08.168268823Z" level=error msg="ContainerStatus for \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\": not found"
Jul 6 23:58:08.168397 kubelet[1754]: E0706 23:58:08.168373 1754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\": not found" containerID="f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b"
Jul 6 23:58:08.168444 kubelet[1754]: I0706 23:58:08.168395 1754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b"} err="failed to get container status \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\": rpc error: code = NotFound desc = an error occurred when try to find container \"f4f1742946baa7cc65298d9cca15d6121b50ab24381dcda357116cb0c447029b\": not found"
Jul 6 23:58:08.168444 kubelet[1754]: I0706 23:58:08.168412 1754 scope.go:117] "RemoveContainer" containerID="95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266"
Jul 6 23:58:08.168594 containerd[1455]: time="2025-07-06T23:58:08.168563167Z" level=error msg="ContainerStatus for \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\": not found"
Jul 6 23:58:08.168691 kubelet[1754]: E0706 23:58:08.168670 1754 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\": not found" containerID="95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266"
Jul 6 23:58:08.168727 kubelet[1754]: I0706 23:58:08.168692 1754 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266"} err="failed to get container status \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\": rpc error: code = NotFound desc = an error occurred when try to find container \"95e46801d32881f554f5afcbd773af32bf0883f823ee512d275b8b760cb61266\": not found"
Jul 6 23:58:08.456355 kubelet[1754]: E0706 23:58:08.456267 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:08.623935 systemd[1]: var-lib-kubelet-pods-0b29535f\x2dcbd4\x2d4196\x2da137\x2d0c48216fd9b6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2kkrk.mount: Deactivated successfully.
Jul 6 23:58:09.456968 kubelet[1754]: E0706 23:58:09.456905 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:09.922384 kubelet[1754]: I0706 23:58:09.922336 1754 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b29535f-cbd4-4196-a137-0c48216fd9b6" path="/var/lib/kubelet/pods/0b29535f-cbd4-4196-a137-0c48216fd9b6/volumes"
Jul 6 23:58:10.457789 kubelet[1754]: E0706 23:58:10.457720 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Jul 6 23:58:10.565215 kubelet[1754]: E0706 23:58:10.565157 1754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b29535f-cbd4-4196-a137-0c48216fd9b6" containerName="clean-cilium-state"
Jul 6 23:58:10.565215 kubelet[1754]: E0706 23:58:10.565193 1754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b29535f-cbd4-4196-a137-0c48216fd9b6" containerName="cilium-agent"
Jul 6 23:58:10.565215 kubelet[1754]: E0706 23:58:10.565202 1754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b29535f-cbd4-4196-a137-0c48216fd9b6" containerName="mount-cgroup"
Jul 6 23:58:10.565215 kubelet[1754]: E0706 23:58:10.565210 1754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b29535f-cbd4-4196-a137-0c48216fd9b6" containerName="apply-sysctl-overwrites"
Jul 6 23:58:10.565215 kubelet[1754]: E0706 23:58:10.565219 1754 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0b29535f-cbd4-4196-a137-0c48216fd9b6" containerName="mount-bpf-fs"
Jul 6 23:58:10.565507 kubelet[1754]: I0706 23:58:10.565253 1754 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b29535f-cbd4-4196-a137-0c48216fd9b6" containerName="cilium-agent"
Jul 6 23:58:10.571939 systemd[1]: Created slice kubepods-besteffort-pod7823f882_c493_42da_b8e1_57f989d125c4.slice - libcontainer container kubepods-besteffort-pod7823f882_c493_42da_b8e1_57f989d125c4.slice.
Jul 6 23:58:10.583006 systemd[1]: Created slice kubepods-burstable-pode3dd78a3_c6b0_4ff3_8a07_010d35dee9b0.slice - libcontainer container kubepods-burstable-pode3dd78a3_c6b0_4ff3_8a07_010d35dee9b0.slice.
Jul 6 23:58:10.733435 kubelet[1754]: I0706 23:58:10.733214 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-hostproc\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s"
Jul 6 23:58:10.733435 kubelet[1754]: I0706 23:58:10.733277 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-cni-path\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s"
Jul 6 23:58:10.733435 kubelet[1754]: I0706 23:58:10.733307 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-host-proc-sys-kernel\") pod 
\"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733435 kubelet[1754]: I0706 23:58:10.733357 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/7823f882-c493-42da-b8e1-57f989d125c4-cilium-config-path\") pod \"cilium-operator-5d85765b45-8wg57\" (UID: \"7823f882-c493-42da-b8e1-57f989d125c4\") " pod="kube-system/cilium-operator-5d85765b45-8wg57" Jul 6 23:58:10.733435 kubelet[1754]: I0706 23:58:10.733384 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bpln\" (UniqueName: \"kubernetes.io/projected/7823f882-c493-42da-b8e1-57f989d125c4-kube-api-access-4bpln\") pod \"cilium-operator-5d85765b45-8wg57\" (UID: \"7823f882-c493-42da-b8e1-57f989d125c4\") " pod="kube-system/cilium-operator-5d85765b45-8wg57" Jul 6 23:58:10.733709 kubelet[1754]: I0706 23:58:10.733461 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-bpf-maps\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733709 kubelet[1754]: I0706 23:58:10.733525 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-lib-modules\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733709 kubelet[1754]: I0706 23:58:10.733554 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-clustermesh-secrets\") pod \"cilium-fqp4s\" (UID: 
\"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733709 kubelet[1754]: I0706 23:58:10.733574 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-cilium-config-path\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733709 kubelet[1754]: I0706 23:58:10.733590 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-hubble-tls\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733709 kubelet[1754]: I0706 23:58:10.733610 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-etc-cni-netd\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733904 kubelet[1754]: I0706 23:58:10.733624 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-host-proc-sys-net\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733904 kubelet[1754]: I0706 23:58:10.733642 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-cilium-run\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733904 kubelet[1754]: I0706 23:58:10.733659 
1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-cilium-cgroup\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733904 kubelet[1754]: I0706 23:58:10.733674 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-xtables-lock\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733904 kubelet[1754]: I0706 23:58:10.733689 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-cilium-ipsec-secrets\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.733904 kubelet[1754]: I0706 23:58:10.733706 1754 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8ds6\" (UniqueName: \"kubernetes.io/projected/e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0-kube-api-access-l8ds6\") pod \"cilium-fqp4s\" (UID: \"e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0\") " pod="kube-system/cilium-fqp4s" Jul 6 23:58:10.874955 kubelet[1754]: E0706 23:58:10.874879 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:10.875643 containerd[1455]: time="2025-07-06T23:58:10.875588786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8wg57,Uid:7823f882-c493-42da-b8e1-57f989d125c4,Namespace:kube-system,Attempt:0,}" Jul 6 23:58:10.894294 kubelet[1754]: E0706 23:58:10.894249 1754 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:10.894995 containerd[1455]: time="2025-07-06T23:58:10.894926663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fqp4s,Uid:e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0,Namespace:kube-system,Attempt:0,}" Jul 6 23:58:10.897973 containerd[1455]: time="2025-07-06T23:58:10.897196693Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:10.897973 containerd[1455]: time="2025-07-06T23:58:10.897941254Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:10.897973 containerd[1455]: time="2025-07-06T23:58:10.897954940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:10.898097 containerd[1455]: time="2025-07-06T23:58:10.898062953Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:10.921300 containerd[1455]: time="2025-07-06T23:58:10.921085851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 6 23:58:10.921300 containerd[1455]: time="2025-07-06T23:58:10.921165892Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 6 23:58:10.921300 containerd[1455]: time="2025-07-06T23:58:10.921186390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:10.921491 containerd[1455]: time="2025-07-06T23:58:10.921375225Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 6 23:58:10.921782 systemd[1]: Started cri-containerd-a13a690ae2ae38d16a84f29d8ab442a196206833a5e809b55273596ff1877045.scope - libcontainer container a13a690ae2ae38d16a84f29d8ab442a196206833a5e809b55273596ff1877045. Jul 6 23:58:10.948566 systemd[1]: Started cri-containerd-fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77.scope - libcontainer container fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77. Jul 6 23:58:10.962241 kubelet[1754]: E0706 23:58:10.962191 1754 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 6 23:58:10.975504 containerd[1455]: time="2025-07-06T23:58:10.975408207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fqp4s,Uid:e3dd78a3-c6b0-4ff3-8a07-010d35dee9b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\"" Jul 6 23:58:10.976248 kubelet[1754]: E0706 23:58:10.976146 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:10.980695 containerd[1455]: time="2025-07-06T23:58:10.980647673Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 6 23:58:10.985629 containerd[1455]: time="2025-07-06T23:58:10.985498187Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-8wg57,Uid:7823f882-c493-42da-b8e1-57f989d125c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"a13a690ae2ae38d16a84f29d8ab442a196206833a5e809b55273596ff1877045\"" Jul 6 23:58:10.986340 kubelet[1754]: E0706 23:58:10.986296 1754 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:10.987540 containerd[1455]: time="2025-07-06T23:58:10.987502939Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jul 6 23:58:11.000047 containerd[1455]: time="2025-07-06T23:58:10.999979589Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c354daeab331a0ea7688b228d4ed54f0bb7e35021908ccae37131e7b50e029ed\"" Jul 6 23:58:11.000688 containerd[1455]: time="2025-07-06T23:58:11.000649489Z" level=info msg="StartContainer for \"c354daeab331a0ea7688b228d4ed54f0bb7e35021908ccae37131e7b50e029ed\"" Jul 6 23:58:11.029575 systemd[1]: Started cri-containerd-c354daeab331a0ea7688b228d4ed54f0bb7e35021908ccae37131e7b50e029ed.scope - libcontainer container c354daeab331a0ea7688b228d4ed54f0bb7e35021908ccae37131e7b50e029ed. Jul 6 23:58:11.057599 containerd[1455]: time="2025-07-06T23:58:11.057550664Z" level=info msg="StartContainer for \"c354daeab331a0ea7688b228d4ed54f0bb7e35021908ccae37131e7b50e029ed\" returns successfully" Jul 6 23:58:11.067554 systemd[1]: cri-containerd-c354daeab331a0ea7688b228d4ed54f0bb7e35021908ccae37131e7b50e029ed.scope: Deactivated successfully. 
Jul 6 23:58:11.098850 containerd[1455]: time="2025-07-06T23:58:11.098764335Z" level=info msg="shim disconnected" id=c354daeab331a0ea7688b228d4ed54f0bb7e35021908ccae37131e7b50e029ed namespace=k8s.io Jul 6 23:58:11.098850 containerd[1455]: time="2025-07-06T23:58:11.098840759Z" level=warning msg="cleaning up after shim disconnected" id=c354daeab331a0ea7688b228d4ed54f0bb7e35021908ccae37131e7b50e029ed namespace=k8s.io Jul 6 23:58:11.098850 containerd[1455]: time="2025-07-06T23:58:11.098850507Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:11.154301 kubelet[1754]: E0706 23:58:11.154261 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:11.156341 containerd[1455]: time="2025-07-06T23:58:11.156271510Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 6 23:58:11.170951 containerd[1455]: time="2025-07-06T23:58:11.170884393Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"54b4555c27e29e16dfce6b414e36c4324f9d85aa3136572846203ad6c8f687a3\"" Jul 6 23:58:11.171687 containerd[1455]: time="2025-07-06T23:58:11.171649011Z" level=info msg="StartContainer for \"54b4555c27e29e16dfce6b414e36c4324f9d85aa3136572846203ad6c8f687a3\"" Jul 6 23:58:11.204469 systemd[1]: Started cri-containerd-54b4555c27e29e16dfce6b414e36c4324f9d85aa3136572846203ad6c8f687a3.scope - libcontainer container 54b4555c27e29e16dfce6b414e36c4324f9d85aa3136572846203ad6c8f687a3. 
Jul 6 23:58:11.235284 containerd[1455]: time="2025-07-06T23:58:11.235226062Z" level=info msg="StartContainer for \"54b4555c27e29e16dfce6b414e36c4324f9d85aa3136572846203ad6c8f687a3\" returns successfully" Jul 6 23:58:11.241741 systemd[1]: cri-containerd-54b4555c27e29e16dfce6b414e36c4324f9d85aa3136572846203ad6c8f687a3.scope: Deactivated successfully. Jul 6 23:58:11.267574 containerd[1455]: time="2025-07-06T23:58:11.267484662Z" level=info msg="shim disconnected" id=54b4555c27e29e16dfce6b414e36c4324f9d85aa3136572846203ad6c8f687a3 namespace=k8s.io Jul 6 23:58:11.267574 containerd[1455]: time="2025-07-06T23:58:11.267552008Z" level=warning msg="cleaning up after shim disconnected" id=54b4555c27e29e16dfce6b414e36c4324f9d85aa3136572846203ad6c8f687a3 namespace=k8s.io Jul 6 23:58:11.267574 containerd[1455]: time="2025-07-06T23:58:11.267563560Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:11.458453 kubelet[1754]: E0706 23:58:11.458375 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:12.160176 kubelet[1754]: E0706 23:58:12.160137 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:12.162163 containerd[1455]: time="2025-07-06T23:58:12.162113052Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jul 6 23:58:12.236275 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2321670050.mount: Deactivated successfully. 
Jul 6 23:58:12.268846 containerd[1455]: time="2025-07-06T23:58:12.268794492Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"b1bc53d0f3f441ae7e4f59912a1b0c09d2eb16228d9afea7bb9e9a5b3f000ede\"" Jul 6 23:58:12.269547 containerd[1455]: time="2025-07-06T23:58:12.269491803Z" level=info msg="StartContainer for \"b1bc53d0f3f441ae7e4f59912a1b0c09d2eb16228d9afea7bb9e9a5b3f000ede\"" Jul 6 23:58:12.298696 systemd[1]: Started cri-containerd-b1bc53d0f3f441ae7e4f59912a1b0c09d2eb16228d9afea7bb9e9a5b3f000ede.scope - libcontainer container b1bc53d0f3f441ae7e4f59912a1b0c09d2eb16228d9afea7bb9e9a5b3f000ede. Jul 6 23:58:12.330267 containerd[1455]: time="2025-07-06T23:58:12.330205386Z" level=info msg="StartContainer for \"b1bc53d0f3f441ae7e4f59912a1b0c09d2eb16228d9afea7bb9e9a5b3f000ede\" returns successfully" Jul 6 23:58:12.330808 systemd[1]: cri-containerd-b1bc53d0f3f441ae7e4f59912a1b0c09d2eb16228d9afea7bb9e9a5b3f000ede.scope: Deactivated successfully. 
Jul 6 23:58:12.424652 containerd[1455]: time="2025-07-06T23:58:12.424301923Z" level=info msg="shim disconnected" id=b1bc53d0f3f441ae7e4f59912a1b0c09d2eb16228d9afea7bb9e9a5b3f000ede namespace=k8s.io Jul 6 23:58:12.424652 containerd[1455]: time="2025-07-06T23:58:12.424390438Z" level=warning msg="cleaning up after shim disconnected" id=b1bc53d0f3f441ae7e4f59912a1b0c09d2eb16228d9afea7bb9e9a5b3f000ede namespace=k8s.io Jul 6 23:58:12.424652 containerd[1455]: time="2025-07-06T23:58:12.424399505Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:12.459147 kubelet[1754]: E0706 23:58:12.459097 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:12.551658 containerd[1455]: time="2025-07-06T23:58:12.551583899Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:12.552258 containerd[1455]: time="2025-07-06T23:58:12.552215065Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jul 6 23:58:12.553411 containerd[1455]: time="2025-07-06T23:58:12.553360609Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 6 23:58:12.554732 containerd[1455]: time="2025-07-06T23:58:12.554696241Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 
1.567151243s" Jul 6 23:58:12.554792 containerd[1455]: time="2025-07-06T23:58:12.554732710Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jul 6 23:58:12.557037 containerd[1455]: time="2025-07-06T23:58:12.557012897Z" level=info msg="CreateContainer within sandbox \"a13a690ae2ae38d16a84f29d8ab442a196206833a5e809b55273596ff1877045\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jul 6 23:58:12.569259 containerd[1455]: time="2025-07-06T23:58:12.569193942Z" level=info msg="CreateContainer within sandbox \"a13a690ae2ae38d16a84f29d8ab442a196206833a5e809b55273596ff1877045\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"e0ee33816becce233b631aba2ce724e2da781a00f009dacf54176c958a45a877\"" Jul 6 23:58:12.569785 containerd[1455]: time="2025-07-06T23:58:12.569735640Z" level=info msg="StartContainer for \"e0ee33816becce233b631aba2ce724e2da781a00f009dacf54176c958a45a877\"" Jul 6 23:58:12.605461 systemd[1]: Started cri-containerd-e0ee33816becce233b631aba2ce724e2da781a00f009dacf54176c958a45a877.scope - libcontainer container e0ee33816becce233b631aba2ce724e2da781a00f009dacf54176c958a45a877. 
Jul 6 23:58:12.633288 containerd[1455]: time="2025-07-06T23:58:12.633244110Z" level=info msg="StartContainer for \"e0ee33816becce233b631aba2ce724e2da781a00f009dacf54176c958a45a877\" returns successfully" Jul 6 23:58:13.164191 kubelet[1754]: E0706 23:58:13.164148 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:13.166043 kubelet[1754]: E0706 23:58:13.166013 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:13.166259 containerd[1455]: time="2025-07-06T23:58:13.166216721Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jul 6 23:58:13.180494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202377752.mount: Deactivated successfully. 
Jul 6 23:58:13.181833 containerd[1455]: time="2025-07-06T23:58:13.181791232Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"0455c77c33b662656940bf0f71558873acd56a2998bf573bac96501e2eb6fee5\"" Jul 6 23:58:13.182271 containerd[1455]: time="2025-07-06T23:58:13.182226771Z" level=info msg="StartContainer for \"0455c77c33b662656940bf0f71558873acd56a2998bf573bac96501e2eb6fee5\"" Jul 6 23:58:13.187469 kubelet[1754]: I0706 23:58:13.187413 1754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-8wg57" podStartSLOduration=1.618834192 podStartE2EDuration="3.187395658s" podCreationTimestamp="2025-07-06 23:58:10 +0000 UTC" firstStartedPulling="2025-07-06 23:58:10.98698307 +0000 UTC m=+56.488961561" lastFinishedPulling="2025-07-06 23:58:12.555544536 +0000 UTC m=+58.057523027" observedRunningTime="2025-07-06 23:58:13.187328072 +0000 UTC m=+58.689306563" watchObservedRunningTime="2025-07-06 23:58:13.187395658 +0000 UTC m=+58.689374150" Jul 6 23:58:13.216461 systemd[1]: Started cri-containerd-0455c77c33b662656940bf0f71558873acd56a2998bf573bac96501e2eb6fee5.scope - libcontainer container 0455c77c33b662656940bf0f71558873acd56a2998bf573bac96501e2eb6fee5. Jul 6 23:58:13.239497 systemd[1]: cri-containerd-0455c77c33b662656940bf0f71558873acd56a2998bf573bac96501e2eb6fee5.scope: Deactivated successfully. 
Jul 6 23:58:13.240942 containerd[1455]: time="2025-07-06T23:58:13.240913013Z" level=info msg="StartContainer for \"0455c77c33b662656940bf0f71558873acd56a2998bf573bac96501e2eb6fee5\" returns successfully" Jul 6 23:58:13.264463 containerd[1455]: time="2025-07-06T23:58:13.264401953Z" level=info msg="shim disconnected" id=0455c77c33b662656940bf0f71558873acd56a2998bf573bac96501e2eb6fee5 namespace=k8s.io Jul 6 23:58:13.264463 containerd[1455]: time="2025-07-06T23:58:13.264447358Z" level=warning msg="cleaning up after shim disconnected" id=0455c77c33b662656940bf0f71558873acd56a2998bf573bac96501e2eb6fee5 namespace=k8s.io Jul 6 23:58:13.264463 containerd[1455]: time="2025-07-06T23:58:13.264457126Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 6 23:58:13.459649 kubelet[1754]: E0706 23:58:13.459466 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:13.840580 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0455c77c33b662656940bf0f71558873acd56a2998bf573bac96501e2eb6fee5-rootfs.mount: Deactivated successfully. Jul 6 23:58:14.170614 kubelet[1754]: E0706 23:58:14.170574 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:14.170614 kubelet[1754]: E0706 23:58:14.170574 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:14.172892 containerd[1455]: time="2025-07-06T23:58:14.172830284Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jul 6 23:58:14.191498 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2744803981.mount: Deactivated successfully. 
Jul 6 23:58:14.191937 containerd[1455]: time="2025-07-06T23:58:14.191766077Z" level=info msg="CreateContainer within sandbox \"fc3c33f30155e34d1f29390010bc1e75080f4e52cf13a4864de4158030373c77\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"99d0afbc2175b875c9d163cf32d38d2522a34527657bab91d3d06c8ff13f3042\"" Jul 6 23:58:14.192672 containerd[1455]: time="2025-07-06T23:58:14.192627627Z" level=info msg="StartContainer for \"99d0afbc2175b875c9d163cf32d38d2522a34527657bab91d3d06c8ff13f3042\"" Jul 6 23:58:14.227538 systemd[1]: Started cri-containerd-99d0afbc2175b875c9d163cf32d38d2522a34527657bab91d3d06c8ff13f3042.scope - libcontainer container 99d0afbc2175b875c9d163cf32d38d2522a34527657bab91d3d06c8ff13f3042. Jul 6 23:58:14.260052 containerd[1455]: time="2025-07-06T23:58:14.260002293Z" level=info msg="StartContainer for \"99d0afbc2175b875c9d163cf32d38d2522a34527657bab91d3d06c8ff13f3042\" returns successfully" Jul 6 23:58:14.459836 kubelet[1754]: E0706 23:58:14.459678 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:14.663357 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Jul 6 23:58:15.048552 kubelet[1754]: E0706 23:58:15.048465 1754 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:15.065776 containerd[1455]: time="2025-07-06T23:58:15.065725967Z" level=info msg="StopPodSandbox for \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\"" Jul 6 23:58:15.065954 containerd[1455]: time="2025-07-06T23:58:15.065830864Z" level=info msg="TearDown network for sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" successfully" Jul 6 23:58:15.065954 containerd[1455]: time="2025-07-06T23:58:15.065841213Z" level=info msg="StopPodSandbox for \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" returns successfully" Jul 6 
23:58:15.066286 containerd[1455]: time="2025-07-06T23:58:15.066241676Z" level=info msg="RemovePodSandbox for \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\"" Jul 6 23:58:15.066286 containerd[1455]: time="2025-07-06T23:58:15.066281150Z" level=info msg="Forcibly stopping sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\"" Jul 6 23:58:15.066390 containerd[1455]: time="2025-07-06T23:58:15.066367774Z" level=info msg="TearDown network for sandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" successfully" Jul 6 23:58:15.072752 containerd[1455]: time="2025-07-06T23:58:15.072705234Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jul 6 23:58:15.072838 containerd[1455]: time="2025-07-06T23:58:15.072765046Z" level=info msg="RemovePodSandbox \"35aac45c480286aaa24878f69547eeab4a899b5845aeb967deb1f1cefd08b7ae\" returns successfully" Jul 6 23:58:15.175621 kubelet[1754]: E0706 23:58:15.175589 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:15.190497 kubelet[1754]: I0706 23:58:15.190430 1754 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fqp4s" podStartSLOduration=5.190410642 podStartE2EDuration="5.190410642s" podCreationTimestamp="2025-07-06 23:58:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-06 23:58:15.190263615 +0000 UTC m=+60.692242106" watchObservedRunningTime="2025-07-06 23:58:15.190410642 +0000 UTC m=+60.692389133" Jul 6 23:58:15.460119 kubelet[1754]: E0706 23:58:15.460041 1754 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:16.460947 kubelet[1754]: E0706 23:58:16.460895 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:16.895642 kubelet[1754]: E0706 23:58:16.895497 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:17.461802 kubelet[1754]: E0706 23:58:17.461739 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:17.830453 systemd-networkd[1383]: lxc_health: Link UP Jul 6 23:58:17.839220 systemd-networkd[1383]: lxc_health: Gained carrier Jul 6 23:58:18.462955 kubelet[1754]: E0706 23:58:18.462882 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:18.897098 kubelet[1754]: E0706 23:58:18.896778 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:19.183540 kubelet[1754]: E0706 23:58:19.183509 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:19.463447 kubelet[1754]: E0706 23:58:19.463260 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:19.488599 systemd-networkd[1383]: lxc_health: Gained IPv6LL Jul 6 23:58:20.185946 kubelet[1754]: E0706 23:58:20.185896 1754 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 6 23:58:20.463827 kubelet[1754]: E0706 
23:58:20.463641 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:21.464343 kubelet[1754]: E0706 23:58:21.464268 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:22.464483 kubelet[1754]: E0706 23:58:22.464397 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:23.465445 kubelet[1754]: E0706 23:58:23.465384 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:24.466096 kubelet[1754]: E0706 23:58:24.466018 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:25.466575 kubelet[1754]: E0706 23:58:25.466522 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:26.467216 kubelet[1754]: E0706 23:58:26.467148 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jul 6 23:58:27.467545 kubelet[1754]: E0706 23:58:27.467488 1754 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"