Jun 20 19:44:53.816251 kernel: Linux version 6.12.34-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Fri Jun 20 17:06:39 -00 2025
Jun 20 19:44:53.816272 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:44:53.816283 kernel: BIOS-provided physical RAM map:
Jun 20 19:44:53.816289 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Jun 20 19:44:53.816296 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jun 20 19:44:53.816302 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Jun 20 19:44:53.816309 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jun 20 19:44:53.816316 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Jun 20 19:44:53.816323 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jun 20 19:44:53.816329 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jun 20 19:44:53.816336 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jun 20 19:44:53.816344 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jun 20 19:44:53.816350 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jun 20 19:44:53.816357 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jun 20 19:44:53.816365 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jun 20 19:44:53.816372 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jun 20 19:44:53.816380 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jun 20 19:44:53.816387 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 20 19:44:53.816394 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 20 19:44:53.816401 kernel: NX (Execute Disable) protection: active
Jun 20 19:44:53.816408 kernel: APIC: Static calls initialized
Jun 20 19:44:53.816415 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Jun 20 19:44:53.816422 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Jun 20 19:44:53.816429 kernel: extended physical RAM map:
Jun 20 19:44:53.816435 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Jun 20 19:44:53.816442 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jun 20 19:44:53.816450 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Jun 20 19:44:53.816458 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jun 20 19:44:53.816465 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Jun 20 19:44:53.816472 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Jun 20 19:44:53.816479 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Jun 20 19:44:53.816486 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Jun 20 19:44:53.816492 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Jun 20 19:44:53.816499 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jun 20 19:44:53.816506 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jun 20 19:44:53.816513 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jun 20 19:44:53.816520 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jun 20 19:44:53.816527 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jun 20 19:44:53.816536 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jun 20 19:44:53.816543 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jun 20 19:44:53.816553 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jun 20 19:44:53.816560 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jun 20 19:44:53.816567 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jun 20 19:44:53.816574 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jun 20 19:44:53.816583 kernel: efi: EFI v2.7 by EDK II
Jun 20 19:44:53.816591 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Jun 20 19:44:53.816598 kernel: random: crng init done
Jun 20 19:44:53.816605 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jun 20 19:44:53.816612 kernel: secureboot: Secure boot enabled
Jun 20 19:44:53.816620 kernel: SMBIOS 2.8 present.
Jun 20 19:44:53.816627 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jun 20 19:44:53.816634 kernel: DMI: Memory slots populated: 1/1
Jun 20 19:44:53.816641 kernel: Hypervisor detected: KVM
Jun 20 19:44:53.816648 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jun 20 19:44:53.816655 kernel: kvm-clock: using sched offset of 4786998476 cycles
Jun 20 19:44:53.816665 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jun 20 19:44:53.816673 kernel: tsc: Detected 2794.746 MHz processor
Jun 20 19:44:53.816681 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jun 20 19:44:53.816688 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jun 20 19:44:53.816695 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Jun 20 19:44:53.816703 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jun 20 19:44:53.816710 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jun 20 19:44:53.816718 kernel: Using GB pages for direct mapping
Jun 20 19:44:53.816725 kernel: ACPI: Early table checksum verification disabled
Jun 20 19:44:53.816734 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Jun 20 19:44:53.816742 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jun 20 19:44:53.816757 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:44:53.816764 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:44:53.816771 kernel: ACPI: FACS 0x000000009BBDD000 000040
Jun 20 19:44:53.816779 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:44:53.816786 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:44:53.816793 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:44:53.816801 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jun 20 19:44:53.816821 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jun 20 19:44:53.816829 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Jun 20 19:44:53.816836 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Jun 20 19:44:53.816844 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Jun 20 19:44:53.816851 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Jun 20 19:44:53.816858 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Jun 20 19:44:53.816866 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Jun 20 19:44:53.816873 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Jun 20 19:44:53.816880 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Jun 20 19:44:53.816890 kernel: No NUMA configuration found
Jun 20 19:44:53.816897 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Jun 20 19:44:53.816905 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Jun 20 19:44:53.816912 kernel: Zone ranges:
Jun 20 19:44:53.816919 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jun 20 19:44:53.816927 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Jun 20 19:44:53.816934 kernel: Normal empty
Jun 20 19:44:53.816941 kernel: Device empty
Jun 20 19:44:53.816948 kernel: Movable zone start for each node
Jun 20 19:44:53.816958 kernel: Early memory node ranges
Jun 20 19:44:53.816965 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Jun 20 19:44:53.816972 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Jun 20 19:44:53.816980 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Jun 20 19:44:53.816987 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Jun 20 19:44:53.816994 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Jun 20 19:44:53.817001 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Jun 20 19:44:53.817009 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jun 20 19:44:53.817016 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Jun 20 19:44:53.817024 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jun 20 19:44:53.817033 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jun 20 19:44:53.817041 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jun 20 19:44:53.817048 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Jun 20 19:44:53.817055 kernel: ACPI: PM-Timer IO Port: 0x608
Jun 20 19:44:53.817062 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jun 20 19:44:53.817070 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jun 20 19:44:53.817077 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jun 20 19:44:53.817084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jun 20 19:44:53.817092 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jun 20 19:44:53.817102 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jun 20 19:44:53.817109 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jun 20 19:44:53.817116 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jun 20 19:44:53.817124 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jun 20 19:44:53.817131 kernel: TSC deadline timer available
Jun 20 19:44:53.817138 kernel: CPU topo: Max. logical packages: 1
Jun 20 19:44:53.817145 kernel: CPU topo: Max. logical dies: 1
Jun 20 19:44:53.817153 kernel: CPU topo: Max. dies per package: 1
Jun 20 19:44:53.817168 kernel: CPU topo: Max. threads per core: 1
Jun 20 19:44:53.817176 kernel: CPU topo: Num. cores per package: 4
Jun 20 19:44:53.817183 kernel: CPU topo: Num. threads per package: 4
Jun 20 19:44:53.817191 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jun 20 19:44:53.817200 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jun 20 19:44:53.817208 kernel: kvm-guest: KVM setup pv remote TLB flush
Jun 20 19:44:53.817215 kernel: kvm-guest: setup PV sched yield
Jun 20 19:44:53.817223 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jun 20 19:44:53.817231 kernel: Booting paravirtualized kernel on KVM
Jun 20 19:44:53.817241 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jun 20 19:44:53.817248 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jun 20 19:44:53.817256 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jun 20 19:44:53.817264 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jun 20 19:44:53.817271 kernel: pcpu-alloc: [0] 0 1 2 3
Jun 20 19:44:53.817279 kernel: kvm-guest: PV spinlocks enabled
Jun 20 19:44:53.817286 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jun 20 19:44:53.817295 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea
Jun 20 19:44:53.817305 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jun 20 19:44:53.817313 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jun 20 19:44:53.817321 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jun 20 19:44:53.817328 kernel: Fallback order for Node 0: 0
Jun 20 19:44:53.817336 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Jun 20 19:44:53.817344 kernel: Policy zone: DMA32
Jun 20 19:44:53.817351 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jun 20 19:44:53.817359 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jun 20 19:44:53.817367 kernel: ftrace: allocating 40093 entries in 157 pages
Jun 20 19:44:53.817376 kernel: ftrace: allocated 157 pages with 5 groups
Jun 20 19:44:53.817384 kernel: Dynamic Preempt: voluntary
Jun 20 19:44:53.817391 kernel: rcu: Preemptible hierarchical RCU implementation.
Jun 20 19:44:53.817400 kernel: rcu: RCU event tracing is enabled.
Jun 20 19:44:53.817408 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jun 20 19:44:53.817415 kernel: Trampoline variant of Tasks RCU enabled.
Jun 20 19:44:53.817423 kernel: Rude variant of Tasks RCU enabled.
Jun 20 19:44:53.817431 kernel: Tracing variant of Tasks RCU enabled.
Jun 20 19:44:53.817438 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jun 20 19:44:53.817446 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jun 20 19:44:53.817456 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:44:53.817464 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:44:53.817472 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jun 20 19:44:53.817480 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jun 20 19:44:53.817487 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jun 20 19:44:53.817495 kernel: Console: colour dummy device 80x25
Jun 20 19:44:53.817502 kernel: printk: legacy console [ttyS0] enabled
Jun 20 19:44:53.817510 kernel: ACPI: Core revision 20240827
Jun 20 19:44:53.817520 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jun 20 19:44:53.817528 kernel: APIC: Switch to symmetric I/O mode setup
Jun 20 19:44:53.817535 kernel: x2apic enabled
Jun 20 19:44:53.817543 kernel: APIC: Switched APIC routing to: physical x2apic
Jun 20 19:44:53.817551 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jun 20 19:44:53.817559 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jun 20 19:44:53.817566 kernel: kvm-guest: setup PV IPIs
Jun 20 19:44:53.817574 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jun 20 19:44:53.817581 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jun 20 19:44:53.817591 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794746)
Jun 20 19:44:53.817599 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jun 20 19:44:53.817607 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jun 20 19:44:53.817614 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jun 20 19:44:53.817622 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jun 20 19:44:53.817630 kernel: Spectre V2 : Mitigation: Retpolines
Jun 20 19:44:53.817637 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jun 20 19:44:53.817645 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jun 20 19:44:53.817653 kernel: RETBleed: Mitigation: untrained return thunk
Jun 20 19:44:53.817663 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jun 20 19:44:53.817670 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jun 20 19:44:53.817678 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jun 20 19:44:53.817686 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jun 20 19:44:53.817694 kernel: x86/bugs: return thunk changed
Jun 20 19:44:53.817702 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jun 20 19:44:53.817709 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jun 20 19:44:53.817717 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jun 20 19:44:53.817727 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jun 20 19:44:53.817734 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jun 20 19:44:53.817742 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jun 20 19:44:53.817756 kernel: Freeing SMP alternatives memory: 32K
Jun 20 19:44:53.817764 kernel: pid_max: default: 32768 minimum: 301
Jun 20 19:44:53.817771 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jun 20 19:44:53.817779 kernel: landlock: Up and running.
Jun 20 19:44:53.817786 kernel: SELinux: Initializing.
Jun 20 19:44:53.817794 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:44:53.817804 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jun 20 19:44:53.817822 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jun 20 19:44:53.817830 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jun 20 19:44:53.817838 kernel: ... version: 0
Jun 20 19:44:53.817845 kernel: ... bit width: 48
Jun 20 19:44:53.817853 kernel: ... generic registers: 6
Jun 20 19:44:53.817861 kernel: ... value mask: 0000ffffffffffff
Jun 20 19:44:53.817868 kernel: ... max period: 00007fffffffffff
Jun 20 19:44:53.817876 kernel: ... fixed-purpose events: 0
Jun 20 19:44:53.817883 kernel: ... event mask: 000000000000003f
Jun 20 19:44:53.817894 kernel: signal: max sigframe size: 1776
Jun 20 19:44:53.817902 kernel: rcu: Hierarchical SRCU implementation.
Jun 20 19:44:53.817910 kernel: rcu: Max phase no-delay instances is 400.
Jun 20 19:44:53.817918 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jun 20 19:44:53.817925 kernel: smp: Bringing up secondary CPUs ...
Jun 20 19:44:53.817933 kernel: smpboot: x86: Booting SMP configuration:
Jun 20 19:44:53.817940 kernel: .... node #0, CPUs: #1 #2 #3
Jun 20 19:44:53.817948 kernel: smp: Brought up 1 node, 4 CPUs
Jun 20 19:44:53.817956 kernel: smpboot: Total of 4 processors activated (22357.96 BogoMIPS)
Jun 20 19:44:53.817966 kernel: Memory: 2409216K/2552216K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54424K init, 2544K bss, 137064K reserved, 0K cma-reserved)
Jun 20 19:44:53.817974 kernel: devtmpfs: initialized
Jun 20 19:44:53.817981 kernel: x86/mm: Memory block size: 128MB
Jun 20 19:44:53.817989 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Jun 20 19:44:53.817997 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Jun 20 19:44:53.818005 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jun 20 19:44:53.818012 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jun 20 19:44:53.818020 kernel: pinctrl core: initialized pinctrl subsystem
Jun 20 19:44:53.818030 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jun 20 19:44:53.818037 kernel: audit: initializing netlink subsys (disabled)
Jun 20 19:44:53.818045 kernel: audit: type=2000 audit(1750448690.910:1): state=initialized audit_enabled=0 res=1
Jun 20 19:44:53.818053 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jun 20 19:44:53.818061 kernel: thermal_sys: Registered thermal governor 'user_space'
Jun 20 19:44:53.818068 kernel: cpuidle: using governor menu
Jun 20 19:44:53.818076 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jun 20 19:44:53.818083 kernel: dca service started, version 1.12.1
Jun 20 19:44:53.818091 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jun 20 19:44:53.818101 kernel: PCI: Using configuration type 1 for base access
Jun 20 19:44:53.818108 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jun 20 19:44:53.818116 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jun 20 19:44:53.818124 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jun 20 19:44:53.818132 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jun 20 19:44:53.818140 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jun 20 19:44:53.818147 kernel: ACPI: Added _OSI(Module Device)
Jun 20 19:44:53.818155 kernel: ACPI: Added _OSI(Processor Device)
Jun 20 19:44:53.818162 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jun 20 19:44:53.818172 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jun 20 19:44:53.818179 kernel: ACPI: Interpreter enabled
Jun 20 19:44:53.818187 kernel: ACPI: PM: (supports S0 S5)
Jun 20 19:44:53.818195 kernel: ACPI: Using IOAPIC for interrupt routing
Jun 20 19:44:53.818202 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jun 20 19:44:53.818210 kernel: PCI: Using E820 reservations for host bridge windows
Jun 20 19:44:53.818218 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jun 20 19:44:53.818225 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jun 20 19:44:53.818395 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jun 20 19:44:53.818518 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jun 20 19:44:53.818632 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jun 20 19:44:53.818642 kernel: PCI host bridge to bus 0000:00
Jun 20 19:44:53.818770 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jun 20 19:44:53.818903 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jun 20 19:44:53.819013 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jun 20 19:44:53.819121 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jun 20 19:44:53.819225 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jun 20 19:44:53.819330 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jun 20 19:44:53.819436 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jun 20 19:44:53.819572 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jun 20 19:44:53.819696 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jun 20 19:44:53.819837 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jun 20 19:44:53.819965 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jun 20 19:44:53.820078 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jun 20 19:44:53.820192 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jun 20 19:44:53.820316 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jun 20 19:44:53.820432 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jun 20 19:44:53.820547 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jun 20 19:44:53.820660 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jun 20 19:44:53.820799 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jun 20 19:44:53.820930 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jun 20 19:44:53.821046 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jun 20 19:44:53.821160 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jun 20 19:44:53.821283 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jun 20 19:44:53.821398 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jun 20 19:44:53.821517 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jun 20 19:44:53.821631 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jun 20 19:44:53.821746 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jun 20 19:44:53.821897 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jun 20 19:44:53.822013 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jun 20 19:44:53.822150 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jun 20 19:44:53.822300 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jun 20 19:44:53.822420 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jun 20 19:44:53.822541 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jun 20 19:44:53.822658 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jun 20 19:44:53.822669 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jun 20 19:44:53.822676 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jun 20 19:44:53.822684 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jun 20 19:44:53.822692 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jun 20 19:44:53.822699 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jun 20 19:44:53.822710 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jun 20 19:44:53.822718 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jun 20 19:44:53.822725 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jun 20 19:44:53.822733 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jun 20 19:44:53.822741 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jun 20 19:44:53.822757 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jun 20 19:44:53.822765 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jun 20 19:44:53.822773 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jun 20 19:44:53.822780 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jun 20 19:44:53.822790 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jun 20 19:44:53.822798 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jun 20 19:44:53.822806 kernel: iommu: Default domain type: Translated
Jun 20 19:44:53.822827 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jun 20 19:44:53.822835 kernel: efivars: Registered efivars operations
Jun 20 19:44:53.822843 kernel: PCI: Using ACPI for IRQ routing
Jun 20 19:44:53.822851 kernel: PCI: pci_cache_line_size set to 64 bytes
Jun 20 19:44:53.822858 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Jun 20 19:44:53.822866 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Jun 20 19:44:53.822876 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Jun 20 19:44:53.822883 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Jun 20 19:44:53.822891 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Jun 20 19:44:53.823008 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jun 20 19:44:53.823121 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jun 20 19:44:53.823234 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jun 20 19:44:53.823244 kernel: vgaarb: loaded
Jun 20 19:44:53.823251 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jun 20 19:44:53.823262 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jun 20 19:44:53.823270 kernel: clocksource: Switched to clocksource kvm-clock
Jun 20 19:44:53.823277 kernel: VFS: Disk quotas dquot_6.6.0
Jun 20 19:44:53.823285 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jun 20 19:44:53.823293 kernel: pnp: PnP ACPI init
Jun 20 19:44:53.823421 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jun 20 19:44:53.823432 kernel: pnp: PnP ACPI: found 6 devices
Jun 20 19:44:53.823440 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jun 20 19:44:53.823451 kernel: NET: Registered PF_INET protocol family
Jun 20 19:44:53.823459 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jun 20 19:44:53.823467 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jun 20 19:44:53.823475 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jun 20 19:44:53.823483 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jun 20 19:44:53.823490 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jun 20 19:44:53.823498 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jun 20 19:44:53.823506 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:44:53.823513 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jun 20 19:44:53.823523 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jun 20 19:44:53.823530 kernel: NET: Registered PF_XDP protocol family
Jun 20 19:44:53.823647 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jun 20 19:44:53.823789 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jun 20 19:44:53.823933 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jun 20 19:44:53.824091 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jun 20 19:44:53.824274 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jun 20 19:44:53.824381 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jun 20 19:44:53.824489 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jun 20 19:44:53.824594 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jun 20 19:44:53.824604 kernel: PCI: CLS 0 bytes, default 64
Jun 20 19:44:53.824612 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848ddd4e75, max_idle_ns: 440795346320 ns
Jun 20 19:44:53.824620 kernel: Initialise system trusted keyrings
Jun 20 19:44:53.824627 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jun 20 19:44:53.824635 kernel: Key type asymmetric registered
Jun 20 19:44:53.824643 kernel: Asymmetric key parser 'x509' registered
Jun 20 19:44:53.824651 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jun 20 19:44:53.824673 kernel: io scheduler mq-deadline registered
Jun 20 19:44:53.824683 kernel: io scheduler kyber registered
Jun 20 19:44:53.824691 kernel: io scheduler bfq registered
Jun 20 19:44:53.824699 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jun 20 19:44:53.824707 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jun 20 19:44:53.824715 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jun 20 19:44:53.824723 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jun 20 19:44:53.824731 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jun 20 19:44:53.824742 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jun 20 19:44:53.824765 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jun 20 19:44:53.824773 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jun 20 19:44:53.824781 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jun 20 19:44:53.824918 kernel: rtc_cmos 00:04: RTC can wake from S4
Jun 20 19:44:53.824931 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jun 20 19:44:53.825041 kernel: rtc_cmos 00:04: registered as rtc0
Jun 20 19:44:53.825151 kernel: rtc_cmos 00:04: setting system clock to 2025-06-20T19:44:53 UTC (1750448693)
Jun 20 19:44:53.825258 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jun 20 19:44:53.825271 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jun 20 19:44:53.825279 kernel: efifb: probing for efifb
Jun 20 19:44:53.825287 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jun 20 19:44:53.825295 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jun 20 19:44:53.825303 kernel: efifb: scrolling: redraw
Jun 20 19:44:53.825311 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jun 20 19:44:53.825319 kernel: Console: switching to colour frame buffer device 160x50
Jun 20 19:44:53.825327 kernel: fb0: EFI VGA frame buffer device
Jun 20 19:44:53.825335 kernel: pstore: Using crash dump compression: deflate
Jun 20 19:44:53.825345 kernel: pstore: Registered efi_pstore as persistent store backend
Jun 20 19:44:53.825354 kernel: NET: Registered PF_INET6 protocol family
Jun 20 19:44:53.825362 kernel: Segment Routing with IPv6
Jun 20 19:44:53.825370 kernel: In-situ OAM (IOAM) with IPv6
Jun 20 19:44:53.825378 kernel: NET: Registered PF_PACKET protocol family
Jun 20 19:44:53.825388 kernel: Key type dns_resolver registered
Jun 20 19:44:53.825396 kernel: IPI shorthand broadcast: enabled
Jun 20 19:44:53.825404 kernel: sched_clock: Marking stable (2772003368, 140056533)->(2927433093, -15373192)
Jun 20 19:44:53.825412 kernel: registered taskstats version 1
Jun 20 19:44:53.825420 kernel: Loading compiled-in X.509 certificates
Jun 20 19:44:53.825428 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.34-flatcar: 9a085d119111c823c157514215d0379e3a2f1b94'
Jun 20 19:44:53.825436 kernel: Demotion targets for Node 0: null
Jun 20 19:44:53.825444 kernel: Key type .fscrypt registered
Jun 20 19:44:53.825452 kernel: Key type fscrypt-provisioning registered
Jun 20 19:44:53.825461 kernel: ima: No TPM chip found, activating TPM-bypass!
Jun 20 19:44:53.825469 kernel: ima: Allocated hash algorithm: sha1
Jun 20 19:44:53.825477 kernel: ima: No architecture policies found
Jun 20 19:44:53.825486 kernel: clk: Disabling unused clocks
Jun 20 19:44:53.825493 kernel: Warning: unable to open an initial console.
Jun 20 19:44:53.825502 kernel: Freeing unused kernel image (initmem) memory: 54424K
Jun 20 19:44:53.825510 kernel: Write protecting the kernel read-only data: 24576k
Jun 20 19:44:53.825518 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jun 20 19:44:53.825526 kernel: Run /init as init process
Jun 20 19:44:53.825536 kernel: with arguments:
Jun 20 19:44:53.825544 kernel: /init
Jun 20 19:44:53.825552 kernel: with environment:
Jun 20 19:44:53.825560 kernel: HOME=/
Jun 20 19:44:53.825568 kernel: TERM=linux
Jun 20 19:44:53.825575 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jun 20 19:44:53.825584 systemd[1]: Successfully made /usr/ read-only.
Jun 20 19:44:53.825596 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jun 20 19:44:53.825607 systemd[1]: Detected virtualization kvm.
Jun 20 19:44:53.825615 systemd[1]: Detected architecture x86-64.
Jun 20 19:44:53.825623 systemd[1]: Running in initrd.
Jun 20 19:44:53.825632 systemd[1]: No hostname configured, using default hostname.
Jun 20 19:44:53.825641 systemd[1]: Hostname set to .
Jun 20 19:44:53.825649 systemd[1]: Initializing machine ID from VM UUID.
Jun 20 19:44:53.825657 systemd[1]: Queued start job for default target initrd.target.
Jun 20 19:44:53.825666 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jun 20 19:44:53.825676 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jun 20 19:44:53.825686 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jun 20 19:44:53.825695 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:44:53.825703 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jun 20 19:44:53.825713 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jun 20 19:44:53.825722 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jun 20 19:44:53.825733 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jun 20 19:44:53.825742 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:44:53.825769 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:44:53.825777 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:44:53.825793 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:44:53.825802 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:44:53.825831 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:44:53.825840 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:44:53.825849 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:44:53.825860 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jun 20 19:44:53.825869 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jun 20 19:44:53.825878 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:44:53.825887 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:44:53.825895 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:44:53.825904 systemd[1]: Reached target sockets.target - Socket Units. 
Jun 20 19:44:53.825916 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jun 20 19:44:53.825925 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:44:53.825936 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jun 20 19:44:53.825945 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jun 20 19:44:53.825954 systemd[1]: Starting systemd-fsck-usr.service... Jun 20 19:44:53.825962 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:44:53.825971 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:44:53.825979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:44:53.825988 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jun 20 19:44:53.825999 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:44:53.826008 systemd[1]: Finished systemd-fsck-usr.service. Jun 20 19:44:53.826038 systemd-journald[219]: Collecting audit messages is disabled. Jun 20 19:44:53.826060 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jun 20 19:44:53.826069 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:44:53.826078 systemd-journald[219]: Journal started Jun 20 19:44:53.826097 systemd-journald[219]: Runtime Journal (/run/log/journal/dcb38c1ae15247709ee053b66712831b) is 6M, max 48.2M, 42.2M free. Jun 20 19:44:53.824271 systemd-modules-load[220]: Inserted module 'overlay' Jun 20 19:44:53.831477 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:44:53.835130 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Jun 20 19:44:53.838107 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jun 20 19:44:53.853837 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jun 20 19:44:53.855140 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:44:53.857204 kernel: Bridge firewalling registered Jun 20 19:44:53.855471 systemd-modules-load[220]: Inserted module 'br_netfilter' Jun 20 19:44:53.859479 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:44:53.862440 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:44:53.865989 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:44:53.869980 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:44:53.874621 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:44:53.876694 systemd-tmpfiles[243]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jun 20 19:44:53.879794 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jun 20 19:44:53.882052 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:44:53.883041 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:44:53.896200 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jun 20 19:44:53.909780 dracut-cmdline[260]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=b7bb3b1ced9c5d47870a8b74c6c30075189c27e25d75251cfa7215e4bbff75ea Jun 20 19:44:53.945653 systemd-resolved[263]: Positive Trust Anchors: Jun 20 19:44:53.945668 systemd-resolved[263]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:44:53.945698 systemd-resolved[263]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:44:53.948150 systemd-resolved[263]: Defaulting to hostname 'linux'. Jun 20 19:44:53.954599 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:44:53.956011 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:44:54.027850 kernel: SCSI subsystem initialized Jun 20 19:44:54.039839 kernel: Loading iSCSI transport class v2.0-870. Jun 20 19:44:54.049840 kernel: iscsi: registered transport (tcp) Jun 20 19:44:54.071834 kernel: iscsi: registered transport (qla4xxx) Jun 20 19:44:54.071864 kernel: QLogic iSCSI HBA Driver Jun 20 19:44:54.092690 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jun 20 19:44:54.123090 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:44:54.124470 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:44:54.181511 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jun 20 19:44:54.183220 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jun 20 19:44:54.242844 kernel: raid6: avx2x4 gen() 30479 MB/s Jun 20 19:44:54.259835 kernel: raid6: avx2x2 gen() 31230 MB/s Jun 20 19:44:54.276919 kernel: raid6: avx2x1 gen() 26074 MB/s Jun 20 19:44:54.276942 kernel: raid6: using algorithm avx2x2 gen() 31230 MB/s Jun 20 19:44:54.294943 kernel: raid6: .... xor() 19915 MB/s, rmw enabled Jun 20 19:44:54.294963 kernel: raid6: using avx2x2 recovery algorithm Jun 20 19:44:54.314842 kernel: xor: automatically using best checksumming function avx Jun 20 19:44:54.478869 kernel: Btrfs loaded, zoned=no, fsverity=no Jun 20 19:44:54.487241 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:44:54.490025 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:44:54.520518 systemd-udevd[472]: Using default interface naming scheme 'v255'. Jun 20 19:44:54.526799 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:44:54.527891 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jun 20 19:44:54.549891 dracut-pre-trigger[474]: rd.md=0: removing MD RAID activation Jun 20 19:44:54.580104 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:44:54.583699 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:44:54.663794 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:44:54.667918 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
Jun 20 19:44:54.698857 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jun 20 19:44:54.700910 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB) Jun 20 19:44:54.708903 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jun 20 19:44:54.708935 kernel: GPT:9289727 != 19775487 Jun 20 19:44:54.708959 kernel: GPT:Alternate GPT header not at the end of the disk. Jun 20 19:44:54.708979 kernel: GPT:9289727 != 19775487 Jun 20 19:44:54.709011 kernel: GPT: Use GNU Parted to correct GPT errors. Jun 20 19:44:54.709034 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:44:54.724850 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Jun 20 19:44:54.726834 kernel: libata version 3.00 loaded. Jun 20 19:44:54.729846 kernel: cryptd: max_cpu_qlen set to 1000 Jun 20 19:44:54.734706 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:44:54.735451 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:44:54.739998 kernel: ahci 0000:00:1f.2: version 3.0 Jun 20 19:44:54.740632 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Jun 20 19:44:54.741568 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:44:54.744461 kernel: AES CTR mode by8 optimization enabled Jun 20 19:44:54.744475 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Jun 20 19:44:54.744630 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Jun 20 19:44:54.745932 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Jun 20 19:44:54.752535 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jun 20 19:44:54.758867 kernel: scsi host0: ahci Jun 20 19:44:54.773833 kernel: scsi host1: ahci Jun 20 19:44:54.775852 kernel: scsi host2: ahci Jun 20 19:44:54.776831 kernel: scsi host3: ahci Jun 20 19:44:54.781901 kernel: scsi host4: ahci Jun 20 19:44:54.785485 kernel: scsi host5: ahci Jun 20 19:44:54.785709 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0 Jun 20 19:44:54.785731 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0 Jun 20 19:44:54.785742 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0 Jun 20 19:44:54.785753 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0 Jun 20 19:44:54.785763 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0 Jun 20 19:44:54.788936 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0 Jun 20 19:44:54.795107 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jun 20 19:44:54.803729 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jun 20 19:44:54.803998 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jun 20 19:44:54.820565 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 20 19:44:54.829774 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jun 20 19:44:54.830866 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jun 20 19:44:54.832558 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:44:54.832608 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 19:44:54.836590 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:44:54.849578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:44:54.851024 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:44:54.860024 disk-uuid[635]: Primary Header is updated. Jun 20 19:44:54.860024 disk-uuid[635]: Secondary Entries is updated. Jun 20 19:44:54.860024 disk-uuid[635]: Secondary Header is updated. Jun 20 19:44:54.863860 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:44:54.868856 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:44:54.873587 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:44:55.094841 kernel: ata4: SATA link down (SStatus 0 SControl 300) Jun 20 19:44:55.094892 kernel: ata1: SATA link down (SStatus 0 SControl 300) Jun 20 19:44:55.095850 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300) Jun 20 19:44:55.095945 kernel: ata5: SATA link down (SStatus 0 SControl 300) Jun 20 19:44:55.096835 kernel: ata6: SATA link down (SStatus 0 SControl 300) Jun 20 19:44:55.097851 kernel: ata2: SATA link down (SStatus 0 SControl 300) Jun 20 19:44:55.098851 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100 Jun 20 19:44:55.098872 kernel: ata3.00: applying bridge limits Jun 20 19:44:55.099838 kernel: ata3.00: configured for UDMA/100 Jun 20 19:44:55.101833 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5 Jun 20 19:44:55.154840 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray Jun 20 19:44:55.155109 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jun 20 19:44:55.173849 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jun 20 19:44:55.598196 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jun 20 19:44:55.599983 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. 
Jun 20 19:44:55.601723 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:44:55.602955 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:44:55.605955 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jun 20 19:44:55.639228 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:44:55.890843 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jun 20 19:44:55.891048 disk-uuid[639]: The operation has completed successfully. Jun 20 19:44:55.913891 systemd[1]: disk-uuid.service: Deactivated successfully. Jun 20 19:44:55.914006 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jun 20 19:44:55.963196 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jun 20 19:44:55.979220 sh[670]: Success Jun 20 19:44:55.997632 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jun 20 19:44:55.997708 kernel: device-mapper: uevent: version 1.0.3 Jun 20 19:44:55.997721 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jun 20 19:44:56.007845 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Jun 20 19:44:56.038174 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jun 20 19:44:56.041221 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jun 20 19:44:56.057940 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Jun 20 19:44:56.066417 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Jun 20 19:44:56.066464 kernel: BTRFS: device fsid 048b924a-9f97-43f5-98d6-0fff18874966 devid 1 transid 41 /dev/mapper/usr (253:0) scanned by mount (682) Jun 20 19:44:56.066832 kernel: BTRFS info (device dm-0): first mount of filesystem 048b924a-9f97-43f5-98d6-0fff18874966 Jun 20 19:44:56.068649 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:44:56.068669 kernel: BTRFS info (device dm-0): using free-space-tree Jun 20 19:44:56.073139 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jun 20 19:44:56.073761 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:44:56.075258 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jun 20 19:44:56.077053 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jun 20 19:44:56.078156 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jun 20 19:44:56.107855 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (717) Jun 20 19:44:56.109918 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:44:56.109978 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:44:56.109990 kernel: BTRFS info (device vda6): using free-space-tree Jun 20 19:44:56.117856 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:44:56.118986 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jun 20 19:44:56.121998 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jun 20 19:44:56.195733 ignition[760]: Ignition 2.21.0 Jun 20 19:44:56.196322 ignition[760]: Stage: fetch-offline Jun 20 19:44:56.196353 ignition[760]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:44:56.196362 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 20 19:44:56.196436 ignition[760]: parsed url from cmdline: "" Jun 20 19:44:56.196440 ignition[760]: no config URL provided Jun 20 19:44:56.196445 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Jun 20 19:44:56.196452 ignition[760]: no config at "/usr/lib/ignition/user.ign" Jun 20 19:44:56.196473 ignition[760]: op(1): [started] loading QEMU firmware config module Jun 20 19:44:56.196478 ignition[760]: op(1): executing: "modprobe" "qemu_fw_cfg" Jun 20 19:44:56.203908 ignition[760]: op(1): [finished] loading QEMU firmware config module Jun 20 19:44:56.213607 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:44:56.218540 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:44:56.246819 ignition[760]: parsing config with SHA512: b0101ef9adeda2dc2bd3e6f45a54399238822c98af4bec7b2ed7bad10f2a26c4332d5e6ca725c0aa862f596e3ab1a8d5487e4c9f8c4cd8efbe64101e81a7462c Jun 20 19:44:56.252497 unknown[760]: fetched base config from "system" Jun 20 19:44:56.252510 unknown[760]: fetched user config from "qemu" Jun 20 19:44:56.252873 ignition[760]: fetch-offline: fetch-offline passed Jun 20 19:44:56.252922 ignition[760]: Ignition finished successfully Jun 20 19:44:56.258589 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:44:56.259619 systemd-networkd[859]: lo: Link UP Jun 20 19:44:56.259623 systemd-networkd[859]: lo: Gained carrier Jun 20 19:44:56.261108 systemd-networkd[859]: Enumeration completed Jun 20 19:44:56.261239 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jun 20 19:44:56.261442 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:44:56.261446 systemd-networkd[859]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:44:56.261751 systemd[1]: Reached target network.target - Network. Jun 20 19:44:56.263240 systemd-networkd[859]: eth0: Link UP Jun 20 19:44:56.263244 systemd-networkd[859]: eth0: Gained carrier Jun 20 19:44:56.263252 systemd-networkd[859]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:44:56.264248 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jun 20 19:44:56.265884 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jun 20 19:44:56.281877 systemd-networkd[859]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 19:44:56.304850 ignition[863]: Ignition 2.21.0 Jun 20 19:44:56.304863 ignition[863]: Stage: kargs Jun 20 19:44:56.305008 ignition[863]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:44:56.305018 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 20 19:44:56.306772 ignition[863]: kargs: kargs passed Jun 20 19:44:56.306860 ignition[863]: Ignition finished successfully Jun 20 19:44:56.311123 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jun 20 19:44:56.313289 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jun 20 19:44:56.340853 ignition[872]: Ignition 2.21.0 Jun 20 19:44:56.340864 ignition[872]: Stage: disks Jun 20 19:44:56.341000 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jun 20 19:44:56.341011 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 20 19:44:56.341823 ignition[872]: disks: disks passed Jun 20 19:44:56.341866 ignition[872]: Ignition finished successfully Jun 20 19:44:56.346047 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jun 20 19:44:56.346484 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jun 20 19:44:56.348557 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jun 20 19:44:56.350798 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:44:56.351293 systemd[1]: Reached target sysinit.target - System Initialization. Jun 20 19:44:56.351627 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:44:56.358028 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jun 20 19:44:56.379782 systemd-fsck[882]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jun 20 19:44:56.459850 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jun 20 19:44:56.464138 systemd[1]: Mounting sysroot.mount - /sysroot... Jun 20 19:44:56.567835 kernel: EXT4-fs (vda9): mounted filesystem 6290a154-3512-46a6-a5f5-a7fb62c65caa r/w with ordered data mode. Quota mode: none. Jun 20 19:44:56.568289 systemd[1]: Mounted sysroot.mount - /sysroot. Jun 20 19:44:56.569915 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jun 20 19:44:56.572715 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:44:56.574633 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jun 20 19:44:56.576124 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. 
Jun 20 19:44:56.576163 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jun 20 19:44:56.576185 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:44:56.588950 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jun 20 19:44:56.590621 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jun 20 19:44:56.596838 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (890) Jun 20 19:44:56.596875 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:44:56.598593 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:44:56.598615 kernel: BTRFS info (device vda6): using free-space-tree Jun 20 19:44:56.604093 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jun 20 19:44:56.627896 initrd-setup-root[914]: cut: /sysroot/etc/passwd: No such file or directory Jun 20 19:44:56.632890 initrd-setup-root[921]: cut: /sysroot/etc/group: No such file or directory Jun 20 19:44:56.637028 initrd-setup-root[928]: cut: /sysroot/etc/shadow: No such file or directory Jun 20 19:44:56.641742 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory Jun 20 19:44:56.727558 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jun 20 19:44:56.730211 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jun 20 19:44:56.731267 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jun 20 19:44:56.759850 kernel: BTRFS info (device vda6): last unmount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:44:56.770965 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Jun 20 19:44:56.786988 ignition[1003]: INFO : Ignition 2.21.0 Jun 20 19:44:56.786988 ignition[1003]: INFO : Stage: mount Jun 20 19:44:56.789583 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:44:56.789583 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 20 19:44:56.792803 ignition[1003]: INFO : mount: mount passed Jun 20 19:44:56.793599 ignition[1003]: INFO : Ignition finished successfully Jun 20 19:44:56.797292 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jun 20 19:44:56.799837 systemd[1]: Starting ignition-files.service - Ignition (files)... Jun 20 19:44:57.065453 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jun 20 19:44:57.067987 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jun 20 19:44:57.098840 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 (254:6) scanned by mount (1016) Jun 20 19:44:57.098888 kernel: BTRFS info (device vda6): first mount of filesystem 40288228-7b4b-4005-945b-574c4c10ab32 Jun 20 19:44:57.098903 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jun 20 19:44:57.100321 kernel: BTRFS info (device vda6): using free-space-tree Jun 20 19:44:57.104437 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jun 20 19:44:57.141141 ignition[1033]: INFO : Ignition 2.21.0 Jun 20 19:44:57.141141 ignition[1033]: INFO : Stage: files Jun 20 19:44:57.143123 ignition[1033]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:44:57.143123 ignition[1033]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 20 19:44:57.143123 ignition[1033]: DEBUG : files: compiled without relabeling support, skipping Jun 20 19:44:57.146737 ignition[1033]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jun 20 19:44:57.146737 ignition[1033]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jun 20 19:44:57.149596 ignition[1033]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jun 20 19:44:57.149596 ignition[1033]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jun 20 19:44:57.149596 ignition[1033]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jun 20 19:44:57.148159 unknown[1033]: wrote ssh authorized keys file for user: core Jun 20 19:44:57.154743 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 19:44:57.154743 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jun 20 19:44:57.182898 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jun 20 19:44:57.433701 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jun 20 19:44:57.433701 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:44:57.437788 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jun 20 19:44:57.655988 systemd-networkd[859]: eth0: Gained IPv6LL Jun 20 19:44:57.846869 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Jun 20 19:44:58.065562 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jun 20 19:44:58.067629 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh" Jun 20 19:44:58.069517 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh" Jun 20 19:44:58.071319 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:44:58.073209 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml" Jun 20 19:44:58.074888 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:44:58.076682 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jun 20 19:44:58.078363 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:44:58.080118 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jun 20 19:44:58.085579 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:44:58.087625 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jun 20 19:44:58.089521 
ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 19:44:58.092242 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 19:44:58.092242 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 19:44:58.092242 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Jun 20 19:44:58.731774 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK Jun 20 19:44:59.131724 ignition[1033]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Jun 20 19:44:59.131724 ignition[1033]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jun 20 19:44:59.135691 ignition[1033]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:44:59.138101 ignition[1033]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jun 20 19:44:59.138101 ignition[1033]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jun 20 19:44:59.138101 ignition[1033]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jun 20 19:44:59.143007 ignition[1033]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 20 19:44:59.143007 ignition[1033]: INFO : files: op(e): 
op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jun 20 19:44:59.143007 ignition[1033]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jun 20 19:44:59.143007 ignition[1033]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jun 20 19:44:59.158506 ignition[1033]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jun 20 19:44:59.162966 ignition[1033]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jun 20 19:44:59.164641 ignition[1033]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jun 20 19:44:59.164641 ignition[1033]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jun 20 19:44:59.164641 ignition[1033]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jun 20 19:44:59.164641 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:44:59.164641 ignition[1033]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jun 20 19:44:59.164641 ignition[1033]: INFO : files: files passed Jun 20 19:44:59.164641 ignition[1033]: INFO : Ignition finished successfully Jun 20 19:44:59.176057 systemd[1]: Finished ignition-files.service - Ignition (files). Jun 20 19:44:59.178962 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jun 20 19:44:59.181267 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jun 20 19:44:59.202797 systemd[1]: ignition-quench.service: Deactivated successfully. Jun 20 19:44:59.202943 systemd[1]: Finished ignition-quench.service - Ignition (record completion). 
Jun 20 19:44:59.206682 initrd-setup-root-after-ignition[1062]: grep: /sysroot/oem/oem-release: No such file or directory Jun 20 19:44:59.210467 initrd-setup-root-after-ignition[1064]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:44:59.212111 initrd-setup-root-after-ignition[1064]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:44:59.214185 initrd-setup-root-after-ignition[1068]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jun 20 19:44:59.217244 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:44:59.219860 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jun 20 19:44:59.222120 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jun 20 19:44:59.270982 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jun 20 19:44:59.271118 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jun 20 19:44:59.271997 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jun 20 19:44:59.274686 systemd[1]: Reached target initrd.target - Initrd Default Target. Jun 20 19:44:59.275219 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jun 20 19:44:59.276085 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jun 20 19:44:59.293489 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:44:59.295389 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jun 20 19:44:59.316788 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:44:59.317296 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:44:59.317645 systemd[1]: Stopped target timers.target - Timer Units. 
Jun 20 19:44:59.318135 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jun 20 19:44:59.318235 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jun 20 19:44:59.324702 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jun 20 19:44:59.325321 systemd[1]: Stopped target basic.target - Basic System. Jun 20 19:44:59.325654 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jun 20 19:44:59.326160 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jun 20 19:44:59.326492 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jun 20 19:44:59.326845 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jun 20 19:44:59.327344 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jun 20 19:44:59.327686 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jun 20 19:44:59.328376 systemd[1]: Stopped target sysinit.target - System Initialization. Jun 20 19:44:59.328708 systemd[1]: Stopped target local-fs.target - Local File Systems. Jun 20 19:44:59.329200 systemd[1]: Stopped target swap.target - Swaps. Jun 20 19:44:59.329505 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jun 20 19:44:59.329620 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jun 20 19:44:59.346854 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:44:59.347351 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:44:59.347651 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jun 20 19:44:59.352935 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:44:59.353555 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jun 20 19:44:59.353670 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. 
Jun 20 19:44:59.357165 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jun 20 19:44:59.357273 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jun 20 19:44:59.357614 systemd[1]: Stopped target paths.target - Path Units. Jun 20 19:44:59.357874 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jun 20 19:44:59.367895 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:44:59.368292 systemd[1]: Stopped target slices.target - Slice Units. Jun 20 19:44:59.368587 systemd[1]: Stopped target sockets.target - Socket Units. Jun 20 19:44:59.369095 systemd[1]: iscsid.socket: Deactivated successfully. Jun 20 19:44:59.369191 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jun 20 19:44:59.374226 systemd[1]: iscsiuio.socket: Deactivated successfully. Jun 20 19:44:59.374309 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jun 20 19:44:59.375789 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jun 20 19:44:59.375931 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jun 20 19:44:59.377609 systemd[1]: ignition-files.service: Deactivated successfully. Jun 20 19:44:59.377712 systemd[1]: Stopped ignition-files.service - Ignition (files). Jun 20 19:44:59.382303 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jun 20 19:44:59.382761 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jun 20 19:44:59.382879 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:44:59.383990 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jun 20 19:44:59.387414 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jun 20 19:44:59.387538 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. 
Jun 20 19:44:59.390048 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jun 20 19:44:59.390188 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jun 20 19:44:59.396549 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jun 20 19:44:59.402960 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jun 20 19:44:59.417848 ignition[1088]: INFO : Ignition 2.21.0 Jun 20 19:44:59.417848 ignition[1088]: INFO : Stage: umount Jun 20 19:44:59.419699 ignition[1088]: INFO : no configs at "/usr/lib/ignition/base.d" Jun 20 19:44:59.419699 ignition[1088]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jun 20 19:44:59.421949 ignition[1088]: INFO : umount: umount passed Jun 20 19:44:59.421949 ignition[1088]: INFO : Ignition finished successfully Jun 20 19:44:59.424594 systemd[1]: ignition-mount.service: Deactivated successfully. Jun 20 19:44:59.424727 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jun 20 19:44:59.425420 systemd[1]: Stopped target network.target - Network. Jun 20 19:44:59.427637 systemd[1]: ignition-disks.service: Deactivated successfully. Jun 20 19:44:59.427692 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jun 20 19:44:59.429476 systemd[1]: ignition-kargs.service: Deactivated successfully. Jun 20 19:44:59.429527 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jun 20 19:44:59.429830 systemd[1]: ignition-setup.service: Deactivated successfully. Jun 20 19:44:59.429878 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jun 20 19:44:59.430308 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jun 20 19:44:59.430346 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jun 20 19:44:59.430740 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jun 20 19:44:59.431241 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jun 20 19:44:59.432525 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jun 20 19:44:59.448163 systemd[1]: systemd-resolved.service: Deactivated successfully. Jun 20 19:44:59.448298 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jun 20 19:44:59.452640 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jun 20 19:44:59.452911 systemd[1]: systemd-networkd.service: Deactivated successfully. Jun 20 19:44:59.453023 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jun 20 19:44:59.456711 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jun 20 19:44:59.457293 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jun 20 19:44:59.457620 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jun 20 19:44:59.457662 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:44:59.459103 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jun 20 19:44:59.463648 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jun 20 19:44:59.463700 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jun 20 19:44:59.464200 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:44:59.464241 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:44:59.469445 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jun 20 19:44:59.469493 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jun 20 19:44:59.470157 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jun 20 19:44:59.470199 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:44:59.474460 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jun 20 19:44:59.477620 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jun 20 19:44:59.477688 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:44:59.485750 systemd[1]: network-cleanup.service: Deactivated successfully. Jun 20 19:44:59.486021 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jun 20 19:44:59.501524 systemd[1]: systemd-udevd.service: Deactivated successfully. Jun 20 19:44:59.501707 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:44:59.502386 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jun 20 19:44:59.502429 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jun 20 19:44:59.505250 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jun 20 19:44:59.505285 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jun 20 19:44:59.505560 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jun 20 19:44:59.505617 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jun 20 19:44:59.510717 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jun 20 19:44:59.510763 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jun 20 19:44:59.511598 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jun 20 19:44:59.511644 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jun 20 19:44:59.517105 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jun 20 19:44:59.517701 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jun 20 19:44:59.517751 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:44:59.521749 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. 
Jun 20 19:44:59.521802 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:44:59.525043 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:44:59.525093 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:44:59.529601 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jun 20 19:44:59.529661 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jun 20 19:44:59.529710 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jun 20 19:44:59.547935 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jun 20 19:44:59.548049 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jun 20 19:44:59.656254 systemd[1]: sysroot-boot.service: Deactivated successfully. Jun 20 19:44:59.656413 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jun 20 19:44:59.659640 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jun 20 19:44:59.661769 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jun 20 19:44:59.662782 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jun 20 19:44:59.665778 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jun 20 19:44:59.700411 systemd[1]: Switching root. Jun 20 19:44:59.739982 systemd-journald[219]: Journal stopped Jun 20 19:45:01.006012 systemd-journald[219]: Received SIGTERM from PID 1 (systemd). 
Jun 20 19:45:01.006095 kernel: SELinux: policy capability network_peer_controls=1 Jun 20 19:45:01.006114 kernel: SELinux: policy capability open_perms=1 Jun 20 19:45:01.006125 kernel: SELinux: policy capability extended_socket_class=1 Jun 20 19:45:01.006136 kernel: SELinux: policy capability always_check_network=0 Jun 20 19:45:01.006147 kernel: SELinux: policy capability cgroup_seclabel=1 Jun 20 19:45:01.006160 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jun 20 19:45:01.006171 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jun 20 19:45:01.006189 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jun 20 19:45:01.006199 kernel: SELinux: policy capability userspace_initial_context=0 Jun 20 19:45:01.006210 kernel: audit: type=1403 audit(1750448700.199:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jun 20 19:45:01.006228 systemd[1]: Successfully loaded SELinux policy in 47.222ms. Jun 20 19:45:01.006250 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.768ms. Jun 20 19:45:01.006263 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jun 20 19:45:01.006277 systemd[1]: Detected virtualization kvm. Jun 20 19:45:01.006289 systemd[1]: Detected architecture x86-64. Jun 20 19:45:01.006303 systemd[1]: Detected first boot. Jun 20 19:45:01.006315 systemd[1]: Initializing machine ID from VM UUID. Jun 20 19:45:01.006327 zram_generator::config[1134]: No configuration found. 
Jun 20 19:45:01.006340 kernel: Guest personality initialized and is inactive Jun 20 19:45:01.006351 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jun 20 19:45:01.006362 kernel: Initialized host personality Jun 20 19:45:01.006373 kernel: NET: Registered PF_VSOCK protocol family Jun 20 19:45:01.006386 systemd[1]: Populated /etc with preset unit settings. Jun 20 19:45:01.006399 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jun 20 19:45:01.006411 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jun 20 19:45:01.006422 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jun 20 19:45:01.006434 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jun 20 19:45:01.006446 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jun 20 19:45:01.006458 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jun 20 19:45:01.006470 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jun 20 19:45:01.006484 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jun 20 19:45:01.006496 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jun 20 19:45:01.006508 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jun 20 19:45:01.006527 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jun 20 19:45:01.006539 systemd[1]: Created slice user.slice - User and Session Slice. Jun 20 19:45:01.006551 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jun 20 19:45:01.006565 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jun 20 19:45:01.006577 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. 
Jun 20 19:45:01.006589 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jun 20 19:45:01.006603 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jun 20 19:45:01.006615 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jun 20 19:45:01.006627 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jun 20 19:45:01.006638 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jun 20 19:45:01.006650 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jun 20 19:45:01.006662 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jun 20 19:45:01.006674 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jun 20 19:45:01.006686 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jun 20 19:45:01.006699 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jun 20 19:45:01.006712 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jun 20 19:45:01.006724 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jun 20 19:45:01.006736 systemd[1]: Reached target slices.target - Slice Units. Jun 20 19:45:01.006748 systemd[1]: Reached target swap.target - Swaps. Jun 20 19:45:01.006759 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jun 20 19:45:01.006777 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jun 20 19:45:01.006789 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jun 20 19:45:01.006801 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jun 20 19:45:01.006828 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jun 20 19:45:01.006843 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. 
Jun 20 19:45:01.006854 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jun 20 19:45:01.006866 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jun 20 19:45:01.006878 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jun 20 19:45:01.006890 systemd[1]: Mounting media.mount - External Media Directory... Jun 20 19:45:01.006902 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:45:01.006913 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jun 20 19:45:01.006925 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jun 20 19:45:01.006939 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jun 20 19:45:01.006951 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jun 20 19:45:01.006962 systemd[1]: Reached target machines.target - Containers. Jun 20 19:45:01.006974 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jun 20 19:45:01.006986 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:45:01.006998 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jun 20 19:45:01.007010 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jun 20 19:45:01.007021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:45:01.007035 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:45:01.007047 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:45:01.007059 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... 
Jun 20 19:45:01.007071 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:45:01.007084 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jun 20 19:45:01.007097 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jun 20 19:45:01.007109 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jun 20 19:45:01.007120 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jun 20 19:45:01.007132 systemd[1]: Stopped systemd-fsck-usr.service. Jun 20 19:45:01.007146 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:45:01.007159 systemd[1]: Starting systemd-journald.service - Journal Service... Jun 20 19:45:01.007170 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jun 20 19:45:01.007182 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jun 20 19:45:01.007193 kernel: loop: module loaded Jun 20 19:45:01.007205 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jun 20 19:45:01.007217 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jun 20 19:45:01.007229 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jun 20 19:45:01.007243 systemd[1]: verity-setup.service: Deactivated successfully. Jun 20 19:45:01.007255 systemd[1]: Stopped verity-setup.service. Jun 20 19:45:01.007271 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 20 19:45:01.007283 kernel: ACPI: bus type drm_connector registered Jun 20 19:45:01.007294 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jun 20 19:45:01.007306 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jun 20 19:45:01.007317 systemd[1]: Mounted media.mount - External Media Directory. Jun 20 19:45:01.007329 kernel: fuse: init (API version 7.41) Jun 20 19:45:01.007340 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jun 20 19:45:01.007353 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jun 20 19:45:01.007389 systemd-journald[1209]: Collecting audit messages is disabled. Jun 20 19:45:01.007417 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jun 20 19:45:01.007429 systemd-journald[1209]: Journal started Jun 20 19:45:01.007452 systemd-journald[1209]: Runtime Journal (/run/log/journal/dcb38c1ae15247709ee053b66712831b) is 6M, max 48.2M, 42.2M free. Jun 20 19:45:00.760572 systemd[1]: Queued start job for default target multi-user.target. Jun 20 19:45:00.782876 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jun 20 19:45:00.783342 systemd[1]: systemd-journald.service: Deactivated successfully. Jun 20 19:45:01.008901 systemd[1]: Started systemd-journald.service - Journal Service. Jun 20 19:45:01.011913 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jun 20 19:45:01.013493 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jun 20 19:45:01.015070 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jun 20 19:45:01.015290 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jun 20 19:45:01.016773 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:45:01.017015 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:45:01.018554 systemd[1]: modprobe@drm.service: Deactivated successfully. 
Jun 20 19:45:01.018913 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:45:01.020353 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:45:01.020574 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:45:01.022108 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jun 20 19:45:01.022319 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jun 20 19:45:01.023704 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:45:01.023922 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:45:01.025378 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jun 20 19:45:01.026909 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jun 20 19:45:01.028489 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jun 20 19:45:01.030229 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jun 20 19:45:01.044968 systemd[1]: Reached target network-pre.target - Preparation for Network. Jun 20 19:45:01.047646 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jun 20 19:45:01.049986 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jun 20 19:45:01.051213 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jun 20 19:45:01.051295 systemd[1]: Reached target local-fs.target - Local File Systems. Jun 20 19:45:01.053451 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jun 20 19:45:01.062861 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... 
Jun 20 19:45:01.064033 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:45:01.066078 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jun 20 19:45:01.069125 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jun 20 19:45:01.071150 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:45:01.073880 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jun 20 19:45:01.075076 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:45:01.079447 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:45:01.082763 systemd-journald[1209]: Time spent on flushing to /var/log/journal/dcb38c1ae15247709ee053b66712831b is 24.274ms for 1040 entries. Jun 20 19:45:01.082763 systemd-journald[1209]: System Journal (/var/log/journal/dcb38c1ae15247709ee053b66712831b) is 8M, max 195.6M, 187.6M free. Jun 20 19:45:01.124276 systemd-journald[1209]: Received client request to flush runtime journal. Jun 20 19:45:01.124334 kernel: loop0: detected capacity change from 0 to 221472 Jun 20 19:45:01.082763 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jun 20 19:45:01.086247 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jun 20 19:45:01.089074 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jun 20 19:45:01.092760 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jun 20 19:45:01.094261 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jun 20 19:45:01.098441 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jun 20 19:45:01.104924 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jun 20 19:45:01.107280 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jun 20 19:45:01.110365 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:45:01.127971 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jun 20 19:45:01.139832 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jun 20 19:45:01.141876 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jun 20 19:45:01.148315 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jun 20 19:45:01.152588 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jun 20 19:45:01.160892 kernel: loop1: detected capacity change from 0 to 113872 Jun 20 19:45:01.180878 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Jun 20 19:45:01.180895 systemd-tmpfiles[1270]: ACLs are not supported, ignoring. Jun 20 19:45:01.186448 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jun 20 19:45:01.189844 kernel: loop2: detected capacity change from 0 to 146240 Jun 20 19:45:01.226834 kernel: loop3: detected capacity change from 0 to 221472 Jun 20 19:45:01.236847 kernel: loop4: detected capacity change from 0 to 113872 Jun 20 19:45:01.244846 kernel: loop5: detected capacity change from 0 to 146240 Jun 20 19:45:01.255582 (sd-merge)[1277]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jun 20 19:45:01.256146 (sd-merge)[1277]: Merged extensions into '/usr'. Jun 20 19:45:01.260542 systemd[1]: Reload requested from client PID 1253 ('systemd-sysext') (unit systemd-sysext.service)... Jun 20 19:45:01.260637 systemd[1]: Reloading... Jun 20 19:45:01.314852 zram_generator::config[1306]: No configuration found. 
Jun 20 19:45:01.394685 ldconfig[1248]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jun 20 19:45:01.419064 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:45:01.499574 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jun 20 19:45:01.499758 systemd[1]: Reloading finished in 238 ms. Jun 20 19:45:01.541346 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jun 20 19:45:01.543148 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jun 20 19:45:01.558155 systemd[1]: Starting ensure-sysext.service... Jun 20 19:45:01.560027 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jun 20 19:45:01.569399 systemd[1]: Reload requested from client PID 1341 ('systemctl') (unit ensure-sysext.service)... Jun 20 19:45:01.569415 systemd[1]: Reloading... Jun 20 19:45:01.580146 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jun 20 19:45:01.580185 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jun 20 19:45:01.580524 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jun 20 19:45:01.580775 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jun 20 19:45:01.581685 systemd-tmpfiles[1342]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jun 20 19:45:01.581978 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. Jun 20 19:45:01.582051 systemd-tmpfiles[1342]: ACLs are not supported, ignoring. 
Jun 20 19:45:01.605828 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:45:01.605844 systemd-tmpfiles[1342]: Skipping /boot Jun 20 19:45:01.619851 systemd-tmpfiles[1342]: Detected autofs mount point /boot during canonicalization of boot. Jun 20 19:45:01.619997 systemd-tmpfiles[1342]: Skipping /boot Jun 20 19:45:01.622843 zram_generator::config[1373]: No configuration found. Jun 20 19:45:01.707642 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:45:01.787294 systemd[1]: Reloading finished in 217 ms. Jun 20 19:45:01.811398 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jun 20 19:45:01.836061 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jun 20 19:45:01.844969 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jun 20 19:45:01.847447 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jun 20 19:45:01.856180 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jun 20 19:45:01.859876 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jun 20 19:45:01.863047 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jun 20 19:45:01.865837 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jun 20 19:45:01.870343 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:45:01.870530 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:45:01.874085 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... 
Jun 20 19:45:01.880142 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:45:01.883180 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:45:01.884358 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:45:01.884919 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:45:01.887636 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jun 20 19:45:01.888750 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:45:01.890143 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:45:01.890367 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:45:01.892184 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jun 20 19:45:01.892482 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:45:01.894433 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:45:01.894896 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:45:01.903192 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jun 20 19:45:01.908497 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jun 20 19:45:01.913871 systemd-udevd[1413]: Using default interface naming scheme 'v255'. Jun 20 19:45:01.914739 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jun 20 19:45:01.914982 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jun 20 19:45:01.917086 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jun 20 19:45:01.920751 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jun 20 19:45:01.923042 augenrules[1444]: No rules Jun 20 19:45:01.924785 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jun 20 19:45:01.933997 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jun 20 19:45:01.935147 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jun 20 19:45:01.935257 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jun 20 19:45:01.937575 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jun 20 19:45:01.938696 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jun 20 19:45:01.940247 systemd[1]: audit-rules.service: Deactivated successfully. Jun 20 19:45:01.940519 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jun 20 19:45:01.941971 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jun 20 19:45:01.942187 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jun 20 19:45:01.944249 systemd[1]: modprobe@drm.service: Deactivated successfully. Jun 20 19:45:01.944536 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jun 20 19:45:01.946402 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Jun 20 19:45:01.946621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jun 20 19:45:01.948333 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jun 20 19:45:01.950073 systemd[1]: modprobe@loop.service: Deactivated successfully. Jun 20 19:45:01.950277 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jun 20 19:45:01.951922 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jun 20 19:45:01.955605 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jun 20 19:45:01.960553 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jun 20 19:45:01.962244 systemd[1]: Finished ensure-sysext.service. Jun 20 19:45:01.974025 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jun 20 19:45:01.976266 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jun 20 19:45:01.976331 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jun 20 19:45:01.979939 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jun 20 19:45:01.981302 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jun 20 19:45:02.027676 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jun 20 19:45:02.085689 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jun 20 19:45:02.092046 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jun 20 19:45:02.108205 systemd-resolved[1411]: Positive Trust Anchors: Jun 20 19:45:02.108671 systemd-resolved[1411]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jun 20 19:45:02.108745 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jun 20 19:45:02.113682 systemd-resolved[1411]: Defaulting to hostname 'linux'. Jun 20 19:45:02.116035 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jun 20 19:45:02.119159 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jun 20 19:45:02.125236 kernel: mousedev: PS/2 mouse device common for all mice Jun 20 19:45:02.129042 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jun 20 19:45:02.139785 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jun 20 19:45:02.143320 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jun 20 19:45:02.143614 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jun 20 19:45:02.143772 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jun 20 19:45:02.143976 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jun 20 19:45:02.144042 systemd-networkd[1480]: lo: Link UP Jun 20 19:45:02.144053 systemd-networkd[1480]: lo: Gained carrier Jun 20 19:45:02.144543 systemd[1]: Reached target sysinit.target - System Initialization. 
Jun 20 19:45:02.146785 systemd-networkd[1480]: Enumeration completed Jun 20 19:45:02.146887 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jun 20 19:45:02.148225 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jun 20 19:45:02.149839 kernel: ACPI: button: Power Button [PWRF] Jun 20 19:45:02.150037 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jun 20 19:45:02.151213 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:45:02.151225 systemd-networkd[1480]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jun 20 19:45:02.151238 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jun 20 19:45:02.152900 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jun 20 19:45:02.152934 systemd[1]: Reached target paths.target - Path Units. Jun 20 19:45:02.153885 systemd-networkd[1480]: eth0: Link UP Jun 20 19:45:02.153895 systemd[1]: Reached target time-set.target - System Time Set. Jun 20 19:45:02.155059 systemd-networkd[1480]: eth0: Gained carrier Jun 20 19:45:02.155075 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jun 20 19:45:02.155082 systemd-networkd[1480]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jun 20 19:45:02.157226 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jun 20 19:45:02.158524 systemd[1]: Reached target timers.target - Timer Units. Jun 20 19:45:02.160387 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Jun 20 19:45:02.164030 systemd[1]: Starting docker.socket - Docker Socket for the API... Jun 20 19:45:02.168352 systemd-networkd[1480]: eth0: DHCPv4 address 10.0.0.133/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jun 20 19:45:02.169716 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jun 20 19:45:02.170771 systemd-timesyncd[1486]: Network configuration changed, trying to establish connection. Jun 20 19:45:02.171325 systemd-timesyncd[1486]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jun 20 19:45:02.171359 systemd-timesyncd[1486]: Initial clock synchronization to Fri 2025-06-20 19:45:02.378658 UTC. Jun 20 19:45:02.171436 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jun 20 19:45:02.172730 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jun 20 19:45:02.178843 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jun 20 19:45:02.180287 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jun 20 19:45:02.182516 systemd[1]: Started systemd-networkd.service - Network Configuration. Jun 20 19:45:02.185140 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jun 20 19:45:02.186949 systemd[1]: Reached target network.target - Network. Jun 20 19:45:02.189850 systemd[1]: Reached target sockets.target - Socket Units. Jun 20 19:45:02.190838 systemd[1]: Reached target basic.target - Basic System. Jun 20 19:45:02.192920 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:45:02.192947 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jun 20 19:45:02.193980 systemd[1]: Starting containerd.service - containerd container runtime... Jun 20 19:45:02.197911 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jun 20 19:45:02.203607 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jun 20 19:45:02.207877 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jun 20 19:45:02.209299 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jun 20 19:45:02.211872 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jun 20 19:45:02.213936 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jun 20 19:45:02.216275 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jun 20 19:45:02.222398 jq[1524]: false Jun 20 19:45:02.218842 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jun 20 19:45:02.221118 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jun 20 19:45:02.224009 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jun 20 19:45:02.229974 systemd[1]: Starting systemd-logind.service - User Login Management... Jun 20 19:45:02.235979 oslogin_cache_refresh[1526]: Refreshing passwd entry cache Jun 20 19:45:02.237915 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing passwd entry cache Jun 20 19:45:02.232115 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jun 20 19:45:02.237003 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jun 20 19:45:02.239030 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jun 20 19:45:02.239449 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jun 20 19:45:02.240257 systemd[1]: Starting update-engine.service - Update Engine... Jun 20 19:45:02.248945 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jun 20 19:45:02.251729 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jun 20 19:45:02.253326 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jun 20 19:45:02.253568 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jun 20 19:45:02.257983 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting users, quitting Jun 20 19:45:02.257983 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:45:02.257983 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Refreshing group entry cache Jun 20 19:45:02.257633 oslogin_cache_refresh[1526]: Failure getting users, quitting Jun 20 19:45:02.257654 oslogin_cache_refresh[1526]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jun 20 19:45:02.257702 oslogin_cache_refresh[1526]: Refreshing group entry cache Jun 20 19:45:02.259607 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jun 20 19:45:02.263897 extend-filesystems[1525]: Found /dev/vda6 Jun 20 19:45:02.265296 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jun 20 19:45:02.273526 jq[1537]: true Jun 20 19:45:02.281402 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Failure getting groups, quitting Jun 20 19:45:02.281402 google_oslogin_nss_cache[1526]: oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Jun 20 19:45:02.278940 oslogin_cache_refresh[1526]: Failure getting groups, quitting Jun 20 19:45:02.278953 oslogin_cache_refresh[1526]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jun 20 19:45:02.286257 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jun 20 19:45:02.286596 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jun 20 19:45:02.294854 jq[1553]: true Jun 20 19:45:02.299856 update_engine[1535]: I20250620 19:45:02.299682 1535 main.cc:92] Flatcar Update Engine starting Jun 20 19:45:02.301993 extend-filesystems[1525]: Found /dev/vda9 Jun 20 19:45:02.307138 extend-filesystems[1525]: Checking size of /dev/vda9 Jun 20 19:45:02.306137 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jun 20 19:45:02.307512 dbus-daemon[1522]: [system] SELinux support is enabled Jun 20 19:45:02.308276 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jun 20 19:45:02.316662 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jun 20 19:45:02.316786 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jun 20 19:45:02.322146 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:45:02.323311 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jun 20 19:45:02.323430 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jun 20 19:45:02.331473 extend-filesystems[1525]: Resized partition /dev/vda9 Jun 20 19:45:02.331053 systemd[1]: Started update-engine.service - Update Engine. 
Jun 20 19:45:02.335350 update_engine[1535]: I20250620 19:45:02.331993 1535 update_check_scheduler.cc:74] Next update check in 7m25s Jun 20 19:45:02.336824 extend-filesystems[1577]: resize2fs 1.47.2 (1-Jan-2025) Jun 20 19:45:02.338268 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jun 20 19:45:02.348582 tar[1545]: linux-amd64/helm Jun 20 19:45:02.355437 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jun 20 19:45:02.361126 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jun 20 19:45:02.378618 systemd[1]: motdgen.service: Deactivated successfully. Jun 20 19:45:02.379975 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jun 20 19:45:02.382513 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jun 20 19:45:02.383670 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jun 20 19:45:02.396356 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jun 20 19:45:02.409828 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jun 20 19:45:02.438832 extend-filesystems[1577]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jun 20 19:45:02.438832 extend-filesystems[1577]: old_desc_blocks = 1, new_desc_blocks = 1 Jun 20 19:45:02.438832 extend-filesystems[1577]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jun 20 19:45:02.446021 extend-filesystems[1525]: Resized filesystem in /dev/vda9 Jun 20 19:45:02.445619 systemd[1]: extend-filesystems.service: Deactivated successfully. Jun 20 19:45:02.447125 sshd_keygen[1548]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jun 20 19:45:02.448256 bash[1589]: Updated "/home/core/.ssh/authorized_keys" Jun 20 19:45:02.453844 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jun 20 19:45:02.456076 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jun 20 19:45:02.460708 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jun 20 19:45:02.496869 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jun 20 19:45:02.500339 systemd-logind[1532]: Watching system buttons on /dev/input/event2 (Power Button) Jun 20 19:45:02.500358 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jun 20 19:45:02.502089 systemd-logind[1532]: New seat seat0. Jun 20 19:45:02.503949 systemd[1]: Started systemd-logind.service - User Login Management. Jun 20 19:45:02.506463 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jun 20 19:45:02.521354 systemd[1]: Starting issuegen.service - Generate /run/issue... Jun 20 19:45:02.545587 systemd[1]: issuegen.service: Deactivated successfully. Jun 20 19:45:02.545872 systemd[1]: Finished issuegen.service - Generate /run/issue. Jun 20 19:45:02.549963 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jun 20 19:45:02.588117 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jun 20 19:45:02.592000 systemd[1]: Started getty@tty1.service - Getty on tty1. Jun 20 19:45:02.597147 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jun 20 19:45:02.598558 systemd[1]: Reached target getty.target - Login Prompts. Jun 20 19:45:02.601232 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jun 20 19:45:02.612526 containerd[1559]: time="2025-06-20T19:45:02Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jun 20 19:45:02.612988 kernel: kvm_amd: TSC scaling supported Jun 20 19:45:02.613016 kernel: kvm_amd: Nested Virtualization enabled Jun 20 19:45:02.613035 kernel: kvm_amd: Nested Paging enabled Jun 20 19:45:02.613050 kernel: kvm_amd: LBR virtualization supported Jun 20 19:45:02.614416 containerd[1559]: time="2025-06-20T19:45:02.614371091Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jun 20 19:45:02.618839 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jun 20 19:45:02.618877 kernel: kvm_amd: Virtual GIF supported Jun 20 19:45:02.625797 containerd[1559]: time="2025-06-20T19:45:02.625719711Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="19.287µs" Jun 20 19:45:02.625924 containerd[1559]: time="2025-06-20T19:45:02.625894850Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jun 20 19:45:02.626037 containerd[1559]: time="2025-06-20T19:45:02.626010677Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jun 20 19:45:02.626487 containerd[1559]: time="2025-06-20T19:45:02.626447627Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jun 20 19:45:02.626591 containerd[1559]: time="2025-06-20T19:45:02.626567682Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jun 20 19:45:02.626750 containerd[1559]: time="2025-06-20T19:45:02.626719617Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jun 20 19:45:02.626985 containerd[1559]: 
time="2025-06-20T19:45:02.626950841Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jun 20 19:45:02.627089 containerd[1559]: time="2025-06-20T19:45:02.627063442Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:45:02.627696 containerd[1559]: time="2025-06-20T19:45:02.627657637Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Jun 20 19:45:02.627802 containerd[1559]: time="2025-06-20T19:45:02.627775648Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:45:02.627917 containerd[1559]: time="2025-06-20T19:45:02.627890724Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Jun 20 19:45:02.628039 containerd[1559]: time="2025-06-20T19:45:02.628011391Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Jun 20 19:45:02.628315 containerd[1559]: time="2025-06-20T19:45:02.628286276Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jun 20 19:45:02.628918 containerd[1559]: time="2025-06-20T19:45:02.628876624Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:45:02.629071 containerd[1559]: time="2025-06-20T19:45:02.629039740Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Jun 20 19:45:02.629187 containerd[1559]: time="2025-06-20T19:45:02.629151520Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Jun 20 19:45:02.629315 containerd[1559]: time="2025-06-20T19:45:02.629294247Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Jun 20 19:45:02.630020 containerd[1559]: time="2025-06-20T19:45:02.629935681Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Jun 20 19:45:02.630231 containerd[1559]: time="2025-06-20T19:45:02.630201770Z" level=info msg="metadata content store policy set" policy=shared
Jun 20 19:45:02.636099 containerd[1559]: time="2025-06-20T19:45:02.636067535Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Jun 20 19:45:02.636178 containerd[1559]: time="2025-06-20T19:45:02.636123650Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Jun 20 19:45:02.636178 containerd[1559]: time="2025-06-20T19:45:02.636138087Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Jun 20 19:45:02.636178 containerd[1559]: time="2025-06-20T19:45:02.636149989Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Jun 20 19:45:02.636178 containerd[1559]: time="2025-06-20T19:45:02.636164056Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Jun 20 19:45:02.636178 containerd[1559]: time="2025-06-20T19:45:02.636173203Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Jun 20 19:45:02.636280 containerd[1559]: time="2025-06-20T19:45:02.636185676Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Jun 20 19:45:02.636280 containerd[1559]: time="2025-06-20T19:45:02.636197939Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Jun 20 19:45:02.636280 containerd[1559]: time="2025-06-20T19:45:02.636208890Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Jun 20 19:45:02.636280 containerd[1559]: time="2025-06-20T19:45:02.636219229Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Jun 20 19:45:02.636280 containerd[1559]: time="2025-06-20T19:45:02.636228757Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Jun 20 19:45:02.636280 containerd[1559]: time="2025-06-20T19:45:02.636242713Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Jun 20 19:45:02.636382 containerd[1559]: time="2025-06-20T19:45:02.636354263Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Jun 20 19:45:02.636382 containerd[1559]: time="2025-06-20T19:45:02.636372427Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Jun 20 19:45:02.636422 containerd[1559]: time="2025-06-20T19:45:02.636385020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Jun 20 19:45:02.636422 containerd[1559]: time="2025-06-20T19:45:02.636396923Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Jun 20 19:45:02.636422 containerd[1559]: time="2025-06-20T19:45:02.636407382Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Jun 20 19:45:02.636422 containerd[1559]: time="2025-06-20T19:45:02.636417912Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Jun 20 19:45:02.636504 containerd[1559]: time="2025-06-20T19:45:02.636429223Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Jun 20 19:45:02.636504 containerd[1559]: time="2025-06-20T19:45:02.636441045Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Jun 20 19:45:02.636504 containerd[1559]: time="2025-06-20T19:45:02.636454380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Jun 20 19:45:02.636504 containerd[1559]: time="2025-06-20T19:45:02.636465431Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Jun 20 19:45:02.636504 containerd[1559]: time="2025-06-20T19:45:02.636501539Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Jun 20 19:45:02.636598 containerd[1559]: time="2025-06-20T19:45:02.636572061Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Jun 20 19:45:02.636598 containerd[1559]: time="2025-06-20T19:45:02.636585015Z" level=info msg="Start snapshots syncer"
Jun 20 19:45:02.636634 containerd[1559]: time="2025-06-20T19:45:02.636618077Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Jun 20 19:45:02.637007 containerd[1559]: time="2025-06-20T19:45:02.636919062Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Jun 20 19:45:02.637007 containerd[1559]: time="2025-06-20T19:45:02.636971140Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Jun 20 19:45:02.639429 containerd[1559]: time="2025-06-20T19:45:02.639373417Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Jun 20 19:45:02.639513 containerd[1559]: time="2025-06-20T19:45:02.639489615Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Jun 20 19:45:02.639539 containerd[1559]: time="2025-06-20T19:45:02.639521575Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Jun 20 19:45:02.639539 containerd[1559]: time="2025-06-20T19:45:02.639533587Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Jun 20 19:45:02.639584 containerd[1559]: time="2025-06-20T19:45:02.639543145Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Jun 20 19:45:02.639584 containerd[1559]: time="2025-06-20T19:45:02.639556470Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Jun 20 19:45:02.639584 containerd[1559]: time="2025-06-20T19:45:02.639567210Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Jun 20 19:45:02.639584 containerd[1559]: time="2025-06-20T19:45:02.639578131Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Jun 20 19:45:02.639661 containerd[1559]: time="2025-06-20T19:45:02.639605001Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Jun 20 19:45:02.639661 containerd[1559]: time="2025-06-20T19:45:02.639616954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Jun 20 19:45:02.639661 containerd[1559]: time="2025-06-20T19:45:02.639627533Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1
Jun 20 19:45:02.639714 containerd[1559]: time="2025-06-20T19:45:02.639669031Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:45:02.639714 containerd[1559]: time="2025-06-20T19:45:02.639683679Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1
Jun 20 19:45:02.639714 containerd[1559]: time="2025-06-20T19:45:02.639691884Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:45:02.639714 containerd[1559]: time="2025-06-20T19:45:02.639701532Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1
Jun 20 19:45:02.639714 containerd[1559]: time="2025-06-20T19:45:02.639709337Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1
Jun 20 19:45:02.639804 containerd[1559]: time="2025-06-20T19:45:02.639719276Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1
Jun 20 19:45:02.639804 containerd[1559]: time="2025-06-20T19:45:02.639729054Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1
Jun 20 19:45:02.639804 containerd[1559]: time="2025-06-20T19:45:02.639740886Z" level=info msg="runtime interface created"
Jun 20 19:45:02.639804 containerd[1559]: time="2025-06-20T19:45:02.639746166Z" level=info msg="created NRI interface"
Jun 20 19:45:02.639804 containerd[1559]: time="2025-06-20T19:45:02.639753690Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1
Jun 20 19:45:02.639804 containerd[1559]: time="2025-06-20T19:45:02.639763719Z" level=info msg="Connect containerd service"
Jun 20 19:45:02.639804 containerd[1559]: time="2025-06-20T19:45:02.639785610Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Jun 20 19:45:02.640879 containerd[1559]: time="2025-06-20T19:45:02.640536409Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:45:02.646842 kernel: EDAC MC: Ver: 3.0.0
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727073809Z" level=info msg="Start subscribing containerd event"
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727133481Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727176231Z" level=info msg="Start recovering state"
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727221165Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727340249Z" level=info msg="Start event monitor"
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727354736Z" level=info msg="Start cni network conf syncer for default"
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727361088Z" level=info msg="Start streaming server"
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727381707Z" level=info msg="Registered namespace \"k8s.io\" with NRI"
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727389291Z" level=info msg="runtime interface starting up..."
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727395282Z" level=info msg="starting plugins..."
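The single level=error entry above is expected on a first boot: nothing has written a network config into /etc/cni/net.d yet, and the CRI plugin's "cni network conf syncer" (started a few entries later) keeps re-scanning that directory until one appears. For reference, a minimal bridge conflist of the shape a CNI installer might drop in. The network name, bridge name, and subnet below are illustrative, and the sketch writes to a demo directory rather than the real /etc/cni/net.d:

```shell
# Sketch: a minimal CNI network config of the kind containerd's CRI plugin
# scans for in its confDir (/etc/cni/net.d in the log above).
# Demo directory and all values are illustrative, not taken from this host.
CNI_DIR="demo-net.d"          # stand-in for /etc/cni/net.d
mkdir -p "$CNI_DIR"
cat > "$CNI_DIR/10-demo.conflist" <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "demo-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.85.0.0/16" }]]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF
```

Once a valid *.conf or *.conflist file exists in the scanned directory, the conf syncer picks it up without a containerd restart.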
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727409338Z" level=info msg="Synchronizing NRI (plugin) with current runtime state"
Jun 20 19:45:02.728837 containerd[1559]: time="2025-06-20T19:45:02.727597541Z" level=info msg="containerd successfully booted in 0.115551s"
Jun 20 19:45:02.727751 systemd[1]: Started containerd.service - containerd container runtime.
Jun 20 19:45:02.853053 tar[1545]: linux-amd64/LICENSE
Jun 20 19:45:02.853053 tar[1545]: linux-amd64/README.md
Jun 20 19:45:02.870626 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin.
Jun 20 19:45:03.352617 systemd-networkd[1480]: eth0: Gained IPv6LL
Jun 20 19:45:03.355946 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jun 20 19:45:03.357798 systemd[1]: Reached target network-online.target - Network is Online.
Jun 20 19:45:03.360424 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent...
Jun 20 19:45:03.363108 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:45:03.365303 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jun 20 19:45:03.388418 systemd[1]: coreos-metadata.service: Deactivated successfully.
Jun 20 19:45:03.388746 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent.
Jun 20 19:45:03.390775 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jun 20 19:45:03.393494 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Jun 20 19:45:04.083172 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:45:04.085234 systemd[1]: Reached target multi-user.target - Multi-User System.
Jun 20 19:45:04.086642 systemd[1]: Startup finished in 2.824s (kernel) + 6.582s (initrd) + 3.933s (userspace) = 13.339s.
Jun 20 19:45:04.089573 (kubelet)[1674]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jun 20 19:45:04.497925 kubelet[1674]: E0620 19:45:04.497795 1674 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jun 20 19:45:04.501721 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jun 20 19:45:04.501946 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jun 20 19:45:04.502353 systemd[1]: kubelet.service: Consumed 958ms CPU time, 265.8M memory peak.
Jun 20 19:45:07.159921 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Jun 20 19:45:07.161125 systemd[1]: Started sshd@0-10.0.0.133:22-10.0.0.1:41302.service - OpenSSH per-connection server daemon (10.0.0.1:41302).
Jun 20 19:45:07.232548 sshd[1687]: Accepted publickey for core from 10.0.0.1 port 41302 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:45:07.234547 sshd-session[1687]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:45:07.241378 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Jun 20 19:45:07.242555 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Jun 20 19:45:07.249349 systemd-logind[1532]: New session 1 of user core.
Jun 20 19:45:07.266744 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Jun 20 19:45:07.269736 systemd[1]: Starting user@500.service - User Manager for UID 500...
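The kubelet exit above (status=1, missing /var/lib/kubelet/config.yaml) is likewise expected before cluster bootstrap: on a kubeadm-managed node that file is generated by `kubeadm init` or `kubeadm join`, and systemd keeps restarting the unit until it appears (the restart counter shows up again further down the log). For orientation, a minimal KubeletConfiguration of the general shape kubeadm writes. The field values are illustrative, and the sketch writes to a demo path instead of /var/lib/kubelet:

```shell
# Sketch of the file kubelet[1674] is failing to read. On a real node this
# is created by kubeadm, not by hand; demo path and values are illustrative.
mkdir -p demo-kubelet
cat > demo-kubelet/config.yaml <<'EOF'
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd  # consistent with SystemdCgroup=true in the runc options logged above
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
staticPodPath: /etc/kubernetes/manifests
EOF
```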
Jun 20 19:45:07.289163 (systemd)[1691]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jun 20 19:45:07.291413 systemd-logind[1532]: New session c1 of user core.
Jun 20 19:45:07.438694 systemd[1691]: Queued start job for default target default.target.
Jun 20 19:45:07.461091 systemd[1691]: Created slice app.slice - User Application Slice.
Jun 20 19:45:07.461116 systemd[1691]: Reached target paths.target - Paths.
Jun 20 19:45:07.461157 systemd[1691]: Reached target timers.target - Timers.
Jun 20 19:45:07.462623 systemd[1691]: Starting dbus.socket - D-Bus User Message Bus Socket...
Jun 20 19:45:07.473566 systemd[1691]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Jun 20 19:45:07.473682 systemd[1691]: Reached target sockets.target - Sockets.
Jun 20 19:45:07.473723 systemd[1691]: Reached target basic.target - Basic System.
Jun 20 19:45:07.473762 systemd[1691]: Reached target default.target - Main User Target.
Jun 20 19:45:07.473794 systemd[1691]: Startup finished in 175ms.
Jun 20 19:45:07.474077 systemd[1]: Started user@500.service - User Manager for UID 500.
Jun 20 19:45:07.475647 systemd[1]: Started session-1.scope - Session 1 of User core.
Jun 20 19:45:07.541789 systemd[1]: Started sshd@1-10.0.0.133:22-10.0.0.1:41310.service - OpenSSH per-connection server daemon (10.0.0.1:41310).
Jun 20 19:45:07.587774 sshd[1702]: Accepted publickey for core from 10.0.0.1 port 41310 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:45:07.589069 sshd-session[1702]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:45:07.593439 systemd-logind[1532]: New session 2 of user core.
Jun 20 19:45:07.606965 systemd[1]: Started session-2.scope - Session 2 of User core.
Jun 20 19:45:07.659496 sshd[1704]: Connection closed by 10.0.0.1 port 41310
Jun 20 19:45:07.659801 sshd-session[1702]: pam_unix(sshd:session): session closed for user core
Jun 20 19:45:07.668522 systemd[1]: sshd@1-10.0.0.133:22-10.0.0.1:41310.service: Deactivated successfully.
Jun 20 19:45:07.670429 systemd[1]: session-2.scope: Deactivated successfully.
Jun 20 19:45:07.671233 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit.
Jun 20 19:45:07.674089 systemd[1]: Started sshd@2-10.0.0.133:22-10.0.0.1:41314.service - OpenSSH per-connection server daemon (10.0.0.1:41314).
Jun 20 19:45:07.674627 systemd-logind[1532]: Removed session 2.
Jun 20 19:45:07.727490 sshd[1710]: Accepted publickey for core from 10.0.0.1 port 41314 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:45:07.728723 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:45:07.732902 systemd-logind[1532]: New session 3 of user core.
Jun 20 19:45:07.741952 systemd[1]: Started session-3.scope - Session 3 of User core.
Jun 20 19:45:07.792214 sshd[1712]: Connection closed by 10.0.0.1 port 41314
Jun 20 19:45:07.792614 sshd-session[1710]: pam_unix(sshd:session): session closed for user core
Jun 20 19:45:07.807228 systemd[1]: sshd@2-10.0.0.133:22-10.0.0.1:41314.service: Deactivated successfully.
Jun 20 19:45:07.808761 systemd[1]: session-3.scope: Deactivated successfully.
Jun 20 19:45:07.809466 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit.
Jun 20 19:45:07.812105 systemd[1]: Started sshd@3-10.0.0.133:22-10.0.0.1:41320.service - OpenSSH per-connection server daemon (10.0.0.1:41320).
Jun 20 19:45:07.812662 systemd-logind[1532]: Removed session 3.
Jun 20 19:45:07.864260 sshd[1718]: Accepted publickey for core from 10.0.0.1 port 41320 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:45:07.866262 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:45:07.871106 systemd-logind[1532]: New session 4 of user core.
Jun 20 19:45:07.880971 systemd[1]: Started session-4.scope - Session 4 of User core.
Jun 20 19:45:07.935471 sshd[1720]: Connection closed by 10.0.0.1 port 41320
Jun 20 19:45:07.935803 sshd-session[1718]: pam_unix(sshd:session): session closed for user core
Jun 20 19:45:07.953516 systemd[1]: sshd@3-10.0.0.133:22-10.0.0.1:41320.service: Deactivated successfully.
Jun 20 19:45:07.955662 systemd[1]: session-4.scope: Deactivated successfully.
Jun 20 19:45:07.956450 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit.
Jun 20 19:45:07.959696 systemd[1]: Started sshd@4-10.0.0.133:22-10.0.0.1:41336.service - OpenSSH per-connection server daemon (10.0.0.1:41336).
Jun 20 19:45:07.960364 systemd-logind[1532]: Removed session 4.
Jun 20 19:45:08.017873 sshd[1726]: Accepted publickey for core from 10.0.0.1 port 41336 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:45:08.019644 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:45:08.024195 systemd-logind[1532]: New session 5 of user core.
Jun 20 19:45:08.037961 systemd[1]: Started session-5.scope - Session 5 of User core.
Jun 20 19:45:08.095940 sudo[1729]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Jun 20 19:45:08.096263 sudo[1729]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:45:08.117259 sudo[1729]: pam_unix(sudo:session): session closed for user root
Jun 20 19:45:08.119095 sshd[1728]: Connection closed by 10.0.0.1 port 41336
Jun 20 19:45:08.119499 sshd-session[1726]: pam_unix(sshd:session): session closed for user core
Jun 20 19:45:08.136798 systemd[1]: sshd@4-10.0.0.133:22-10.0.0.1:41336.service: Deactivated successfully.
Jun 20 19:45:08.138523 systemd[1]: session-5.scope: Deactivated successfully.
Jun 20 19:45:08.139339 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit.
Jun 20 19:45:08.142320 systemd[1]: Started sshd@5-10.0.0.133:22-10.0.0.1:41348.service - OpenSSH per-connection server daemon (10.0.0.1:41348).
Jun 20 19:45:08.143090 systemd-logind[1532]: Removed session 5.
Jun 20 19:45:08.194112 sshd[1735]: Accepted publickey for core from 10.0.0.1 port 41348 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:45:08.195496 sshd-session[1735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:45:08.199949 systemd-logind[1532]: New session 6 of user core.
Jun 20 19:45:08.214125 systemd[1]: Started session-6.scope - Session 6 of User core.
Jun 20 19:45:08.269146 sudo[1739]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Jun 20 19:45:08.269476 sudo[1739]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:45:08.407971 sudo[1739]: pam_unix(sudo:session): session closed for user root
Jun 20 19:45:08.414730 sudo[1738]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Jun 20 19:45:08.415092 sudo[1738]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:45:08.425049 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jun 20 19:45:08.481431 augenrules[1761]: No rules
Jun 20 19:45:08.483276 systemd[1]: audit-rules.service: Deactivated successfully.
Jun 20 19:45:08.483544 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jun 20 19:45:08.484846 sudo[1738]: pam_unix(sudo:session): session closed for user root
Jun 20 19:45:08.486425 sshd[1737]: Connection closed by 10.0.0.1 port 41348
Jun 20 19:45:08.486761 sshd-session[1735]: pam_unix(sshd:session): session closed for user core
Jun 20 19:45:08.499782 systemd[1]: sshd@5-10.0.0.133:22-10.0.0.1:41348.service: Deactivated successfully.
Jun 20 19:45:08.501719 systemd[1]: session-6.scope: Deactivated successfully.
Jun 20 19:45:08.502686 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit.
Jun 20 19:45:08.506364 systemd[1]: Started sshd@6-10.0.0.133:22-10.0.0.1:41364.service - OpenSSH per-connection server daemon (10.0.0.1:41364).
Jun 20 19:45:08.507149 systemd-logind[1532]: Removed session 6.
Jun 20 19:45:08.564827 sshd[1770]: Accepted publickey for core from 10.0.0.1 port 41364 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:45:08.566372 sshd-session[1770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:45:08.570919 systemd-logind[1532]: New session 7 of user core.
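augenrules reports "No rules" above because the preceding sudo commands deleted the shipped rule fragments (80-selinux.rules, 99-default.rules) before restarting audit-rules.service, leaving /etc/audit/rules.d/ empty. Fragments in that directory use auditctl syntax and are merged by augenrules into /etc/audit/audit.rules. A hypothetical replacement fragment, written to a demo directory here with purely illustrative watch targets:

```shell
# Sketch of an audit rules.d fragment; real files live in /etc/audit/rules.d/
# and are concatenated by augenrules. Demo path, targets, and keys are
# illustrative, not part of this host's configuration.
mkdir -p demo-rules.d
cat > demo-rules.d/10-demo.rules <<'EOF'
## Flush existing rules, then watch two files for writes/attribute changes
-D
-w /etc/passwd -p wa -k passwd_changes
-w /etc/ssh/sshd_config -p wa -k sshd_config
EOF
```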
Jun 20 19:45:08.585008 systemd[1]: Started session-7.scope - Session 7 of User core.
Jun 20 19:45:08.637696 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jun 20 19:45:08.638024 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Jun 20 19:45:08.939338 systemd[1]: Starting docker.service - Docker Application Container Engine...
Jun 20 19:45:08.957196 (dockerd)[1794]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU
Jun 20 19:45:09.172003 dockerd[1794]: time="2025-06-20T19:45:09.171937280Z" level=info msg="Starting up"
Jun 20 19:45:09.173660 dockerd[1794]: time="2025-06-20T19:45:09.173612645Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider"
Jun 20 19:45:09.515054 dockerd[1794]: time="2025-06-20T19:45:09.514987638Z" level=info msg="Loading containers: start."
Jun 20 19:45:09.524860 kernel: Initializing XFRM netlink socket
Jun 20 19:45:09.761790 systemd-networkd[1480]: docker0: Link UP
Jun 20 19:45:09.767084 dockerd[1794]: time="2025-06-20T19:45:09.766988927Z" level=info msg="Loading containers: done."
Jun 20 19:45:09.780455 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2857749198-merged.mount: Deactivated successfully.
Jun 20 19:45:09.782089 dockerd[1794]: time="2025-06-20T19:45:09.782042117Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jun 20 19:45:09.782203 dockerd[1794]: time="2025-06-20T19:45:09.782125470Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1
Jun 20 19:45:09.782285 dockerd[1794]: time="2025-06-20T19:45:09.782259389Z" level=info msg="Initializing buildkit"
Jun 20 19:45:09.811581 dockerd[1794]: time="2025-06-20T19:45:09.811537883Z" level=info msg="Completed buildkit initialization"
Jun 20 19:45:09.819007 dockerd[1794]: time="2025-06-20T19:45:09.818969109Z" level=info msg="Daemon has completed initialization"
Jun 20 19:45:09.819066 dockerd[1794]: time="2025-06-20T19:45:09.819024721Z" level=info msg="API listen on /run/docker.sock"
Jun 20 19:45:09.819183 systemd[1]: Started docker.service - Docker Application Container Engine.
Jun 20 19:45:10.517181 containerd[1559]: time="2025-06-20T19:45:10.517119206Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\""
Jun 20 19:45:11.159537 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3971433102.mount: Deactivated successfully.
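Each image pull that follows emits a PullImage/ImageCreate sequence ending in a "Pulled image … in &lt;duration&gt;" message that embeds the repo tag, digest, size, and elapsed time. A small sed sketch for extracting the tag and duration from such a line; the sample is an abridged copy of the kube-apiserver entry below, keeping the log's escaped inner quotes (\") intact:

```shell
# Extract image tag and pull duration from a containerd "Pulled image" entry.
# Sample line abridged from the log; inner quotes appear escaped as \".
line='msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" ... in 1.518389885s"'
image=$(printf '%s' "$line" | sed -n 's/.*Pulled image \\"\([^\\]*\)\\".*/\1/p')
elapsed=$(printf '%s' "$line" | sed -n 's/.* in \([0-9.]*s\)".*/\1/p')
echo "$image took $elapsed"
# → registry.k8s.io/kube-apiserver:v1.31.10 took 1.518389885s
```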
Jun 20 19:45:12.030167 containerd[1559]: time="2025-06-20T19:45:12.030106605Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:45:12.030883 containerd[1559]: time="2025-06-20T19:45:12.030846960Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.10: active requests=0, bytes read=28077744"
Jun 20 19:45:12.032200 containerd[1559]: time="2025-06-20T19:45:12.032165148Z" level=info msg="ImageCreate event name:\"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:45:12.034668 containerd[1559]: time="2025-06-20T19:45:12.034628001Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:45:12.035606 containerd[1559]: time="2025-06-20T19:45:12.035557581Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.10\" with image id \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.10\", repo digest \"registry.k8s.io/kube-apiserver@sha256:083d7d64af31cd090f870eb49fb815e6bb42c175fc602ee9dae2f28f082bd4dc\", size \"28074544\" in 1.518389885s"
Jun 20 19:45:12.035653 containerd[1559]: time="2025-06-20T19:45:12.035605985Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.10\" returns image reference \"sha256:74c5154ea84d9a53c406e6c00e53cf66145cce821fd80e3c74e2e1bf312f3977\""
Jun 20 19:45:12.036218 containerd[1559]: time="2025-06-20T19:45:12.036164061Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\""
Jun 20 19:45:13.208168 containerd[1559]: time="2025-06-20T19:45:13.208110564Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:45:13.210755 containerd[1559]: time="2025-06-20T19:45:13.210703873Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.10: active requests=0, bytes read=24713294"
Jun 20 19:45:13.212293 containerd[1559]: time="2025-06-20T19:45:13.212253679Z" level=info msg="ImageCreate event name:\"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:45:13.214838 containerd[1559]: time="2025-06-20T19:45:13.214792356Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:45:13.215777 containerd[1559]: time="2025-06-20T19:45:13.215745014Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.10\" with image id \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.10\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3c67387d023c6114879f1e817669fd641797d30f117230682faf3930ecaaf0fe\", size \"26315128\" in 1.179535596s"
Jun 20 19:45:13.215834 containerd[1559]: time="2025-06-20T19:45:13.215780601Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.10\" returns image reference \"sha256:c285c4e62c91c434e9928bee7063b361509f43f43faa31641b626d6eff97616d\""
Jun 20 19:45:13.216311 containerd[1559]: time="2025-06-20T19:45:13.216249361Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\""
Jun 20 19:45:14.636533 containerd[1559]: time="2025-06-20T19:45:14.636464720Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:45:14.637239 containerd[1559]: time="2025-06-20T19:45:14.637187224Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.10: active requests=0, bytes read=18783671"
Jun 20 19:45:14.638461 containerd[1559]: time="2025-06-20T19:45:14.638409584Z" level=info msg="ImageCreate event name:\"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:45:14.640862 containerd[1559]: time="2025-06-20T19:45:14.640831095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jun 20 19:45:14.641699 containerd[1559]: time="2025-06-20T19:45:14.641663747Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.10\" with image id \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.10\", repo digest \"registry.k8s.io/kube-scheduler@sha256:284dc2a5cf6afc9b76e39ad4b79c680c23d289488517643b28784a06d0141272\", size \"20385523\" in 1.425388864s"
Jun 20 19:45:14.641699 containerd[1559]: time="2025-06-20T19:45:14.641695948Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.10\" returns image reference \"sha256:61daeb7d112d9547792027cb16242b1d131f357f511545477381457fff5a69e2\""
Jun 20 19:45:14.642359 containerd[1559]: time="2025-06-20T19:45:14.642158737Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\""
Jun 20 19:45:14.752616 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jun 20 19:45:14.754205 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jun 20 19:45:14.963599 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jun 20 19:45:14.967296 (kubelet)[2075]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jun 20 19:45:15.276779 kubelet[2075]: E0620 19:45:15.276601 2075 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jun 20 19:45:15.282714 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jun 20 19:45:15.282928 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jun 20 19:45:15.283308 systemd[1]: kubelet.service: Consumed 482ms CPU time, 110.7M memory peak. Jun 20 19:45:15.980744 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2431537262.mount: Deactivated successfully. Jun 20 19:45:16.743021 containerd[1559]: time="2025-06-20T19:45:16.742959668Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:16.743670 containerd[1559]: time="2025-06-20T19:45:16.743638644Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.10: active requests=0, bytes read=30383943" Jun 20 19:45:16.744703 containerd[1559]: time="2025-06-20T19:45:16.744674732Z" level=info msg="ImageCreate event name:\"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:16.746481 containerd[1559]: time="2025-06-20T19:45:16.746432167Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:16.746917 containerd[1559]: time="2025-06-20T19:45:16.746872662Z" 
level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.10\" with image id \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\", repo tag \"registry.k8s.io/kube-proxy:v1.31.10\", repo digest \"registry.k8s.io/kube-proxy@sha256:bcbb293812bdf587b28ea98369a8c347ca84884160046296761acdf12b27029d\", size \"30382962\" in 2.104683902s" Jun 20 19:45:16.746917 containerd[1559]: time="2025-06-20T19:45:16.746914068Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.10\" returns image reference \"sha256:3ed600862d3e69931e0f9f4dbf5c2b46343af40aa079772434f13de771bdc30c\"" Jun 20 19:45:16.747382 containerd[1559]: time="2025-06-20T19:45:16.747359707Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jun 20 19:45:17.291032 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228326920.mount: Deactivated successfully. Jun 20 19:45:17.949872 containerd[1559]: time="2025-06-20T19:45:17.949795087Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:17.950507 containerd[1559]: time="2025-06-20T19:45:17.950484712Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jun 20 19:45:17.951719 containerd[1559]: time="2025-06-20T19:45:17.951688001Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:17.954443 containerd[1559]: time="2025-06-20T19:45:17.954387967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:17.955532 containerd[1559]: time="2025-06-20T19:45:17.955483929Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with 
image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.208092846s" Jun 20 19:45:17.955532 containerd[1559]: time="2025-06-20T19:45:17.955530591Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jun 20 19:45:17.955966 containerd[1559]: time="2025-06-20T19:45:17.955935541Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jun 20 19:45:18.459318 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2649675562.mount: Deactivated successfully. Jun 20 19:45:18.465333 containerd[1559]: time="2025-06-20T19:45:18.465293761Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:45:18.466012 containerd[1559]: time="2025-06-20T19:45:18.465981836Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jun 20 19:45:18.467111 containerd[1559]: time="2025-06-20T19:45:18.467055254Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:45:18.468988 containerd[1559]: time="2025-06-20T19:45:18.468949409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jun 20 19:45:18.469483 containerd[1559]: time="2025-06-20T19:45:18.469443203Z" level=info 
msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 513.479813ms" Jun 20 19:45:18.469483 containerd[1559]: time="2025-06-20T19:45:18.469479095Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jun 20 19:45:18.470030 containerd[1559]: time="2025-06-20T19:45:18.469988212Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Jun 20 19:45:19.014228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount504756927.mount: Deactivated successfully. Jun 20 19:45:21.272011 containerd[1559]: time="2025-06-20T19:45:21.271929078Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:21.272684 containerd[1559]: time="2025-06-20T19:45:21.272652915Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Jun 20 19:45:21.273825 containerd[1559]: time="2025-06-20T19:45:21.273772427Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:21.276188 containerd[1559]: time="2025-06-20T19:45:21.276158402Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:21.277127 containerd[1559]: time="2025-06-20T19:45:21.277093628Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag 
\"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.807069555s" Jun 20 19:45:21.277166 containerd[1559]: time="2025-06-20T19:45:21.277125965Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Jun 20 19:45:23.472559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:45:23.472726 systemd[1]: kubelet.service: Consumed 482ms CPU time, 110.7M memory peak. Jun 20 19:45:23.474912 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:45:23.500723 systemd[1]: Reload requested from client PID 2231 ('systemctl') (unit session-7.scope)... Jun 20 19:45:23.500738 systemd[1]: Reloading... Jun 20 19:45:23.574853 zram_generator::config[2273]: No configuration found. Jun 20 19:45:23.736325 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:45:23.853796 systemd[1]: Reloading finished in 352 ms. Jun 20 19:45:23.925640 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jun 20 19:45:23.925741 systemd[1]: kubelet.service: Failed with result 'signal'. Jun 20 19:45:23.926060 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:45:23.926103 systemd[1]: kubelet.service: Consumed 143ms CPU time, 98.2M memory peak. Jun 20 19:45:23.927668 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:45:24.091223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:45:24.094996 (kubelet)[2321]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:45:24.136689 kubelet[2321]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:45:24.136689 kubelet[2321]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:45:24.136689 kubelet[2321]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:45:24.137133 kubelet[2321]: I0620 19:45:24.136751 2321 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:45:24.419094 kubelet[2321]: I0620 19:45:24.419057 2321 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:45:24.419094 kubelet[2321]: I0620 19:45:24.419084 2321 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:45:24.419318 kubelet[2321]: I0620 19:45:24.419295 2321 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:45:24.444035 kubelet[2321]: E0620 19:45:24.443972 2321 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:24.445107 kubelet[2321]: I0620 
19:45:24.445073 2321 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:45:24.451353 kubelet[2321]: I0620 19:45:24.451332 2321 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:45:24.457189 kubelet[2321]: I0620 19:45:24.457160 2321 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:45:24.457693 kubelet[2321]: I0620 19:45:24.457666 2321 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:45:24.457857 kubelet[2321]: I0620 19:45:24.457803 2321 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:45:24.458007 kubelet[2321]: I0620 19:45:24.457846 2321 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":
{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:45:24.458121 kubelet[2321]: I0620 19:45:24.458022 2321 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:45:24.458121 kubelet[2321]: I0620 19:45:24.458032 2321 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:45:24.458164 kubelet[2321]: I0620 19:45:24.458140 2321 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:45:24.460130 kubelet[2321]: I0620 19:45:24.460088 2321 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:45:24.460178 kubelet[2321]: I0620 19:45:24.460132 2321 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:45:24.460178 kubelet[2321]: I0620 19:45:24.460172 2321 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:45:24.460232 kubelet[2321]: I0620 19:45:24.460195 2321 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:45:24.463257 kubelet[2321]: I0620 19:45:24.463233 2321 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:45:24.463956 kubelet[2321]: I0620 19:45:24.463609 2321 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:45:24.463956 kubelet[2321]: W0620 19:45:24.463866 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 
10.0.0.133:6443: connect: connection refused Jun 20 19:45:24.463956 kubelet[2321]: E0620 19:45:24.463906 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:24.464265 kubelet[2321]: W0620 19:45:24.464225 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jun 20 19:45:24.464306 kubelet[2321]: E0620 19:45:24.464272 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:24.464518 kubelet[2321]: W0620 19:45:24.464489 2321 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jun 20 19:45:24.466333 kubelet[2321]: I0620 19:45:24.466299 2321 server.go:1274] "Started kubelet" Jun 20 19:45:24.466405 kubelet[2321]: I0620 19:45:24.466378 2321 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 19:45:24.466829 kubelet[2321]: I0620 19:45:24.466567 2321 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:45:24.466939 kubelet[2321]: I0620 19:45:24.466913 2321 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:45:24.467543 kubelet[2321]: I0620 19:45:24.467512 2321 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:45:24.468565 kubelet[2321]: I0620 19:45:24.468381 2321 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:45:24.468641 kubelet[2321]: I0620 19:45:24.468621 2321 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:45:24.469962 kubelet[2321]: I0620 19:45:24.469757 2321 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:45:24.469962 kubelet[2321]: I0620 19:45:24.469869 2321 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:45:24.469962 kubelet[2321]: I0620 19:45:24.469926 2321 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:45:24.470584 kubelet[2321]: W0620 19:45:24.470385 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jun 20 19:45:24.470584 kubelet[2321]: E0620 19:45:24.470471 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
\"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:24.472670 kubelet[2321]: E0620 19:45:24.472087 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:24.472670 kubelet[2321]: E0620 19:45:24.470229 2321 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad7d733591386 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:45:24.466275206 +0000 UTC m=+0.367780909,LastTimestamp:2025-06-20 19:45:24.466275206 +0000 UTC m=+0.367780909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 20 19:45:24.472670 kubelet[2321]: E0620 19:45:24.472479 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="200ms" Jun 20 19:45:24.473085 kubelet[2321]: I0620 19:45:24.473068 2321 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:45:24.473164 kubelet[2321]: I0620 19:45:24.473146 2321 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:45:24.475606 kubelet[2321]: E0620 19:45:24.475520 
2321 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:45:24.476077 kubelet[2321]: I0620 19:45:24.476061 2321 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:45:24.486515 kubelet[2321]: I0620 19:45:24.486482 2321 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:45:24.487861 kubelet[2321]: I0620 19:45:24.487657 2321 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:45:24.487861 kubelet[2321]: I0620 19:45:24.487675 2321 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:45:24.487861 kubelet[2321]: I0620 19:45:24.487695 2321 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:45:24.487861 kubelet[2321]: E0620 19:45:24.487730 2321 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:45:24.492779 kubelet[2321]: I0620 19:45:24.492758 2321 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:45:24.492779 kubelet[2321]: I0620 19:45:24.492772 2321 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:45:24.492868 kubelet[2321]: I0620 19:45:24.492790 2321 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:45:24.492868 kubelet[2321]: W0620 19:45:24.492801 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jun 20 19:45:24.492922 kubelet[2321]: E0620 19:45:24.492862 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get 
\"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:24.572563 kubelet[2321]: E0620 19:45:24.572522 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:24.588840 kubelet[2321]: E0620 19:45:24.588768 2321 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:45:24.673228 kubelet[2321]: E0620 19:45:24.673110 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:24.673571 kubelet[2321]: E0620 19:45:24.673522 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="400ms" Jun 20 19:45:24.773978 kubelet[2321]: E0620 19:45:24.773936 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:24.789138 kubelet[2321]: E0620 19:45:24.789090 2321 kubelet.go:2345] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jun 20 19:45:24.874645 kubelet[2321]: E0620 19:45:24.874605 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:24.963661 kubelet[2321]: E0620 19:45:24.963477 2321 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.133:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.133:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184ad7d733591386 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-06-20 19:45:24.466275206 +0000 UTC m=+0.367780909,LastTimestamp:2025-06-20 19:45:24.466275206 +0000 UTC m=+0.367780909,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jun 20 19:45:24.975049 kubelet[2321]: E0620 19:45:24.975022 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:25.069793 kubelet[2321]: I0620 19:45:25.069768 2321 policy_none.go:49] "None policy: Start" Jun 20 19:45:25.070545 kubelet[2321]: I0620 19:45:25.070501 2321 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:45:25.070545 kubelet[2321]: I0620 19:45:25.070547 2321 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:45:25.074158 kubelet[2321]: E0620 19:45:25.074104 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="800ms" Jun 20 19:45:25.075128 kubelet[2321]: E0620 19:45:25.075089 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:25.078295 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jun 20 19:45:25.099903 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jun 20 19:45:25.102964 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
Jun 20 19:45:25.120667 kubelet[2321]: I0620 19:45:25.120643 2321 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:45:25.120994 kubelet[2321]: I0620 19:45:25.120869 2321 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:45:25.120994 kubelet[2321]: I0620 19:45:25.120880 2321 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:45:25.121193 kubelet[2321]: I0620 19:45:25.121164 2321 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:45:25.122489 kubelet[2321]: E0620 19:45:25.122466 2321 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jun 20 19:45:25.197507 systemd[1]: Created slice kubepods-burstable-pod25a4125d67cf64cfa500b3642bb850a8.slice - libcontainer container kubepods-burstable-pod25a4125d67cf64cfa500b3642bb850a8.slice. Jun 20 19:45:25.222225 kubelet[2321]: I0620 19:45:25.222087 2321 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:45:25.222737 kubelet[2321]: E0620 19:45:25.222477 2321 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jun 20 19:45:25.224524 systemd[1]: Created slice kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice - libcontainer container kubepods-burstable-pod3f04709fe51ae4ab5abd58e8da771b74.slice. Jun 20 19:45:25.229037 systemd[1]: Created slice kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice - libcontainer container kubepods-burstable-podb35b56493416c25588cb530e37ffc065.slice. 
Jun 20 19:45:25.274933 kubelet[2321]: I0620 19:45:25.274891 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:25.274933 kubelet[2321]: I0620 19:45:25.274923 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:25.274933 kubelet[2321]: I0620 19:45:25.274942 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:45:25.275136 kubelet[2321]: I0620 19:45:25.274958 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25a4125d67cf64cfa500b3642bb850a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"25a4125d67cf64cfa500b3642bb850a8\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:45:25.275136 kubelet[2321]: I0620 19:45:25.274976 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " 
pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:25.275136 kubelet[2321]: I0620 19:45:25.274993 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:25.275136 kubelet[2321]: I0620 19:45:25.275016 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:25.275136 kubelet[2321]: I0620 19:45:25.275036 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25a4125d67cf64cfa500b3642bb850a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"25a4125d67cf64cfa500b3642bb850a8\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:45:25.275258 kubelet[2321]: I0620 19:45:25.275055 2321 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25a4125d67cf64cfa500b3642bb850a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"25a4125d67cf64cfa500b3642bb850a8\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:45:25.282390 kubelet[2321]: W0620 19:45:25.282336 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jun 20 19:45:25.282443 kubelet[2321]: E0620 
19:45:25.282397 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:25.319093 kubelet[2321]: W0620 19:45:25.319047 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jun 20 19:45:25.319154 kubelet[2321]: E0620 19:45:25.319095 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.133:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:25.349759 kubelet[2321]: W0620 19:45:25.349695 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jun 20 19:45:25.349759 kubelet[2321]: E0620 19:45:25.349747 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:25.424699 kubelet[2321]: I0620 19:45:25.424653 2321 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:45:25.425034 kubelet[2321]: E0620 19:45:25.424997 2321 
kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" node="localhost" Jun 20 19:45:25.520874 kubelet[2321]: E0620 19:45:25.520750 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:25.521330 containerd[1559]: time="2025-06-20T19:45:25.521292739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:25a4125d67cf64cfa500b3642bb850a8,Namespace:kube-system,Attempt:0,}" Jun 20 19:45:25.526875 kubelet[2321]: E0620 19:45:25.526855 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:25.527164 containerd[1559]: time="2025-06-20T19:45:25.527125479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,}" Jun 20 19:45:25.531363 kubelet[2321]: E0620 19:45:25.531336 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:25.531598 containerd[1559]: time="2025-06-20T19:45:25.531564895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,}" Jun 20 19:45:25.826095 kubelet[2321]: I0620 19:45:25.825975 2321 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:45:25.826414 kubelet[2321]: E0620 19:45:25.826357 2321 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.0.0.133:6443/api/v1/nodes\": dial tcp 10.0.0.133:6443: connect: connection refused" 
node="localhost" Jun 20 19:45:25.840685 kubelet[2321]: W0620 19:45:25.840661 2321 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.133:6443: connect: connection refused Jun 20 19:45:25.840729 kubelet[2321]: E0620 19:45:25.840696 2321 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:25.874530 kubelet[2321]: E0620 19:45:25.874478 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.133:6443: connect: connection refused" interval="1.6s" Jun 20 19:45:25.941058 containerd[1559]: time="2025-06-20T19:45:25.940979800Z" level=info msg="connecting to shim 815becc337c3623cc599ce03c5e478ba61007b662e46654c9c52a2a58b456cec" address="unix:///run/containerd/s/b77b8ce824b886515c7acbd1008de30d7ee07615c76176aed049b0723683fd85" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:45:25.941885 containerd[1559]: time="2025-06-20T19:45:25.941748976Z" level=info msg="connecting to shim ffd0d50b94b41aabc4e02287f79f319e19708d4feccff23948c429a46f055674" address="unix:///run/containerd/s/0881a29076d3a6c190785634c62ece086b81345364034080bbeebd031df2a847" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:45:25.949396 containerd[1559]: time="2025-06-20T19:45:25.949319445Z" level=info msg="connecting to shim 860441ca9aa8ac9b0584d642dc42c08487d4f98922726e228f4b6c0b940dae82" address="unix:///run/containerd/s/b1f282bc54df65ea74cd302feda85c77586ecaf9b74f46eb4bef4cd549c02be2" namespace=k8s.io protocol=ttrpc 
version=3 Jun 20 19:45:25.973969 systemd[1]: Started cri-containerd-815becc337c3623cc599ce03c5e478ba61007b662e46654c9c52a2a58b456cec.scope - libcontainer container 815becc337c3623cc599ce03c5e478ba61007b662e46654c9c52a2a58b456cec. Jun 20 19:45:25.978512 systemd[1]: Started cri-containerd-860441ca9aa8ac9b0584d642dc42c08487d4f98922726e228f4b6c0b940dae82.scope - libcontainer container 860441ca9aa8ac9b0584d642dc42c08487d4f98922726e228f4b6c0b940dae82. Jun 20 19:45:25.980264 systemd[1]: Started cri-containerd-ffd0d50b94b41aabc4e02287f79f319e19708d4feccff23948c429a46f055674.scope - libcontainer container ffd0d50b94b41aabc4e02287f79f319e19708d4feccff23948c429a46f055674. Jun 20 19:45:26.047591 containerd[1559]: time="2025-06-20T19:45:26.047542117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:3f04709fe51ae4ab5abd58e8da771b74,Namespace:kube-system,Attempt:0,} returns sandbox id \"815becc337c3623cc599ce03c5e478ba61007b662e46654c9c52a2a58b456cec\"" Jun 20 19:45:26.048597 kubelet[2321]: E0620 19:45:26.048550 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:26.050484 containerd[1559]: time="2025-06-20T19:45:26.050444269Z" level=info msg="CreateContainer within sandbox \"815becc337c3623cc599ce03c5e478ba61007b662e46654c9c52a2a58b456cec\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jun 20 19:45:26.349630 containerd[1559]: time="2025-06-20T19:45:26.349567803Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:25a4125d67cf64cfa500b3642bb850a8,Namespace:kube-system,Attempt:0,} returns sandbox id \"ffd0d50b94b41aabc4e02287f79f319e19708d4feccff23948c429a46f055674\"" Jun 20 19:45:26.350421 kubelet[2321]: E0620 19:45:26.350385 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have 
been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:26.350923 containerd[1559]: time="2025-06-20T19:45:26.350877901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:b35b56493416c25588cb530e37ffc065,Namespace:kube-system,Attempt:0,} returns sandbox id \"860441ca9aa8ac9b0584d642dc42c08487d4f98922726e228f4b6c0b940dae82\"" Jun 20 19:45:26.351709 kubelet[2321]: E0620 19:45:26.351666 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:26.352919 containerd[1559]: time="2025-06-20T19:45:26.352769206Z" level=info msg="CreateContainer within sandbox \"ffd0d50b94b41aabc4e02287f79f319e19708d4feccff23948c429a46f055674\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jun 20 19:45:26.353331 containerd[1559]: time="2025-06-20T19:45:26.353287067Z" level=info msg="CreateContainer within sandbox \"860441ca9aa8ac9b0584d642dc42c08487d4f98922726e228f4b6c0b940dae82\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jun 20 19:45:26.361450 containerd[1559]: time="2025-06-20T19:45:26.361408204Z" level=info msg="Container a09004c49b021fcfb6f01bed1a24f35c7e096d719b23b7ea2ca9266b41b5148c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:26.367179 containerd[1559]: time="2025-06-20T19:45:26.367141701Z" level=info msg="Container fc8cd0cb5b96c5554e7361547abcc1ab4764a7105fda6f4301dbe457cbb2078b: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:26.371770 containerd[1559]: time="2025-06-20T19:45:26.371734763Z" level=info msg="CreateContainer within sandbox \"815becc337c3623cc599ce03c5e478ba61007b662e46654c9c52a2a58b456cec\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"a09004c49b021fcfb6f01bed1a24f35c7e096d719b23b7ea2ca9266b41b5148c\"" Jun 20 19:45:26.372259 containerd[1559]: 
time="2025-06-20T19:45:26.372233717Z" level=info msg="StartContainer for \"a09004c49b021fcfb6f01bed1a24f35c7e096d719b23b7ea2ca9266b41b5148c\"" Jun 20 19:45:26.373014 containerd[1559]: time="2025-06-20T19:45:26.372979930Z" level=info msg="Container 241ffe8c4a2e8af86998f0bf48488833c8119b01794d4df0d1be3a09f8f93ccd: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:26.373259 containerd[1559]: time="2025-06-20T19:45:26.373234262Z" level=info msg="connecting to shim a09004c49b021fcfb6f01bed1a24f35c7e096d719b23b7ea2ca9266b41b5148c" address="unix:///run/containerd/s/b77b8ce824b886515c7acbd1008de30d7ee07615c76176aed049b0723683fd85" protocol=ttrpc version=3 Jun 20 19:45:26.379170 containerd[1559]: time="2025-06-20T19:45:26.379143569Z" level=info msg="CreateContainer within sandbox \"860441ca9aa8ac9b0584d642dc42c08487d4f98922726e228f4b6c0b940dae82\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"fc8cd0cb5b96c5554e7361547abcc1ab4764a7105fda6f4301dbe457cbb2078b\"" Jun 20 19:45:26.380050 containerd[1559]: time="2025-06-20T19:45:26.380023746Z" level=info msg="StartContainer for \"fc8cd0cb5b96c5554e7361547abcc1ab4764a7105fda6f4301dbe457cbb2078b\"" Jun 20 19:45:26.381068 containerd[1559]: time="2025-06-20T19:45:26.381033369Z" level=info msg="connecting to shim fc8cd0cb5b96c5554e7361547abcc1ab4764a7105fda6f4301dbe457cbb2078b" address="unix:///run/containerd/s/b1f282bc54df65ea74cd302feda85c77586ecaf9b74f46eb4bef4cd549c02be2" protocol=ttrpc version=3 Jun 20 19:45:26.383211 containerd[1559]: time="2025-06-20T19:45:26.383031773Z" level=info msg="CreateContainer within sandbox \"ffd0d50b94b41aabc4e02287f79f319e19708d4feccff23948c429a46f055674\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"241ffe8c4a2e8af86998f0bf48488833c8119b01794d4df0d1be3a09f8f93ccd\"" Jun 20 19:45:26.384726 containerd[1559]: time="2025-06-20T19:45:26.384691426Z" level=info msg="StartContainer for 
\"241ffe8c4a2e8af86998f0bf48488833c8119b01794d4df0d1be3a09f8f93ccd\"" Jun 20 19:45:26.385862 containerd[1559]: time="2025-06-20T19:45:26.385839986Z" level=info msg="connecting to shim 241ffe8c4a2e8af86998f0bf48488833c8119b01794d4df0d1be3a09f8f93ccd" address="unix:///run/containerd/s/0881a29076d3a6c190785634c62ece086b81345364034080bbeebd031df2a847" protocol=ttrpc version=3 Jun 20 19:45:26.393044 systemd[1]: Started cri-containerd-a09004c49b021fcfb6f01bed1a24f35c7e096d719b23b7ea2ca9266b41b5148c.scope - libcontainer container a09004c49b021fcfb6f01bed1a24f35c7e096d719b23b7ea2ca9266b41b5148c. Jun 20 19:45:26.409957 systemd[1]: Started cri-containerd-fc8cd0cb5b96c5554e7361547abcc1ab4764a7105fda6f4301dbe457cbb2078b.scope - libcontainer container fc8cd0cb5b96c5554e7361547abcc1ab4764a7105fda6f4301dbe457cbb2078b. Jun 20 19:45:26.413226 systemd[1]: Started cri-containerd-241ffe8c4a2e8af86998f0bf48488833c8119b01794d4df0d1be3a09f8f93ccd.scope - libcontainer container 241ffe8c4a2e8af86998f0bf48488833c8119b01794d4df0d1be3a09f8f93ccd. 
Jun 20 19:45:26.453863 kubelet[2321]: E0620 19:45:26.453738 2321 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.133:6443: connect: connection refused" logger="UnhandledError" Jun 20 19:45:26.456643 containerd[1559]: time="2025-06-20T19:45:26.456584491Z" level=info msg="StartContainer for \"a09004c49b021fcfb6f01bed1a24f35c7e096d719b23b7ea2ca9266b41b5148c\" returns successfully" Jun 20 19:45:26.469965 containerd[1559]: time="2025-06-20T19:45:26.469925505Z" level=info msg="StartContainer for \"241ffe8c4a2e8af86998f0bf48488833c8119b01794d4df0d1be3a09f8f93ccd\" returns successfully" Jun 20 19:45:26.470400 containerd[1559]: time="2025-06-20T19:45:26.470310710Z" level=info msg="StartContainer for \"fc8cd0cb5b96c5554e7361547abcc1ab4764a7105fda6f4301dbe457cbb2078b\" returns successfully" Jun 20 19:45:26.498538 kubelet[2321]: E0620 19:45:26.498429 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:26.501920 kubelet[2321]: E0620 19:45:26.501849 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:26.504772 kubelet[2321]: E0620 19:45:26.504063 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:26.628241 kubelet[2321]: I0620 19:45:26.628142 2321 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:45:27.469091 kubelet[2321]: I0620 19:45:27.469055 2321 kubelet_node_status.go:75] 
"Successfully registered node" node="localhost" Jun 20 19:45:27.469091 kubelet[2321]: E0620 19:45:27.469087 2321 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jun 20 19:45:27.476235 kubelet[2321]: E0620 19:45:27.476193 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:27.505168 kubelet[2321]: E0620 19:45:27.505128 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:27.530682 kubelet[2321]: E0620 19:45:27.530640 2321 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jun 20 19:45:27.576970 kubelet[2321]: E0620 19:45:27.576927 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:27.677589 kubelet[2321]: E0620 19:45:27.677543 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:27.778263 kubelet[2321]: E0620 19:45:27.778138 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:27.878746 kubelet[2321]: E0620 19:45:27.878710 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:27.979792 kubelet[2321]: E0620 19:45:27.979753 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:28.080536 kubelet[2321]: E0620 19:45:28.080454 2321 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:28.462753 kubelet[2321]: I0620 19:45:28.462703 2321 apiserver.go:52] "Watching apiserver" Jun 20 
19:45:28.470805 kubelet[2321]: I0620 19:45:28.470778 2321 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 19:45:28.514582 kubelet[2321]: E0620 19:45:28.514549 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:29.507390 kubelet[2321]: E0620 19:45:29.507360 2321 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:29.536204 systemd[1]: Reload requested from client PID 2596 ('systemctl') (unit session-7.scope)... Jun 20 19:45:29.536219 systemd[1]: Reloading... Jun 20 19:45:29.597978 zram_generator::config[2639]: No configuration found. Jun 20 19:45:29.690203 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jun 20 19:45:29.817563 systemd[1]: Reloading finished in 281 ms. Jun 20 19:45:29.846984 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:45:29.869058 systemd[1]: kubelet.service: Deactivated successfully. Jun 20 19:45:29.869351 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jun 20 19:45:29.869398 systemd[1]: kubelet.service: Consumed 783ms CPU time, 131.6M memory peak. Jun 20 19:45:29.871137 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jun 20 19:45:30.064859 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jun 20 19:45:30.068628 (kubelet)[2684]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jun 20 19:45:30.108581 kubelet[2684]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:45:30.108581 kubelet[2684]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jun 20 19:45:30.108581 kubelet[2684]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jun 20 19:45:30.109035 kubelet[2684]: I0620 19:45:30.108641 2684 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jun 20 19:45:30.114591 kubelet[2684]: I0620 19:45:30.114542 2684 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Jun 20 19:45:30.114591 kubelet[2684]: I0620 19:45:30.114578 2684 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jun 20 19:45:30.114870 kubelet[2684]: I0620 19:45:30.114853 2684 server.go:934] "Client rotation is on, will bootstrap in background" Jun 20 19:45:30.116120 kubelet[2684]: I0620 19:45:30.116085 2684 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jun 20 19:45:30.117947 kubelet[2684]: I0620 19:45:30.117848 2684 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jun 20 19:45:30.122318 kubelet[2684]: I0620 19:45:30.122293 2684 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jun 20 19:45:30.126395 kubelet[2684]: I0620 19:45:30.126370 2684 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jun 20 19:45:30.126690 kubelet[2684]: I0620 19:45:30.126465 2684 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jun 20 19:45:30.126690 kubelet[2684]: I0620 19:45:30.126632 2684 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jun 20 19:45:30.126803 kubelet[2684]: I0620 19:45:30.126658 2684 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagef
s.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jun 20 19:45:30.126803 kubelet[2684]: I0620 19:45:30.126806 2684 topology_manager.go:138] "Creating topology manager with none policy" Jun 20 19:45:30.126927 kubelet[2684]: I0620 19:45:30.126829 2684 container_manager_linux.go:300] "Creating device plugin manager" Jun 20 19:45:30.126927 kubelet[2684]: I0620 19:45:30.126851 2684 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:45:30.126971 kubelet[2684]: I0620 19:45:30.126944 2684 kubelet.go:408] "Attempting to sync node with API server" Jun 20 19:45:30.126971 kubelet[2684]: I0620 19:45:30.126953 2684 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jun 20 19:45:30.127016 kubelet[2684]: I0620 19:45:30.126988 2684 kubelet.go:314] "Adding apiserver pod source" Jun 20 19:45:30.127016 kubelet[2684]: I0620 19:45:30.126999 2684 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jun 20 19:45:30.127982 kubelet[2684]: I0620 19:45:30.127960 2684 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jun 20 19:45:30.128331 kubelet[2684]: I0620 19:45:30.128310 2684 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jun 20 19:45:30.128689 kubelet[2684]: I0620 19:45:30.128671 2684 server.go:1274] "Started kubelet" Jun 20 19:45:30.129071 kubelet[2684]: I0620 19:45:30.129044 2684 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jun 20 
19:45:30.129107 kubelet[2684]: I0620 19:45:30.129071 2684 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jun 20 19:45:30.129309 kubelet[2684]: I0620 19:45:30.129288 2684 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jun 20 19:45:30.131831 kubelet[2684]: I0620 19:45:30.131175 2684 server.go:449] "Adding debug handlers to kubelet server" Jun 20 19:45:30.133089 kubelet[2684]: I0620 19:45:30.133066 2684 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jun 20 19:45:30.134302 kubelet[2684]: I0620 19:45:30.134160 2684 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jun 20 19:45:30.134778 kubelet[2684]: I0620 19:45:30.134754 2684 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Jun 20 19:45:30.134915 kubelet[2684]: E0620 19:45:30.134881 2684 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"localhost\" not found" Jun 20 19:45:30.136866 kubelet[2684]: I0620 19:45:30.136828 2684 factory.go:221] Registration of the systemd container factory successfully Jun 20 19:45:30.136958 kubelet[2684]: I0620 19:45:30.136935 2684 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jun 20 19:45:30.139687 kubelet[2684]: I0620 19:45:30.139670 2684 volume_manager.go:289] "Starting Kubelet Volume Manager" Jun 20 19:45:30.141445 kubelet[2684]: I0620 19:45:30.141427 2684 reconciler.go:26] "Reconciler: start to sync state" Jun 20 19:45:30.143479 kubelet[2684]: E0620 19:45:30.143453 2684 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jun 20 19:45:30.143479 kubelet[2684]: I0620 19:45:30.143477 2684 factory.go:221] Registration of the containerd container factory successfully Jun 20 19:45:30.148004 kubelet[2684]: I0620 19:45:30.147984 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jun 20 19:45:30.149728 kubelet[2684]: I0620 19:45:30.149471 2684 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jun 20 19:45:30.149728 kubelet[2684]: I0620 19:45:30.149487 2684 status_manager.go:217] "Starting to sync pod status with apiserver" Jun 20 19:45:30.149728 kubelet[2684]: I0620 19:45:30.149501 2684 kubelet.go:2321] "Starting kubelet main sync loop" Jun 20 19:45:30.149728 kubelet[2684]: E0620 19:45:30.149541 2684 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jun 20 19:45:30.178459 kubelet[2684]: I0620 19:45:30.178423 2684 cpu_manager.go:214] "Starting CPU manager" policy="none" Jun 20 19:45:30.178459 kubelet[2684]: I0620 19:45:30.178446 2684 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jun 20 19:45:30.178459 kubelet[2684]: I0620 19:45:30.178462 2684 state_mem.go:36] "Initialized new in-memory state store" Jun 20 19:45:30.178645 kubelet[2684]: I0620 19:45:30.178616 2684 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jun 20 19:45:30.178645 kubelet[2684]: I0620 19:45:30.178627 2684 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jun 20 19:45:30.178645 kubelet[2684]: I0620 19:45:30.178645 2684 policy_none.go:49] "None policy: Start" Jun 20 19:45:30.179339 kubelet[2684]: I0620 19:45:30.179278 2684 memory_manager.go:170] "Starting memorymanager" policy="None" Jun 20 19:45:30.179339 kubelet[2684]: I0620 19:45:30.179326 2684 state_mem.go:35] "Initializing new in-memory state store" Jun 20 19:45:30.179477 kubelet[2684]: 
I0620 19:45:30.179457 2684 state_mem.go:75] "Updated machine memory state" Jun 20 19:45:30.184506 kubelet[2684]: I0620 19:45:30.184469 2684 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jun 20 19:45:30.184679 kubelet[2684]: I0620 19:45:30.184648 2684 eviction_manager.go:189] "Eviction manager: starting control loop" Jun 20 19:45:30.184679 kubelet[2684]: I0620 19:45:30.184661 2684 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jun 20 19:45:30.184874 kubelet[2684]: I0620 19:45:30.184856 2684 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jun 20 19:45:30.256788 kubelet[2684]: E0620 19:45:30.256754 2684 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jun 20 19:45:30.290128 kubelet[2684]: I0620 19:45:30.290094 2684 kubelet_node_status.go:72] "Attempting to register node" node="localhost" Jun 20 19:45:30.297361 kubelet[2684]: I0620 19:45:30.297324 2684 kubelet_node_status.go:111] "Node was previously registered" node="localhost" Jun 20 19:45:30.297422 kubelet[2684]: I0620 19:45:30.297404 2684 kubelet_node_status.go:75] "Successfully registered node" node="localhost" Jun 20 19:45:30.342639 kubelet[2684]: I0620 19:45:30.342557 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:30.342639 kubelet[2684]: I0620 19:45:30.342584 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-kubeconfig\") pod 
\"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:30.342639 kubelet[2684]: I0620 19:45:30.342603 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/b35b56493416c25588cb530e37ffc065-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"b35b56493416c25588cb530e37ffc065\") " pod="kube-system/kube-scheduler-localhost" Jun 20 19:45:30.342639 kubelet[2684]: I0620 19:45:30.342616 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/25a4125d67cf64cfa500b3642bb850a8-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"25a4125d67cf64cfa500b3642bb850a8\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:45:30.342639 kubelet[2684]: I0620 19:45:30.342630 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/25a4125d67cf64cfa500b3642bb850a8-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"25a4125d67cf64cfa500b3642bb850a8\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:45:30.342787 kubelet[2684]: I0620 19:45:30.342686 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:30.342787 kubelet[2684]: I0620 19:45:30.342749 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: 
\"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:30.342787 kubelet[2684]: I0620 19:45:30.342771 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/25a4125d67cf64cfa500b3642bb850a8-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"25a4125d67cf64cfa500b3642bb850a8\") " pod="kube-system/kube-apiserver-localhost" Jun 20 19:45:30.342889 kubelet[2684]: I0620 19:45:30.342800 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3f04709fe51ae4ab5abd58e8da771b74-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"3f04709fe51ae4ab5abd58e8da771b74\") " pod="kube-system/kube-controller-manager-localhost" Jun 20 19:45:30.536652 sudo[2721]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jun 20 19:45:30.536989 sudo[2721]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=0) Jun 20 19:45:30.556181 kubelet[2684]: E0620 19:45:30.556157 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:30.557292 kubelet[2684]: E0620 19:45:30.557242 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:30.557292 kubelet[2684]: E0620 19:45:30.557269 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:30.988233 sudo[2721]: pam_unix(sudo:session): session closed for user root Jun 20 19:45:31.128631 
kubelet[2684]: I0620 19:45:31.128583 2684 apiserver.go:52] "Watching apiserver" Jun 20 19:45:31.135139 kubelet[2684]: I0620 19:45:31.135083 2684 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Jun 20 19:45:31.163843 kubelet[2684]: E0620 19:45:31.163795 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:31.164640 kubelet[2684]: E0620 19:45:31.164617 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:31.167724 kubelet[2684]: E0620 19:45:31.167695 2684 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jun 20 19:45:31.167877 kubelet[2684]: E0620 19:45:31.167805 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:31.183015 kubelet[2684]: I0620 19:45:31.182931 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.182895673 podStartE2EDuration="3.182895673s" podCreationTimestamp="2025-06-20 19:45:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:45:31.181784606 +0000 UTC m=+1.109472666" watchObservedRunningTime="2025-06-20 19:45:31.182895673 +0000 UTC m=+1.110583733" Jun 20 19:45:31.192877 kubelet[2684]: I0620 19:45:31.192788 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.192772898 podStartE2EDuration="1.192772898s" podCreationTimestamp="2025-06-20 
19:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:45:31.187182422 +0000 UTC m=+1.114870482" watchObservedRunningTime="2025-06-20 19:45:31.192772898 +0000 UTC m=+1.120460958" Jun 20 19:45:31.202649 kubelet[2684]: I0620 19:45:31.202543 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.202528098 podStartE2EDuration="1.202528098s" podCreationTimestamp="2025-06-20 19:45:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:45:31.193013236 +0000 UTC m=+1.120701296" watchObservedRunningTime="2025-06-20 19:45:31.202528098 +0000 UTC m=+1.130216158" Jun 20 19:45:32.164889 kubelet[2684]: E0620 19:45:32.164854 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:32.165332 kubelet[2684]: E0620 19:45:32.165014 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:32.413909 sudo[1773]: pam_unix(sudo:session): session closed for user root Jun 20 19:45:32.415321 sshd[1772]: Connection closed by 10.0.0.1 port 41364 Jun 20 19:45:32.415747 sshd-session[1770]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:32.420068 systemd[1]: sshd@6-10.0.0.133:22-10.0.0.1:41364.service: Deactivated successfully. Jun 20 19:45:32.421972 systemd[1]: session-7.scope: Deactivated successfully. Jun 20 19:45:32.422182 systemd[1]: session-7.scope: Consumed 4.160s CPU time, 260.7M memory peak. Jun 20 19:45:32.423490 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. 
Jun 20 19:45:32.424559 systemd-logind[1532]: Removed session 7. Jun 20 19:45:35.603751 kubelet[2684]: E0620 19:45:35.603705 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:35.712078 kubelet[2684]: I0620 19:45:35.712044 2684 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jun 20 19:45:35.712542 containerd[1559]: time="2025-06-20T19:45:35.712495092Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jun 20 19:45:35.713079 kubelet[2684]: I0620 19:45:35.712770 2684 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jun 20 19:45:36.189550 systemd[1]: Created slice kubepods-besteffort-pod3f0a160a_d600_43f3_b670_877c78062372.slice - libcontainer container kubepods-besteffort-pod3f0a160a_d600_43f3_b670_877c78062372.slice. Jun 20 19:45:36.203621 systemd[1]: Created slice kubepods-burstable-pod11cdcc50_16b5_4a26_8270_3396efa20b13.slice - libcontainer container kubepods-burstable-pod11cdcc50_16b5_4a26_8270_3396efa20b13.slice. 
Jun 20 19:45:36.285546 kubelet[2684]: I0620 19:45:36.285500 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-cgroup\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285546 kubelet[2684]: I0620 19:45:36.285537 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-config-path\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285546 kubelet[2684]: I0620 19:45:36.285556 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-xtables-lock\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285546 kubelet[2684]: I0620 19:45:36.285572 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f0a160a-d600-43f3-b670-877c78062372-xtables-lock\") pod \"kube-proxy-s8xd4\" (UID: \"3f0a160a-d600-43f3-b670-877c78062372\") " pod="kube-system/kube-proxy-s8xd4" Jun 20 19:45:36.285848 kubelet[2684]: I0620 19:45:36.285586 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-hostproc\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285848 kubelet[2684]: I0620 19:45:36.285602 2684 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-etc-cni-netd\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285848 kubelet[2684]: I0620 19:45:36.285616 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11cdcc50-16b5-4a26-8270-3396efa20b13-clustermesh-secrets\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285848 kubelet[2684]: I0620 19:45:36.285632 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11cdcc50-16b5-4a26-8270-3396efa20b13-hubble-tls\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285848 kubelet[2684]: I0620 19:45:36.285716 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f0a160a-d600-43f3-b670-877c78062372-lib-modules\") pod \"kube-proxy-s8xd4\" (UID: \"3f0a160a-d600-43f3-b670-877c78062372\") " pod="kube-system/kube-proxy-s8xd4" Jun 20 19:45:36.285848 kubelet[2684]: I0620 19:45:36.285764 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cni-path\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285991 kubelet[2684]: I0620 19:45:36.285788 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-host-proc-sys-net\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285991 kubelet[2684]: I0620 19:45:36.285805 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-lib-modules\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285991 kubelet[2684]: I0620 19:45:36.285850 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-run\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285991 kubelet[2684]: I0620 19:45:36.285867 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-host-proc-sys-kernel\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.285991 kubelet[2684]: I0620 19:45:36.285884 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hgczs\" (UniqueName: \"kubernetes.io/projected/11cdcc50-16b5-4a26-8270-3396efa20b13-kube-api-access-hgczs\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.286106 kubelet[2684]: I0620 19:45:36.285904 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3f0a160a-d600-43f3-b670-877c78062372-kube-proxy\") pod \"kube-proxy-s8xd4\" (UID: 
\"3f0a160a-d600-43f3-b670-877c78062372\") " pod="kube-system/kube-proxy-s8xd4" Jun 20 19:45:36.286106 kubelet[2684]: I0620 19:45:36.285919 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv496\" (UniqueName: \"kubernetes.io/projected/3f0a160a-d600-43f3-b670-877c78062372-kube-api-access-jv496\") pod \"kube-proxy-s8xd4\" (UID: \"3f0a160a-d600-43f3-b670-877c78062372\") " pod="kube-system/kube-proxy-s8xd4" Jun 20 19:45:36.286106 kubelet[2684]: I0620 19:45:36.285934 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-bpf-maps\") pod \"cilium-gptlx\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") " pod="kube-system/cilium-gptlx" Jun 20 19:45:36.500071 kubelet[2684]: E0620 19:45:36.499716 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:36.500449 containerd[1559]: time="2025-06-20T19:45:36.500400067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8xd4,Uid:3f0a160a-d600-43f3-b670-877c78062372,Namespace:kube-system,Attempt:0,}" Jun 20 19:45:36.508319 kubelet[2684]: E0620 19:45:36.508293 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:36.509099 containerd[1559]: time="2025-06-20T19:45:36.509011270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gptlx,Uid:11cdcc50-16b5-4a26-8270-3396efa20b13,Namespace:kube-system,Attempt:0,}" Jun 20 19:45:36.522543 containerd[1559]: time="2025-06-20T19:45:36.522491948Z" level=info msg="connecting to shim 846331cac474f08e76498fdaad2097e643495ab2b8be5bd8e600628880a726a8" 
address="unix:///run/containerd/s/b08ce8c5467b56b42630356a7772a0a778b26cb32369b92d04c0a0ab4d5bb496" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:45:36.539641 containerd[1559]: time="2025-06-20T19:45:36.539598643Z" level=info msg="connecting to shim a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232" address="unix:///run/containerd/s/b67cc9f3565641b14afba2e60293bdfef7ac47485b88d49073126f3a5cf564ec" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:45:36.551006 systemd[1]: Started cri-containerd-846331cac474f08e76498fdaad2097e643495ab2b8be5bd8e600628880a726a8.scope - libcontainer container 846331cac474f08e76498fdaad2097e643495ab2b8be5bd8e600628880a726a8. Jun 20 19:45:36.560889 systemd[1]: Started cri-containerd-a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232.scope - libcontainer container a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232. Jun 20 19:45:36.591638 containerd[1559]: time="2025-06-20T19:45:36.591570808Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s8xd4,Uid:3f0a160a-d600-43f3-b670-877c78062372,Namespace:kube-system,Attempt:0,} returns sandbox id \"846331cac474f08e76498fdaad2097e643495ab2b8be5bd8e600628880a726a8\"" Jun 20 19:45:36.594161 kubelet[2684]: E0620 19:45:36.594059 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:36.599204 containerd[1559]: time="2025-06-20T19:45:36.599068872Z" level=info msg="CreateContainer within sandbox \"846331cac474f08e76498fdaad2097e643495ab2b8be5bd8e600628880a726a8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jun 20 19:45:36.603087 containerd[1559]: time="2025-06-20T19:45:36.603029048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-gptlx,Uid:11cdcc50-16b5-4a26-8270-3396efa20b13,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\"" Jun 20 19:45:36.603733 kubelet[2684]: E0620 19:45:36.603710 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:36.605134 containerd[1559]: time="2025-06-20T19:45:36.605097605Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Jun 20 19:45:36.615566 containerd[1559]: time="2025-06-20T19:45:36.615493576Z" level=info msg="Container 2ba6a8d10fca61d8ff303b6136df02eb9b7ae149aeca1d5ae0981b8a7b8622ee: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:36.629194 containerd[1559]: time="2025-06-20T19:45:36.628839967Z" level=info msg="CreateContainer within sandbox \"846331cac474f08e76498fdaad2097e643495ab2b8be5bd8e600628880a726a8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2ba6a8d10fca61d8ff303b6136df02eb9b7ae149aeca1d5ae0981b8a7b8622ee\"" Jun 20 19:45:36.630493 containerd[1559]: time="2025-06-20T19:45:36.630454381Z" level=info msg="StartContainer for \"2ba6a8d10fca61d8ff303b6136df02eb9b7ae149aeca1d5ae0981b8a7b8622ee\"" Jun 20 19:45:36.632168 containerd[1559]: time="2025-06-20T19:45:36.632145015Z" level=info msg="connecting to shim 2ba6a8d10fca61d8ff303b6136df02eb9b7ae149aeca1d5ae0981b8a7b8622ee" address="unix:///run/containerd/s/b08ce8c5467b56b42630356a7772a0a778b26cb32369b92d04c0a0ab4d5bb496" protocol=ttrpc version=3 Jun 20 19:45:36.654068 systemd[1]: Started cri-containerd-2ba6a8d10fca61d8ff303b6136df02eb9b7ae149aeca1d5ae0981b8a7b8622ee.scope - libcontainer container 2ba6a8d10fca61d8ff303b6136df02eb9b7ae149aeca1d5ae0981b8a7b8622ee. 
Jun 20 19:45:36.700865 containerd[1559]: time="2025-06-20T19:45:36.700800785Z" level=info msg="StartContainer for \"2ba6a8d10fca61d8ff303b6136df02eb9b7ae149aeca1d5ae0981b8a7b8622ee\" returns successfully" Jun 20 19:45:36.734272 systemd[1]: Created slice kubepods-besteffort-podda868121_5b31_40f2_9b37_749d547f61d0.slice - libcontainer container kubepods-besteffort-podda868121_5b31_40f2_9b37_749d547f61d0.slice. Jun 20 19:45:36.789608 kubelet[2684]: I0620 19:45:36.789465 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da868121-5b31-40f2-9b37-749d547f61d0-cilium-config-path\") pod \"cilium-operator-5d85765b45-n9l2r\" (UID: \"da868121-5b31-40f2-9b37-749d547f61d0\") " pod="kube-system/cilium-operator-5d85765b45-n9l2r" Jun 20 19:45:36.789608 kubelet[2684]: I0620 19:45:36.789504 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scn4v\" (UniqueName: \"kubernetes.io/projected/da868121-5b31-40f2-9b37-749d547f61d0-kube-api-access-scn4v\") pod \"cilium-operator-5d85765b45-n9l2r\" (UID: \"da868121-5b31-40f2-9b37-749d547f61d0\") " pod="kube-system/cilium-operator-5d85765b45-n9l2r" Jun 20 19:45:37.041312 kubelet[2684]: E0620 19:45:37.041208 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:37.041961 containerd[1559]: time="2025-06-20T19:45:37.041862056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n9l2r,Uid:da868121-5b31-40f2-9b37-749d547f61d0,Namespace:kube-system,Attempt:0,}" Jun 20 19:45:37.078292 containerd[1559]: time="2025-06-20T19:45:37.078242229Z" level=info msg="connecting to shim 645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678" 
address="unix:///run/containerd/s/1928b07f1ee070258a89f62bb236eeef4ed7a31526bad42c22bbfeff7fd8fe3a" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:45:37.105054 systemd[1]: Started cri-containerd-645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678.scope - libcontainer container 645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678. Jun 20 19:45:37.158436 containerd[1559]: time="2025-06-20T19:45:37.158388908Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5d85765b45-n9l2r,Uid:da868121-5b31-40f2-9b37-749d547f61d0,Namespace:kube-system,Attempt:0,} returns sandbox id \"645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678\"" Jun 20 19:45:37.159139 kubelet[2684]: E0620 19:45:37.159107 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:37.175123 kubelet[2684]: E0620 19:45:37.174790 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:37.182000 kubelet[2684]: I0620 19:45:37.181923 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s8xd4" podStartSLOduration=1.181902529 podStartE2EDuration="1.181902529s" podCreationTimestamp="2025-06-20 19:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:45:37.181549203 +0000 UTC m=+7.109237263" watchObservedRunningTime="2025-06-20 19:45:37.181902529 +0000 UTC m=+7.109590589" Jun 20 19:45:38.129355 kubelet[2684]: E0620 19:45:38.129319 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:38.177072 kubelet[2684]: 
E0620 19:45:38.177035 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:39.178951 kubelet[2684]: E0620 19:45:39.178921 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:40.053651 kubelet[2684]: E0620 19:45:40.053296 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:40.179657 kubelet[2684]: E0620 19:45:40.179628 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:43.329794 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount708271137.mount: Deactivated successfully. 
Jun 20 19:45:45.608292 kubelet[2684]: E0620 19:45:45.608247 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:47.649517 containerd[1559]: time="2025-06-20T19:45:47.649462076Z" level=info msg="ImageCreate event name:\"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:47.650200 containerd[1559]: time="2025-06-20T19:45:47.650139128Z" level=info msg="stop pulling image quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5: active requests=0, bytes read=166730503" Jun 20 19:45:47.651437 containerd[1559]: time="2025-06-20T19:45:47.651392596Z" level=info msg="ImageCreate event name:\"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:47.652790 containerd[1559]: time="2025-06-20T19:45:47.652748754Z" level=info msg="Pulled image \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" with image id \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\", repo tag \"\", repo digest \"quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\", size \"166719855\" in 11.04746039s" Jun 20 19:45:47.652790 containerd[1559]: time="2025-06-20T19:45:47.652777395Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Jun 20 19:45:47.658671 containerd[1559]: time="2025-06-20T19:45:47.658635658Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Jun 20 19:45:47.662954 containerd[1559]: time="2025-06-20T19:45:47.661462570Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:45:47.671310 containerd[1559]: time="2025-06-20T19:45:47.671272636Z" level=info msg="Container 100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:47.674871 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1587557681.mount: Deactivated successfully. Jun 20 19:45:47.677464 containerd[1559]: time="2025-06-20T19:45:47.677413324Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\"" Jun 20 19:45:47.678044 containerd[1559]: time="2025-06-20T19:45:47.678015696Z" level=info msg="StartContainer for \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\"" Jun 20 19:45:47.678883 containerd[1559]: time="2025-06-20T19:45:47.678860618Z" level=info msg="connecting to shim 100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e" address="unix:///run/containerd/s/b67cc9f3565641b14afba2e60293bdfef7ac47485b88d49073126f3a5cf564ec" protocol=ttrpc version=3 Jun 20 19:45:47.702963 systemd[1]: Started cri-containerd-100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e.scope - libcontainer container 100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e. 
Jun 20 19:45:47.738347 containerd[1559]: time="2025-06-20T19:45:47.738305712Z" level=info msg="StartContainer for \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\" returns successfully" Jun 20 19:45:47.746924 update_engine[1535]: I20250620 19:45:47.746865 1535 update_attempter.cc:509] Updating boot flags... Jun 20 19:45:47.749051 systemd[1]: cri-containerd-100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e.scope: Deactivated successfully. Jun 20 19:45:47.751175 containerd[1559]: time="2025-06-20T19:45:47.751123247Z" level=info msg="received exit event container_id:\"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\" id:\"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\" pid:3104 exited_at:{seconds:1750448747 nanos:750579992}" Jun 20 19:45:47.751314 containerd[1559]: time="2025-06-20T19:45:47.751234045Z" level=info msg="TaskExit event in podsandbox handler container_id:\"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\" id:\"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\" pid:3104 exited_at:{seconds:1750448747 nanos:750579992}" Jun 20 19:45:47.773675 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e-rootfs.mount: Deactivated successfully. 
Jun 20 19:45:48.195869 kubelet[2684]: E0620 19:45:48.195799 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:48.198947 containerd[1559]: time="2025-06-20T19:45:48.198910827Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:45:48.210359 containerd[1559]: time="2025-06-20T19:45:48.210315813Z" level=info msg="Container 997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:48.217067 containerd[1559]: time="2025-06-20T19:45:48.217029070Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\"" Jun 20 19:45:48.217618 containerd[1559]: time="2025-06-20T19:45:48.217595948Z" level=info msg="StartContainer for \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\"" Jun 20 19:45:48.218543 containerd[1559]: time="2025-06-20T19:45:48.218510627Z" level=info msg="connecting to shim 997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea" address="unix:///run/containerd/s/b67cc9f3565641b14afba2e60293bdfef7ac47485b88d49073126f3a5cf564ec" protocol=ttrpc version=3 Jun 20 19:45:48.245032 systemd[1]: Started cri-containerd-997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea.scope - libcontainer container 997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea. 
Jun 20 19:45:48.275749 containerd[1559]: time="2025-06-20T19:45:48.275701312Z" level=info msg="StartContainer for \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\" returns successfully" Jun 20 19:45:48.288593 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jun 20 19:45:48.288920 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:45:48.289181 systemd[1]: Stopping systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:45:48.290878 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jun 20 19:45:48.292024 systemd[1]: cri-containerd-997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea.scope: Deactivated successfully. Jun 20 19:45:48.294019 containerd[1559]: time="2025-06-20T19:45:48.293966790Z" level=info msg="received exit event container_id:\"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\" id:\"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\" pid:3168 exited_at:{seconds:1750448748 nanos:293691985}" Jun 20 19:45:48.294301 containerd[1559]: time="2025-06-20T19:45:48.294195999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\" id:\"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\" pid:3168 exited_at:{seconds:1750448748 nanos:293691985}" Jun 20 19:45:48.321508 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jun 20 19:45:48.935382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1919782962.mount: Deactivated successfully. 
Jun 20 19:45:49.199405 kubelet[2684]: E0620 19:45:49.199031 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:49.201469 containerd[1559]: time="2025-06-20T19:45:49.201435632Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:45:49.852682 containerd[1559]: time="2025-06-20T19:45:49.852629667Z" level=info msg="Container 435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:49.856932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4177122584.mount: Deactivated successfully. Jun 20 19:45:50.153826 containerd[1559]: time="2025-06-20T19:45:50.153782790Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\"" Jun 20 19:45:50.154165 containerd[1559]: time="2025-06-20T19:45:50.154143680Z" level=info msg="StartContainer for \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\"" Jun 20 19:45:50.155411 containerd[1559]: time="2025-06-20T19:45:50.155377580Z" level=info msg="connecting to shim 435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1" address="unix:///run/containerd/s/b67cc9f3565641b14afba2e60293bdfef7ac47485b88d49073126f3a5cf564ec" protocol=ttrpc version=3 Jun 20 19:45:50.168673 containerd[1559]: time="2025-06-20T19:45:50.168626830Z" level=info msg="ImageCreate event name:\"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:50.169582 containerd[1559]: 
time="2025-06-20T19:45:50.169465697Z" level=info msg="stop pulling image quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e: active requests=0, bytes read=18904197" Jun 20 19:45:50.171353 containerd[1559]: time="2025-06-20T19:45:50.170661738Z" level=info msg="ImageCreate event name:\"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jun 20 19:45:50.172866 containerd[1559]: time="2025-06-20T19:45:50.172719995Z" level=info msg="Pulled image \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" with image id \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\", repo tag \"\", repo digest \"quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\", size \"18897442\" in 2.51404364s" Jun 20 19:45:50.172866 containerd[1559]: time="2025-06-20T19:45:50.172758014Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Jun 20 19:45:50.174105 systemd[1]: Started cri-containerd-435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1.scope - libcontainer container 435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1. 
Jun 20 19:45:50.174768 containerd[1559]: time="2025-06-20T19:45:50.174705990Z" level=info msg="CreateContainer within sandbox \"645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Jun 20 19:45:50.187822 containerd[1559]: time="2025-06-20T19:45:50.187650046Z" level=info msg="Container 1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:50.196475 containerd[1559]: time="2025-06-20T19:45:50.196406579Z" level=info msg="CreateContainer within sandbox \"645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\"" Jun 20 19:45:50.196971 containerd[1559]: time="2025-06-20T19:45:50.196938479Z" level=info msg="StartContainer for \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\"" Jun 20 19:45:50.197891 containerd[1559]: time="2025-06-20T19:45:50.197851773Z" level=info msg="connecting to shim 1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7" address="unix:///run/containerd/s/1928b07f1ee070258a89f62bb236eeef4ed7a31526bad42c22bbfeff7fd8fe3a" protocol=ttrpc version=3 Jun 20 19:45:50.217961 systemd[1]: Started cri-containerd-1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7.scope - libcontainer container 1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7. Jun 20 19:45:50.223463 systemd[1]: cri-containerd-435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1.scope: Deactivated successfully. 
Jun 20 19:45:50.223958 containerd[1559]: time="2025-06-20T19:45:50.223918543Z" level=info msg="StartContainer for \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\" returns successfully" Jun 20 19:45:50.227445 containerd[1559]: time="2025-06-20T19:45:50.227407455Z" level=info msg="received exit event container_id:\"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\" id:\"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\" pid:3230 exited_at:{seconds:1750448750 nanos:226747584}" Jun 20 19:45:50.229387 containerd[1559]: time="2025-06-20T19:45:50.229362555Z" level=info msg="TaskExit event in podsandbox handler container_id:\"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\" id:\"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\" pid:3230 exited_at:{seconds:1750448750 nanos:226747584}" Jun 20 19:45:50.255571 containerd[1559]: time="2025-06-20T19:45:50.255535266Z" level=info msg="StartContainer for \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" returns successfully" Jun 20 19:45:50.854717 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1-rootfs.mount: Deactivated successfully. 
Jun 20 19:45:51.221733 kubelet[2684]: E0620 19:45:51.221692 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:51.226834 kubelet[2684]: E0620 19:45:51.226790 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:51.228623 containerd[1559]: time="2025-06-20T19:45:51.228587239Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:45:51.242011 containerd[1559]: time="2025-06-20T19:45:51.241883764Z" level=info msg="Container b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:51.246871 kubelet[2684]: I0620 19:45:51.246801 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-5d85765b45-n9l2r" podStartSLOduration=2.233118775 podStartE2EDuration="15.246783736s" podCreationTimestamp="2025-06-20 19:45:36 +0000 UTC" firstStartedPulling="2025-06-20 19:45:37.159622249 +0000 UTC m=+7.087310299" lastFinishedPulling="2025-06-20 19:45:50.1732872 +0000 UTC m=+20.100975260" observedRunningTime="2025-06-20 19:45:51.230790157 +0000 UTC m=+21.158478217" watchObservedRunningTime="2025-06-20 19:45:51.246783736 +0000 UTC m=+21.174471796" Jun 20 19:45:51.250088 containerd[1559]: time="2025-06-20T19:45:51.250054552Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\"" Jun 20 19:45:51.250593 containerd[1559]: time="2025-06-20T19:45:51.250554992Z" 
level=info msg="StartContainer for \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\"" Jun 20 19:45:51.251437 containerd[1559]: time="2025-06-20T19:45:51.251413952Z" level=info msg="connecting to shim b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768" address="unix:///run/containerd/s/b67cc9f3565641b14afba2e60293bdfef7ac47485b88d49073126f3a5cf564ec" protocol=ttrpc version=3 Jun 20 19:45:51.273960 systemd[1]: Started cri-containerd-b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768.scope - libcontainer container b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768. Jun 20 19:45:51.300366 systemd[1]: cri-containerd-b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768.scope: Deactivated successfully. Jun 20 19:45:51.300786 containerd[1559]: time="2025-06-20T19:45:51.300746723Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\" id:\"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\" pid:3305 exited_at:{seconds:1750448751 nanos:300537905}" Jun 20 19:45:51.302084 containerd[1559]: time="2025-06-20T19:45:51.302050075Z" level=info msg="received exit event container_id:\"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\" id:\"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\" pid:3305 exited_at:{seconds:1750448751 nanos:300537905}" Jun 20 19:45:51.309685 containerd[1559]: time="2025-06-20T19:45:51.309617209Z" level=info msg="StartContainer for \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\" returns successfully" Jun 20 19:45:51.322584 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768-rootfs.mount: Deactivated successfully. 
Jun 20 19:45:52.231837 kubelet[2684]: E0620 19:45:52.231775 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:52.232411 kubelet[2684]: E0620 19:45:52.231846 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:52.234632 containerd[1559]: time="2025-06-20T19:45:52.234584434Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:45:52.285466 containerd[1559]: time="2025-06-20T19:45:52.285382308Z" level=info msg="Container 728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:52.289694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2950821802.mount: Deactivated successfully. 
Jun 20 19:45:52.292983 containerd[1559]: time="2025-06-20T19:45:52.292937118Z" level=info msg="CreateContainer within sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\"" Jun 20 19:45:52.293501 containerd[1559]: time="2025-06-20T19:45:52.293466312Z" level=info msg="StartContainer for \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\"" Jun 20 19:45:52.294439 containerd[1559]: time="2025-06-20T19:45:52.294404580Z" level=info msg="connecting to shim 728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6" address="unix:///run/containerd/s/b67cc9f3565641b14afba2e60293bdfef7ac47485b88d49073126f3a5cf564ec" protocol=ttrpc version=3 Jun 20 19:45:52.324942 systemd[1]: Started cri-containerd-728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6.scope - libcontainer container 728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6. Jun 20 19:45:52.359974 containerd[1559]: time="2025-06-20T19:45:52.359926180Z" level=info msg="StartContainer for \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" returns successfully" Jun 20 19:45:52.508502 containerd[1559]: time="2025-06-20T19:45:52.508406428Z" level=info msg="TaskExit event in podsandbox handler container_id:\"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" id:\"5fcb2542201f3ffd97f737b224668a7fbb40b3320ec2f7da06e2285e8df2ebed\" pid:3378 exited_at:{seconds:1750448752 nanos:507979187}" Jun 20 19:45:52.571416 kubelet[2684]: I0620 19:45:52.571386 2684 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jun 20 19:45:52.608539 systemd[1]: Created slice kubepods-burstable-pod14f6cf09_4072_4b1c_b194_ae75f39437e3.slice - libcontainer container kubepods-burstable-pod14f6cf09_4072_4b1c_b194_ae75f39437e3.slice. 
Jun 20 19:45:52.620058 systemd[1]: Created slice kubepods-burstable-pod6f6f92e7_778e_4711_819e_44455651eb79.slice - libcontainer container kubepods-burstable-pod6f6f92e7_778e_4711_819e_44455651eb79.slice. Jun 20 19:45:52.766448 kubelet[2684]: I0620 19:45:52.766165 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s7g9\" (UniqueName: \"kubernetes.io/projected/14f6cf09-4072-4b1c-b194-ae75f39437e3-kube-api-access-2s7g9\") pod \"coredns-7c65d6cfc9-2vkh4\" (UID: \"14f6cf09-4072-4b1c-b194-ae75f39437e3\") " pod="kube-system/coredns-7c65d6cfc9-2vkh4" Jun 20 19:45:52.766448 kubelet[2684]: I0620 19:45:52.766210 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwq4l\" (UniqueName: \"kubernetes.io/projected/6f6f92e7-778e-4711-819e-44455651eb79-kube-api-access-xwq4l\") pod \"coredns-7c65d6cfc9-tjbbj\" (UID: \"6f6f92e7-778e-4711-819e-44455651eb79\") " pod="kube-system/coredns-7c65d6cfc9-tjbbj" Jun 20 19:45:52.766448 kubelet[2684]: I0620 19:45:52.766236 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/14f6cf09-4072-4b1c-b194-ae75f39437e3-config-volume\") pod \"coredns-7c65d6cfc9-2vkh4\" (UID: \"14f6cf09-4072-4b1c-b194-ae75f39437e3\") " pod="kube-system/coredns-7c65d6cfc9-2vkh4" Jun 20 19:45:52.766448 kubelet[2684]: I0620 19:45:52.766254 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6f6f92e7-778e-4711-819e-44455651eb79-config-volume\") pod \"coredns-7c65d6cfc9-tjbbj\" (UID: \"6f6f92e7-778e-4711-819e-44455651eb79\") " pod="kube-system/coredns-7c65d6cfc9-tjbbj" Jun 20 19:45:52.916501 kubelet[2684]: E0620 19:45:52.916473 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:52.916969 containerd[1559]: time="2025-06-20T19:45:52.916926539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2vkh4,Uid:14f6cf09-4072-4b1c-b194-ae75f39437e3,Namespace:kube-system,Attempt:0,}" Jun 20 19:45:52.924254 kubelet[2684]: E0620 19:45:52.924213 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:52.924676 containerd[1559]: time="2025-06-20T19:45:52.924646143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tjbbj,Uid:6f6f92e7-778e-4711-819e-44455651eb79,Namespace:kube-system,Attempt:0,}" Jun 20 19:45:53.237989 kubelet[2684]: E0620 19:45:53.237953 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:53.418980 kubelet[2684]: I0620 19:45:53.418924 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-gptlx" podStartSLOduration=6.365760098 podStartE2EDuration="17.418907612s" podCreationTimestamp="2025-06-20 19:45:36 +0000 UTC" firstStartedPulling="2025-06-20 19:45:36.604643783 +0000 UTC m=+6.532331843" lastFinishedPulling="2025-06-20 19:45:47.657791297 +0000 UTC m=+17.585479357" observedRunningTime="2025-06-20 19:45:53.418560241 +0000 UTC m=+23.346248321" watchObservedRunningTime="2025-06-20 19:45:53.418907612 +0000 UTC m=+23.346595672" Jun 20 19:45:54.239026 kubelet[2684]: E0620 19:45:54.238981 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:54.556526 systemd-networkd[1480]: cilium_host: Link UP Jun 20 19:45:54.557282 systemd-networkd[1480]: cilium_net: Link UP Jun 20 
19:45:54.557597 systemd-networkd[1480]: cilium_host: Gained carrier Jun 20 19:45:54.557917 systemd-networkd[1480]: cilium_net: Gained carrier Jun 20 19:45:54.650677 systemd-networkd[1480]: cilium_vxlan: Link UP Jun 20 19:45:54.650687 systemd-networkd[1480]: cilium_vxlan: Gained carrier Jun 20 19:45:54.846844 kernel: NET: Registered PF_ALG protocol family Jun 20 19:45:54.887985 systemd-networkd[1480]: cilium_net: Gained IPv6LL Jun 20 19:45:55.063928 systemd-networkd[1480]: cilium_host: Gained IPv6LL Jun 20 19:45:55.240964 kubelet[2684]: E0620 19:45:55.240936 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:55.446614 systemd-networkd[1480]: lxc_health: Link UP Jun 20 19:45:55.447680 systemd-networkd[1480]: lxc_health: Gained carrier Jun 20 19:45:55.704031 systemd-networkd[1480]: cilium_vxlan: Gained IPv6LL Jun 20 19:45:55.840697 systemd-networkd[1480]: lxc3dfa3d041016: Link UP Jun 20 19:45:55.843872 kernel: eth0: renamed from tmp3b2ff Jun 20 19:45:55.845126 systemd-networkd[1480]: lxc3dfa3d041016: Gained carrier Jun 20 19:45:55.925334 systemd-networkd[1480]: lxc7719c4b9cee0: Link UP Jun 20 19:45:55.936850 kernel: eth0: renamed from tmpc028a Jun 20 19:45:55.936596 systemd-networkd[1480]: lxc7719c4b9cee0: Gained carrier Jun 20 19:45:56.474611 systemd-networkd[1480]: lxc_health: Gained IPv6LL Jun 20 19:45:56.510122 kubelet[2684]: E0620 19:45:56.510079 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:57.559992 systemd-networkd[1480]: lxc3dfa3d041016: Gained IPv6LL Jun 20 19:45:57.752999 systemd-networkd[1480]: lxc7719c4b9cee0: Gained IPv6LL Jun 20 19:45:58.327664 systemd[1]: Started sshd@7-10.0.0.133:22-10.0.0.1:52862.service - OpenSSH per-connection server daemon (10.0.0.1:52862). 
Jun 20 19:45:58.404403 sshd[3849]: Accepted publickey for core from 10.0.0.1 port 52862 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4 Jun 20 19:45:58.405893 sshd-session[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:45:58.411048 systemd-logind[1532]: New session 8 of user core. Jun 20 19:45:58.416949 systemd[1]: Started session-8.scope - Session 8 of User core. Jun 20 19:45:58.539463 sshd[3851]: Connection closed by 10.0.0.1 port 52862 Jun 20 19:45:58.541441 sshd-session[3849]: pam_unix(sshd:session): session closed for user core Jun 20 19:45:58.545548 systemd[1]: sshd@7-10.0.0.133:22-10.0.0.1:52862.service: Deactivated successfully. Jun 20 19:45:58.547382 systemd[1]: session-8.scope: Deactivated successfully. Jun 20 19:45:58.548105 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Jun 20 19:45:58.549374 systemd-logind[1532]: Removed session 8. Jun 20 19:45:59.176637 containerd[1559]: time="2025-06-20T19:45:59.176588665Z" level=info msg="connecting to shim 3b2ffa4d5ba8d64d114f8923ee78d7e044a281905900ec785f2f6dea909f14ce" address="unix:///run/containerd/s/2f71c644a8d101931cd011a385dbf3c4adce01c06a9a3f40fa99e8b1919bfcaf" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:45:59.178395 containerd[1559]: time="2025-06-20T19:45:59.178356715Z" level=info msg="connecting to shim c028a0d33f9681a9a65e20df6153ad4815a6e411277d3b0651f6e09c7cae3df9" address="unix:///run/containerd/s/e59981585ddb0710d01002338018491466bf6cc4e38e289a5d473336b4e94e9b" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:45:59.202941 systemd[1]: Started cri-containerd-3b2ffa4d5ba8d64d114f8923ee78d7e044a281905900ec785f2f6dea909f14ce.scope - libcontainer container 3b2ffa4d5ba8d64d114f8923ee78d7e044a281905900ec785f2f6dea909f14ce. 
Jun 20 19:45:59.204614 systemd[1]: Started cri-containerd-c028a0d33f9681a9a65e20df6153ad4815a6e411277d3b0651f6e09c7cae3df9.scope - libcontainer container c028a0d33f9681a9a65e20df6153ad4815a6e411277d3b0651f6e09c7cae3df9. Jun 20 19:45:59.216232 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:45:59.218098 systemd-resolved[1411]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jun 20 19:45:59.248516 containerd[1559]: time="2025-06-20T19:45:59.248473316Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-2vkh4,Uid:14f6cf09-4072-4b1c-b194-ae75f39437e3,Namespace:kube-system,Attempt:0,} returns sandbox id \"3b2ffa4d5ba8d64d114f8923ee78d7e044a281905900ec785f2f6dea909f14ce\"" Jun 20 19:45:59.249122 kubelet[2684]: E0620 19:45:59.249098 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:59.251541 containerd[1559]: time="2025-06-20T19:45:59.251494421Z" level=info msg="CreateContainer within sandbox \"3b2ffa4d5ba8d64d114f8923ee78d7e044a281905900ec785f2f6dea909f14ce\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:45:59.252751 containerd[1559]: time="2025-06-20T19:45:59.252726232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tjbbj,Uid:6f6f92e7-778e-4711-819e-44455651eb79,Namespace:kube-system,Attempt:0,} returns sandbox id \"c028a0d33f9681a9a65e20df6153ad4815a6e411277d3b0651f6e09c7cae3df9\"" Jun 20 19:45:59.253388 kubelet[2684]: E0620 19:45:59.253360 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:45:59.254974 containerd[1559]: time="2025-06-20T19:45:59.254936079Z" level=info 
msg="CreateContainer within sandbox \"c028a0d33f9681a9a65e20df6153ad4815a6e411277d3b0651f6e09c7cae3df9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jun 20 19:45:59.263861 containerd[1559]: time="2025-06-20T19:45:59.263469056Z" level=info msg="Container 2a0275de0834702e8637cd1cefd0d94aca2849dac8e258a6ff0ad7b304f6b8df: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:59.266996 containerd[1559]: time="2025-06-20T19:45:59.266964153Z" level=info msg="Container 5121b0b95aba65815e6b58100e7d5e7bec6312eb77cd21f891ab23623cef0a77: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:45:59.272573 containerd[1559]: time="2025-06-20T19:45:59.272529635Z" level=info msg="CreateContainer within sandbox \"3b2ffa4d5ba8d64d114f8923ee78d7e044a281905900ec785f2f6dea909f14ce\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2a0275de0834702e8637cd1cefd0d94aca2849dac8e258a6ff0ad7b304f6b8df\"" Jun 20 19:45:59.273021 containerd[1559]: time="2025-06-20T19:45:59.272935159Z" level=info msg="StartContainer for \"2a0275de0834702e8637cd1cefd0d94aca2849dac8e258a6ff0ad7b304f6b8df\"" Jun 20 19:45:59.273681 containerd[1559]: time="2025-06-20T19:45:59.273658277Z" level=info msg="connecting to shim 2a0275de0834702e8637cd1cefd0d94aca2849dac8e258a6ff0ad7b304f6b8df" address="unix:///run/containerd/s/2f71c644a8d101931cd011a385dbf3c4adce01c06a9a3f40fa99e8b1919bfcaf" protocol=ttrpc version=3 Jun 20 19:45:59.275068 containerd[1559]: time="2025-06-20T19:45:59.275005172Z" level=info msg="CreateContainer within sandbox \"c028a0d33f9681a9a65e20df6153ad4815a6e411277d3b0651f6e09c7cae3df9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5121b0b95aba65815e6b58100e7d5e7bec6312eb77cd21f891ab23623cef0a77\"" Jun 20 19:45:59.275438 containerd[1559]: time="2025-06-20T19:45:59.275415195Z" level=info msg="StartContainer for \"5121b0b95aba65815e6b58100e7d5e7bec6312eb77cd21f891ab23623cef0a77\"" Jun 20 19:45:59.277126 containerd[1559]: 
time="2025-06-20T19:45:59.277057900Z" level=info msg="connecting to shim 5121b0b95aba65815e6b58100e7d5e7bec6312eb77cd21f891ab23623cef0a77" address="unix:///run/containerd/s/e59981585ddb0710d01002338018491466bf6cc4e38e289a5d473336b4e94e9b" protocol=ttrpc version=3 Jun 20 19:45:59.294948 systemd[1]: Started cri-containerd-2a0275de0834702e8637cd1cefd0d94aca2849dac8e258a6ff0ad7b304f6b8df.scope - libcontainer container 2a0275de0834702e8637cd1cefd0d94aca2849dac8e258a6ff0ad7b304f6b8df. Jun 20 19:45:59.298103 systemd[1]: Started cri-containerd-5121b0b95aba65815e6b58100e7d5e7bec6312eb77cd21f891ab23623cef0a77.scope - libcontainer container 5121b0b95aba65815e6b58100e7d5e7bec6312eb77cd21f891ab23623cef0a77. Jun 20 19:45:59.325423 containerd[1559]: time="2025-06-20T19:45:59.325384354Z" level=info msg="StartContainer for \"2a0275de0834702e8637cd1cefd0d94aca2849dac8e258a6ff0ad7b304f6b8df\" returns successfully" Jun 20 19:45:59.331902 containerd[1559]: time="2025-06-20T19:45:59.331799761Z" level=info msg="StartContainer for \"5121b0b95aba65815e6b58100e7d5e7bec6312eb77cd21f891ab23623cef0a77\" returns successfully" Jun 20 19:45:59.773900 kubelet[2684]: I0620 19:45:59.773866 2684 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jun 20 19:45:59.774272 kubelet[2684]: E0620 19:45:59.774245 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:00.253853 kubelet[2684]: E0620 19:46:00.251425 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:00.255039 kubelet[2684]: E0620 19:46:00.255016 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:00.255393 
kubelet[2684]: E0620 19:46:00.255371 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:00.276505 kubelet[2684]: I0620 19:46:00.276247 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tjbbj" podStartSLOduration=24.276188819 podStartE2EDuration="24.276188819s" podCreationTimestamp="2025-06-20 19:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:46:00.264378318 +0000 UTC m=+30.192066398" watchObservedRunningTime="2025-06-20 19:46:00.276188819 +0000 UTC m=+30.203876879" Jun 20 19:46:00.277141 kubelet[2684]: I0620 19:46:00.277099 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-2vkh4" podStartSLOduration=24.277091386 podStartE2EDuration="24.277091386s" podCreationTimestamp="2025-06-20 19:45:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:46:00.276099708 +0000 UTC m=+30.203787789" watchObservedRunningTime="2025-06-20 19:46:00.277091386 +0000 UTC m=+30.204779446" Jun 20 19:46:01.256092 kubelet[2684]: E0620 19:46:01.256066 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:01.256516 kubelet[2684]: E0620 19:46:01.256105 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:02.257291 kubelet[2684]: E0620 19:46:02.257260 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the 
applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:03.556214 systemd[1]: Started sshd@8-10.0.0.133:22-10.0.0.1:52876.service - OpenSSH per-connection server daemon (10.0.0.1:52876). Jun 20 19:46:03.615240 sshd[4042]: Accepted publickey for core from 10.0.0.1 port 52876 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4 Jun 20 19:46:03.617131 sshd-session[4042]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:46:03.622003 systemd-logind[1532]: New session 9 of user core. Jun 20 19:46:03.630980 systemd[1]: Started session-9.scope - Session 9 of User core. Jun 20 19:46:03.749931 sshd[4044]: Connection closed by 10.0.0.1 port 52876 Jun 20 19:46:03.750286 sshd-session[4042]: pam_unix(sshd:session): session closed for user core Jun 20 19:46:03.753717 systemd[1]: sshd@8-10.0.0.133:22-10.0.0.1:52876.service: Deactivated successfully. Jun 20 19:46:03.755999 systemd[1]: session-9.scope: Deactivated successfully. Jun 20 19:46:03.757553 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit. Jun 20 19:46:03.758951 systemd-logind[1532]: Removed session 9. Jun 20 19:46:08.763777 systemd[1]: Started sshd@9-10.0.0.133:22-10.0.0.1:55458.service - OpenSSH per-connection server daemon (10.0.0.1:55458). Jun 20 19:46:08.814361 sshd[4061]: Accepted publickey for core from 10.0.0.1 port 55458 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4 Jun 20 19:46:08.816160 sshd-session[4061]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:46:08.820648 systemd-logind[1532]: New session 10 of user core. Jun 20 19:46:08.826973 systemd[1]: Started session-10.scope - Session 10 of User core. Jun 20 19:46:08.937428 sshd[4063]: Connection closed by 10.0.0.1 port 55458 Jun 20 19:46:08.937718 sshd-session[4061]: pam_unix(sshd:session): session closed for user core Jun 20 19:46:08.941466 systemd[1]: sshd@9-10.0.0.133:22-10.0.0.1:55458.service: Deactivated successfully. 
Jun 20 19:46:08.943653 systemd[1]: session-10.scope: Deactivated successfully. Jun 20 19:46:08.946762 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. Jun 20 19:46:08.947901 systemd-logind[1532]: Removed session 10. Jun 20 19:46:13.954252 systemd[1]: Started sshd@10-10.0.0.133:22-10.0.0.1:55464.service - OpenSSH per-connection server daemon (10.0.0.1:55464). Jun 20 19:46:14.013503 sshd[4077]: Accepted publickey for core from 10.0.0.1 port 55464 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4 Jun 20 19:46:14.015102 sshd-session[4077]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:46:14.019581 systemd-logind[1532]: New session 11 of user core. Jun 20 19:46:14.032919 systemd[1]: Started session-11.scope - Session 11 of User core. Jun 20 19:46:14.158017 sshd[4079]: Connection closed by 10.0.0.1 port 55464 Jun 20 19:46:14.158325 sshd-session[4077]: pam_unix(sshd:session): session closed for user core Jun 20 19:46:14.174513 systemd[1]: sshd@10-10.0.0.133:22-10.0.0.1:55464.service: Deactivated successfully. Jun 20 19:46:14.176488 systemd[1]: session-11.scope: Deactivated successfully. Jun 20 19:46:14.177330 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Jun 20 19:46:14.180643 systemd[1]: Started sshd@11-10.0.0.133:22-10.0.0.1:55466.service - OpenSSH per-connection server daemon (10.0.0.1:55466). Jun 20 19:46:14.181482 systemd-logind[1532]: Removed session 11. Jun 20 19:46:14.244194 sshd[4093]: Accepted publickey for core from 10.0.0.1 port 55466 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4 Jun 20 19:46:14.245866 sshd-session[4093]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jun 20 19:46:14.250907 systemd-logind[1532]: New session 12 of user core. Jun 20 19:46:14.261973 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jun 20 19:46:14.408046 sshd[4095]: Connection closed by 10.0.0.1 port 55466
Jun 20 19:46:14.408532 sshd-session[4093]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:14.424226 systemd[1]: sshd@11-10.0.0.133:22-10.0.0.1:55466.service: Deactivated successfully.
Jun 20 19:46:14.428803 systemd[1]: session-12.scope: Deactivated successfully.
Jun 20 19:46:14.430685 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit.
Jun 20 19:46:14.434896 systemd[1]: Started sshd@12-10.0.0.133:22-10.0.0.1:55468.service - OpenSSH per-connection server daemon (10.0.0.1:55468).
Jun 20 19:46:14.435557 systemd-logind[1532]: Removed session 12.
Jun 20 19:46:14.501537 sshd[4107]: Accepted publickey for core from 10.0.0.1 port 55468 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:14.502840 sshd-session[4107]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:14.507440 systemd-logind[1532]: New session 13 of user core.
Jun 20 19:46:14.518965 systemd[1]: Started session-13.scope - Session 13 of User core.
Jun 20 19:46:14.629758 sshd[4109]: Connection closed by 10.0.0.1 port 55468
Jun 20 19:46:14.630146 sshd-session[4107]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:14.635173 systemd[1]: sshd@12-10.0.0.133:22-10.0.0.1:55468.service: Deactivated successfully.
Jun 20 19:46:14.637214 systemd[1]: session-13.scope: Deactivated successfully.
Jun 20 19:46:14.638086 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit.
Jun 20 19:46:14.639418 systemd-logind[1532]: Removed session 13.
Jun 20 19:46:19.646259 systemd[1]: Started sshd@13-10.0.0.133:22-10.0.0.1:55802.service - OpenSSH per-connection server daemon (10.0.0.1:55802).
Jun 20 19:46:19.694143 sshd[4122]: Accepted publickey for core from 10.0.0.1 port 55802 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:19.695341 sshd-session[4122]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:19.699161 systemd-logind[1532]: New session 14 of user core.
Jun 20 19:46:19.707941 systemd[1]: Started session-14.scope - Session 14 of User core.
Jun 20 19:46:19.808414 sshd[4124]: Connection closed by 10.0.0.1 port 55802
Jun 20 19:46:19.808682 sshd-session[4122]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:19.812759 systemd[1]: sshd@13-10.0.0.133:22-10.0.0.1:55802.service: Deactivated successfully.
Jun 20 19:46:19.814746 systemd[1]: session-14.scope: Deactivated successfully.
Jun 20 19:46:19.815545 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit.
Jun 20 19:46:19.816776 systemd-logind[1532]: Removed session 14.
Jun 20 19:46:24.821929 systemd[1]: Started sshd@14-10.0.0.133:22-10.0.0.1:55818.service - OpenSSH per-connection server daemon (10.0.0.1:55818).
Jun 20 19:46:24.892746 sshd[4137]: Accepted publickey for core from 10.0.0.1 port 55818 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:24.894556 sshd-session[4137]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:24.899167 systemd-logind[1532]: New session 15 of user core.
Jun 20 19:46:24.910019 systemd[1]: Started session-15.scope - Session 15 of User core.
Jun 20 19:46:25.017511 sshd[4139]: Connection closed by 10.0.0.1 port 55818
Jun 20 19:46:25.018085 sshd-session[4137]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:25.028347 systemd[1]: sshd@14-10.0.0.133:22-10.0.0.1:55818.service: Deactivated successfully.
Jun 20 19:46:25.030073 systemd[1]: session-15.scope: Deactivated successfully.
Jun 20 19:46:25.031129 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit.
Jun 20 19:46:25.034690 systemd[1]: Started sshd@15-10.0.0.133:22-10.0.0.1:55830.service - OpenSSH per-connection server daemon (10.0.0.1:55830).
Jun 20 19:46:25.035397 systemd-logind[1532]: Removed session 15.
Jun 20 19:46:25.098078 sshd[4153]: Accepted publickey for core from 10.0.0.1 port 55830 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:25.099514 sshd-session[4153]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:25.104073 systemd-logind[1532]: New session 16 of user core.
Jun 20 19:46:25.117958 systemd[1]: Started session-16.scope - Session 16 of User core.
Jun 20 19:46:25.291038 sshd[4155]: Connection closed by 10.0.0.1 port 55830
Jun 20 19:46:25.291393 sshd-session[4153]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:25.302393 systemd[1]: sshd@15-10.0.0.133:22-10.0.0.1:55830.service: Deactivated successfully.
Jun 20 19:46:25.304406 systemd[1]: session-16.scope: Deactivated successfully.
Jun 20 19:46:25.305320 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit.
Jun 20 19:46:25.308647 systemd[1]: Started sshd@16-10.0.0.133:22-10.0.0.1:55840.service - OpenSSH per-connection server daemon (10.0.0.1:55840).
Jun 20 19:46:25.309502 systemd-logind[1532]: Removed session 16.
Jun 20 19:46:25.370826 sshd[4166]: Accepted publickey for core from 10.0.0.1 port 55840 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:25.372557 sshd-session[4166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:25.377470 systemd-logind[1532]: New session 17 of user core.
Jun 20 19:46:25.388994 systemd[1]: Started session-17.scope - Session 17 of User core.
Jun 20 19:46:28.170079 sshd[4168]: Connection closed by 10.0.0.1 port 55840
Jun 20 19:46:28.170483 sshd-session[4166]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:28.179920 systemd[1]: sshd@16-10.0.0.133:22-10.0.0.1:55840.service: Deactivated successfully.
Jun 20 19:46:28.181996 systemd[1]: session-17.scope: Deactivated successfully.
Jun 20 19:46:28.182755 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit.
Jun 20 19:46:28.185349 systemd[1]: Started sshd@17-10.0.0.133:22-10.0.0.1:50478.service - OpenSSH per-connection server daemon (10.0.0.1:50478).
Jun 20 19:46:28.186321 systemd-logind[1532]: Removed session 17.
Jun 20 19:46:28.244388 sshd[4190]: Accepted publickey for core from 10.0.0.1 port 50478 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:28.246087 sshd-session[4190]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:28.251273 systemd-logind[1532]: New session 18 of user core.
Jun 20 19:46:28.258112 systemd[1]: Started session-18.scope - Session 18 of User core.
Jun 20 19:46:28.772709 sshd[4192]: Connection closed by 10.0.0.1 port 50478
Jun 20 19:46:28.773053 sshd-session[4190]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:28.788758 systemd[1]: sshd@17-10.0.0.133:22-10.0.0.1:50478.service: Deactivated successfully.
Jun 20 19:46:28.790753 systemd[1]: session-18.scope: Deactivated successfully.
Jun 20 19:46:28.791604 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit.
Jun 20 19:46:28.794775 systemd[1]: Started sshd@18-10.0.0.133:22-10.0.0.1:50494.service - OpenSSH per-connection server daemon (10.0.0.1:50494).
Jun 20 19:46:28.795529 systemd-logind[1532]: Removed session 18.
Jun 20 19:46:28.847769 sshd[4203]: Accepted publickey for core from 10.0.0.1 port 50494 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:28.849469 sshd-session[4203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:28.854088 systemd-logind[1532]: New session 19 of user core.
Jun 20 19:46:28.862028 systemd[1]: Started session-19.scope - Session 19 of User core.
Jun 20 19:46:28.969784 sshd[4205]: Connection closed by 10.0.0.1 port 50494
Jun 20 19:46:28.970165 sshd-session[4203]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:28.973882 systemd[1]: sshd@18-10.0.0.133:22-10.0.0.1:50494.service: Deactivated successfully.
Jun 20 19:46:28.976205 systemd[1]: session-19.scope: Deactivated successfully.
Jun 20 19:46:28.980493 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit.
Jun 20 19:46:28.982370 systemd-logind[1532]: Removed session 19.
Jun 20 19:46:33.986789 systemd[1]: Started sshd@19-10.0.0.133:22-10.0.0.1:50504.service - OpenSSH per-connection server daemon (10.0.0.1:50504).
Jun 20 19:46:34.049628 sshd[4222]: Accepted publickey for core from 10.0.0.1 port 50504 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:34.051270 sshd-session[4222]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:34.055474 systemd-logind[1532]: New session 20 of user core.
Jun 20 19:46:34.067953 systemd[1]: Started session-20.scope - Session 20 of User core.
Jun 20 19:46:34.180668 sshd[4224]: Connection closed by 10.0.0.1 port 50504
Jun 20 19:46:34.180976 sshd-session[4222]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:34.185961 systemd[1]: sshd@19-10.0.0.133:22-10.0.0.1:50504.service: Deactivated successfully.
Jun 20 19:46:34.188020 systemd[1]: session-20.scope: Deactivated successfully.
Jun 20 19:46:34.189019 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit.
Jun 20 19:46:34.190337 systemd-logind[1532]: Removed session 20.
Jun 20 19:46:39.205068 systemd[1]: Started sshd@20-10.0.0.133:22-10.0.0.1:54870.service - OpenSSH per-connection server daemon (10.0.0.1:54870).
Jun 20 19:46:39.268550 sshd[4243]: Accepted publickey for core from 10.0.0.1 port 54870 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:39.270189 sshd-session[4243]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:39.275116 systemd-logind[1532]: New session 21 of user core.
Jun 20 19:46:39.288036 systemd[1]: Started session-21.scope - Session 21 of User core.
Jun 20 19:46:39.400231 sshd[4245]: Connection closed by 10.0.0.1 port 54870
Jun 20 19:46:39.400590 sshd-session[4243]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:39.404715 systemd[1]: sshd@20-10.0.0.133:22-10.0.0.1:54870.service: Deactivated successfully.
Jun 20 19:46:39.406584 systemd[1]: session-21.scope: Deactivated successfully.
Jun 20 19:46:39.407332 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit.
Jun 20 19:46:39.408555 systemd-logind[1532]: Removed session 21.
Jun 20 19:46:44.150438 kubelet[2684]: E0620 19:46:44.150385 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jun 20 19:46:44.415500 systemd[1]: Started sshd@21-10.0.0.133:22-10.0.0.1:54876.service - OpenSSH per-connection server daemon (10.0.0.1:54876).
Jun 20 19:46:44.458871 sshd[4258]: Accepted publickey for core from 10.0.0.1 port 54876 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:44.460340 sshd-session[4258]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:44.464490 systemd-logind[1532]: New session 22 of user core.
Jun 20 19:46:44.471933 systemd[1]: Started session-22.scope - Session 22 of User core.
Jun 20 19:46:44.579415 sshd[4260]: Connection closed by 10.0.0.1 port 54876
Jun 20 19:46:44.579770 sshd-session[4258]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:44.583636 systemd[1]: sshd@21-10.0.0.133:22-10.0.0.1:54876.service: Deactivated successfully.
Jun 20 19:46:44.585650 systemd[1]: session-22.scope: Deactivated successfully.
Jun 20 19:46:44.586441 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit.
Jun 20 19:46:44.587691 systemd-logind[1532]: Removed session 22.
Jun 20 19:46:49.592703 systemd[1]: Started sshd@22-10.0.0.133:22-10.0.0.1:38426.service - OpenSSH per-connection server daemon (10.0.0.1:38426).
Jun 20 19:46:49.649436 sshd[4273]: Accepted publickey for core from 10.0.0.1 port 38426 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:49.650918 sshd-session[4273]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:49.655593 systemd-logind[1532]: New session 23 of user core.
Jun 20 19:46:49.668025 systemd[1]: Started session-23.scope - Session 23 of User core.
Jun 20 19:46:49.775685 sshd[4275]: Connection closed by 10.0.0.1 port 38426
Jun 20 19:46:49.775978 sshd-session[4273]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:49.780491 systemd[1]: sshd@22-10.0.0.133:22-10.0.0.1:38426.service: Deactivated successfully.
Jun 20 19:46:49.782510 systemd[1]: session-23.scope: Deactivated successfully.
Jun 20 19:46:49.783238 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit.
Jun 20 19:46:49.784491 systemd-logind[1532]: Removed session 23.
Jun 20 19:46:54.787386 systemd[1]: Started sshd@23-10.0.0.133:22-10.0.0.1:38440.service - OpenSSH per-connection server daemon (10.0.0.1:38440).
Jun 20 19:46:54.845759 sshd[4289]: Accepted publickey for core from 10.0.0.1 port 38440 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:54.847450 sshd-session[4289]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:54.851776 systemd-logind[1532]: New session 24 of user core.
Jun 20 19:46:54.866963 systemd[1]: Started session-24.scope - Session 24 of User core.
Jun 20 19:46:54.976415 sshd[4291]: Connection closed by 10.0.0.1 port 38440
Jun 20 19:46:54.976754 sshd-session[4289]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:54.991073 systemd[1]: sshd@23-10.0.0.133:22-10.0.0.1:38440.service: Deactivated successfully.
Jun 20 19:46:54.993703 systemd[1]: session-24.scope: Deactivated successfully.
Jun 20 19:46:54.994640 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit.
Jun 20 19:46:54.999402 systemd[1]: Started sshd@24-10.0.0.133:22-10.0.0.1:38454.service - OpenSSH per-connection server daemon (10.0.0.1:38454).
Jun 20 19:46:55.000209 systemd-logind[1532]: Removed session 24.
Jun 20 19:46:55.056457 sshd[4305]: Accepted publickey for core from 10.0.0.1 port 38454 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:55.057986 sshd-session[4305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:55.063444 systemd-logind[1532]: New session 25 of user core.
Jun 20 19:46:55.077142 systemd[1]: Started session-25.scope - Session 25 of User core.
Jun 20 19:46:56.425315 containerd[1559]: time="2025-06-20T19:46:56.425127449Z" level=info msg="StopContainer for \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" with timeout 30 (s)"
Jun 20 19:46:56.434286 containerd[1559]: time="2025-06-20T19:46:56.434255038Z" level=info msg="Stop container \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" with signal terminated"
Jun 20 19:46:56.445556 systemd[1]: cri-containerd-1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7.scope: Deactivated successfully.
Jun 20 19:46:56.447728 containerd[1559]: time="2025-06-20T19:46:56.447279548Z" level=info msg="received exit event container_id:\"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" id:\"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" pid:3259 exited_at:{seconds:1750448816 nanos:446977246}"
Jun 20 19:46:56.447728 containerd[1559]: time="2025-06-20T19:46:56.447374929Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" id:\"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" pid:3259 exited_at:{seconds:1750448816 nanos:446977246}"
Jun 20 19:46:56.461877 containerd[1559]: time="2025-06-20T19:46:56.461830268Z" level=error msg="failed to reload cni configuration after receiving fs change event(REMOVE \"/etc/cni/net.d/05-cilium.conf\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jun 20 19:46:56.462271 containerd[1559]: time="2025-06-20T19:46:56.462211821Z" level=info msg="TaskExit event in podsandbox handler container_id:\"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" id:\"0c0453a57e4a04b3533262cdbc8f49080f1aa461b61da026e35105d25a9085ad\" pid:4332 exited_at:{seconds:1750448816 nanos:461960616}"
Jun 20 19:46:56.464242 containerd[1559]: time="2025-06-20T19:46:56.464203634Z" level=info msg="StopContainer for \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" with timeout 2 (s)"
Jun 20 19:46:56.464565 containerd[1559]: time="2025-06-20T19:46:56.464546914Z" level=info msg="Stop container \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" with signal terminated"
Jun 20 19:46:56.471696 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7-rootfs.mount: Deactivated successfully.
Jun 20 19:46:56.472914 systemd-networkd[1480]: lxc_health: Link DOWN
Jun 20 19:46:56.472924 systemd-networkd[1480]: lxc_health: Lost carrier
Jun 20 19:46:56.489871 containerd[1559]: time="2025-06-20T19:46:56.489777484Z" level=info msg="StopContainer for \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" returns successfully"
Jun 20 19:46:56.490536 containerd[1559]: time="2025-06-20T19:46:56.490499993Z" level=info msg="StopPodSandbox for \"645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678\""
Jun 20 19:46:56.490605 containerd[1559]: time="2025-06-20T19:46:56.490580936Z" level=info msg="Container to stop \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:46:56.493224 systemd[1]: cri-containerd-728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6.scope: Deactivated successfully.
Jun 20 19:46:56.493570 systemd[1]: cri-containerd-728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6.scope: Consumed 6.257s CPU time, 125.2M memory peak, 604K read from disk, 14.5M written to disk.
Jun 20 19:46:56.494396 containerd[1559]: time="2025-06-20T19:46:56.494300111Z" level=info msg="received exit event container_id:\"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" id:\"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" pid:3342 exited_at:{seconds:1750448816 nanos:493857301}"
Jun 20 19:46:56.494540 containerd[1559]: time="2025-06-20T19:46:56.494309709Z" level=info msg="TaskExit event in podsandbox handler container_id:\"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" id:\"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" pid:3342 exited_at:{seconds:1750448816 nanos:493857301}"
Jun 20 19:46:56.497971 systemd[1]: cri-containerd-645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678.scope: Deactivated successfully.
Jun 20 19:46:56.498276 systemd[1]: cri-containerd-645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678.scope: Consumed 39ms CPU time, 5.9M memory peak, 1.4M read from disk.
Jun 20 19:46:56.502471 containerd[1559]: time="2025-06-20T19:46:56.502384945Z" level=info msg="TaskExit event in podsandbox handler container_id:\"645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678\" id:\"645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678\" pid:3050 exit_status:137 exited_at:{seconds:1750448816 nanos:502005626}"
Jun 20 19:46:56.517841 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6-rootfs.mount: Deactivated successfully.
Jun 20 19:46:56.531917 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678-rootfs.mount: Deactivated successfully.
Jun 20 19:46:56.550279 containerd[1559]: time="2025-06-20T19:46:56.550229848Z" level=info msg="shim disconnected" id=645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678 namespace=k8s.io
Jun 20 19:46:56.550279 containerd[1559]: time="2025-06-20T19:46:56.550278911Z" level=warning msg="cleaning up after shim disconnected" id=645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678 namespace=k8s.io
Jun 20 19:46:56.565563 containerd[1559]: time="2025-06-20T19:46:56.550289070Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:46:56.565843 containerd[1559]: time="2025-06-20T19:46:56.555323196Z" level=info msg="StopContainer for \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" returns successfully"
Jun 20 19:46:56.566374 containerd[1559]: time="2025-06-20T19:46:56.566337288Z" level=info msg="StopPodSandbox for \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\""
Jun 20 19:46:56.566465 containerd[1559]: time="2025-06-20T19:46:56.566417590Z" level=info msg="Container to stop \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:46:56.566465 containerd[1559]: time="2025-06-20T19:46:56.566433189Z" level=info msg="Container to stop \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:46:56.566465 containerd[1559]: time="2025-06-20T19:46:56.566444340Z" level=info msg="Container to stop \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:46:56.566465 containerd[1559]: time="2025-06-20T19:46:56.566454970Z" level=info msg="Container to stop \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:46:56.566465 containerd[1559]: time="2025-06-20T19:46:56.566466112Z" level=info msg="Container to stop \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jun 20 19:46:56.574604 systemd[1]: cri-containerd-a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232.scope: Deactivated successfully.
Jun 20 19:46:56.592722 containerd[1559]: time="2025-06-20T19:46:56.591661855Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" id:\"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" pid:2837 exit_status:137 exited_at:{seconds:1750448816 nanos:581096313}"
Jun 20 19:46:56.594543 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678-shm.mount: Deactivated successfully.
Jun 20 19:46:56.598611 containerd[1559]: time="2025-06-20T19:46:56.598569169Z" level=info msg="received exit event sandbox_id:\"645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678\" exit_status:137 exited_at:{seconds:1750448816 nanos:502005626}"
Jun 20 19:46:56.603566 containerd[1559]: time="2025-06-20T19:46:56.603515789Z" level=info msg="TearDown network for sandbox \"645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678\" successfully"
Jun 20 19:46:56.603566 containerd[1559]: time="2025-06-20T19:46:56.603545364Z" level=info msg="StopPodSandbox for \"645844e97badd981bc3c97cff82e725598c7e497db2d4e2ae1338cb463101678\" returns successfully"
Jun 20 19:46:56.605031 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232-rootfs.mount: Deactivated successfully.
Jun 20 19:46:56.609459 containerd[1559]: time="2025-06-20T19:46:56.609416885Z" level=info msg="received exit event sandbox_id:\"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" exit_status:137 exited_at:{seconds:1750448816 nanos:581096313}"
Jun 20 19:46:56.609860 containerd[1559]: time="2025-06-20T19:46:56.609761940Z" level=info msg="shim disconnected" id=a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232 namespace=k8s.io
Jun 20 19:46:56.609860 containerd[1559]: time="2025-06-20T19:46:56.609804450Z" level=warning msg="cleaning up after shim disconnected" id=a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232 namespace=k8s.io
Jun 20 19:46:56.609860 containerd[1559]: time="2025-06-20T19:46:56.609833816Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jun 20 19:46:56.611018 containerd[1559]: time="2025-06-20T19:46:56.610973334Z" level=info msg="TearDown network for sandbox \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" successfully"
Jun 20 19:46:56.611018 containerd[1559]: time="2025-06-20T19:46:56.611003531Z" level=info msg="StopPodSandbox for \"a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232\" returns successfully"
Jun 20 19:46:56.741970 kubelet[2684]: I0620 19:46:56.741801 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-cgroup\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.741970 kubelet[2684]: I0620 19:46:56.741882 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-etc-cni-netd\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.741970 kubelet[2684]: I0620 19:46:56.741900 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cni-path\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.741970 kubelet[2684]: I0620 19:46:56.741919 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-host-proc-sys-kernel\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.741970 kubelet[2684]: I0620 19:46:56.741929 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.742668 kubelet[2684]: I0620 19:46:56.741996 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cni-path" (OuterVolumeSpecName: "cni-path") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.742668 kubelet[2684]: I0620 19:46:56.742013 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.742668 kubelet[2684]: I0620 19:46:56.742028 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.742668 kubelet[2684]: I0620 19:46:56.741952 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da868121-5b31-40f2-9b37-749d547f61d0-cilium-config-path\") pod \"da868121-5b31-40f2-9b37-749d547f61d0\" (UID: \"da868121-5b31-40f2-9b37-749d547f61d0\") "
Jun 20 19:46:56.742668 kubelet[2684]: I0620 19:46:56.742531 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-lib-modules\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742802 kubelet[2684]: I0620 19:46:56.742568 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hgczs\" (UniqueName: \"kubernetes.io/projected/11cdcc50-16b5-4a26-8270-3396efa20b13-kube-api-access-hgczs\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742802 kubelet[2684]: I0620 19:46:56.742591 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-config-path\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742802 kubelet[2684]: I0620 19:46:56.742629 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11cdcc50-16b5-4a26-8270-3396efa20b13-clustermesh-secrets\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742802 kubelet[2684]: I0620 19:46:56.742649 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-run\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742802 kubelet[2684]: I0620 19:46:56.742666 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-bpf-maps\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742802 kubelet[2684]: I0620 19:46:56.742684 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-hostproc\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742999 kubelet[2684]: I0620 19:46:56.742709 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-host-proc-sys-net\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742999 kubelet[2684]: I0620 19:46:56.742730 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scn4v\" (UniqueName: \"kubernetes.io/projected/da868121-5b31-40f2-9b37-749d547f61d0-kube-api-access-scn4v\") pod \"da868121-5b31-40f2-9b37-749d547f61d0\" (UID: \"da868121-5b31-40f2-9b37-749d547f61d0\") "
Jun 20 19:46:56.742999 kubelet[2684]: I0620 19:46:56.742754 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11cdcc50-16b5-4a26-8270-3396efa20b13-hubble-tls\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742999 kubelet[2684]: I0620 19:46:56.742775 2684 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-xtables-lock\") pod \"11cdcc50-16b5-4a26-8270-3396efa20b13\" (UID: \"11cdcc50-16b5-4a26-8270-3396efa20b13\") "
Jun 20 19:46:56.742999 kubelet[2684]: I0620 19:46:56.742837 2684 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-host-proc-sys-kernel\") on node \"localhost\" DevicePath \"\""
Jun 20 19:46:56.742999 kubelet[2684]: I0620 19:46:56.742852 2684 reconciler_common.go:293] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-cgroup\") on node \"localhost\" DevicePath \"\""
Jun 20 19:46:56.742999 kubelet[2684]: I0620 19:46:56.742864 2684 reconciler_common.go:293] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-etc-cni-netd\") on node \"localhost\" DevicePath \"\""
Jun 20 19:46:56.743184 kubelet[2684]: I0620 19:46:56.742875 2684 reconciler_common.go:293] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cni-path\") on node \"localhost\" DevicePath \"\""
Jun 20 19:46:56.743184 kubelet[2684]: I0620 19:46:56.742924 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.743184 kubelet[2684]: I0620 19:46:56.742954 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.743271 kubelet[2684]: I0620 19:46:56.743204 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.743914 kubelet[2684]: I0620 19:46:56.743398 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.743914 kubelet[2684]: I0620 19:46:56.743470 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-hostproc" (OuterVolumeSpecName: "hostproc") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.746336 kubelet[2684]: I0620 19:46:56.746306 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jun 20 19:46:56.746387 kubelet[2684]: I0620 19:46:56.746348 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jun 20 19:46:56.746785 kubelet[2684]: I0620 19:46:56.746754 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/11cdcc50-16b5-4a26-8270-3396efa20b13-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jun 20 19:46:56.747419 kubelet[2684]: I0620 19:46:56.747400 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/da868121-5b31-40f2-9b37-749d547f61d0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "da868121-5b31-40f2-9b37-749d547f61d0" (UID: "da868121-5b31-40f2-9b37-749d547f61d0"). InnerVolumeSpecName "cilium-config-path".
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jun 20 19:46:56.748775 kubelet[2684]: I0620 19:46:56.748727 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11cdcc50-16b5-4a26-8270-3396efa20b13-kube-api-access-hgczs" (OuterVolumeSpecName: "kube-api-access-hgczs") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "kube-api-access-hgczs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:46:56.749269 kubelet[2684]: I0620 19:46:56.749231 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/da868121-5b31-40f2-9b37-749d547f61d0-kube-api-access-scn4v" (OuterVolumeSpecName: "kube-api-access-scn4v") pod "da868121-5b31-40f2-9b37-749d547f61d0" (UID: "da868121-5b31-40f2-9b37-749d547f61d0"). InnerVolumeSpecName "kube-api-access-scn4v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:46:56.749575 kubelet[2684]: I0620 19:46:56.749555 2684 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11cdcc50-16b5-4a26-8270-3396efa20b13-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "11cdcc50-16b5-4a26-8270-3396efa20b13" (UID: "11cdcc50-16b5-4a26-8270-3396efa20b13"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jun 20 19:46:56.843990 kubelet[2684]: I0620 19:46:56.843932 2684 reconciler_common.go:293] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/11cdcc50-16b5-4a26-8270-3396efa20b13-hubble-tls\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.843990 kubelet[2684]: I0620 19:46:56.844000 2684 reconciler_common.go:293] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-host-proc-sys-net\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844169 kubelet[2684]: I0620 19:46:56.844016 2684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scn4v\" (UniqueName: \"kubernetes.io/projected/da868121-5b31-40f2-9b37-749d547f61d0-kube-api-access-scn4v\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844169 kubelet[2684]: I0620 19:46:56.844028 2684 reconciler_common.go:293] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-xtables-lock\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844169 kubelet[2684]: I0620 19:46:56.844036 2684 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/da868121-5b31-40f2-9b37-749d547f61d0-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844169 kubelet[2684]: I0620 19:46:56.844044 2684 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-lib-modules\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844169 kubelet[2684]: I0620 19:46:56.844051 2684 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hgczs\" (UniqueName: \"kubernetes.io/projected/11cdcc50-16b5-4a26-8270-3396efa20b13-kube-api-access-hgczs\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844169 
kubelet[2684]: I0620 19:46:56.844058 2684 reconciler_common.go:293] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-config-path\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844169 kubelet[2684]: I0620 19:46:56.844066 2684 reconciler_common.go:293] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/11cdcc50-16b5-4a26-8270-3396efa20b13-clustermesh-secrets\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844169 kubelet[2684]: I0620 19:46:56.844073 2684 reconciler_common.go:293] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-hostproc\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844414 kubelet[2684]: I0620 19:46:56.844081 2684 reconciler_common.go:293] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-cilium-run\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:56.844414 kubelet[2684]: I0620 19:46:56.844090 2684 reconciler_common.go:293] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/11cdcc50-16b5-4a26-8270-3396efa20b13-bpf-maps\") on node \"localhost\" DevicePath \"\"" Jun 20 19:46:57.362555 kubelet[2684]: I0620 19:46:57.362502 2684 scope.go:117] "RemoveContainer" containerID="1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7" Jun 20 19:46:57.364459 containerd[1559]: time="2025-06-20T19:46:57.364380576Z" level=info msg="RemoveContainer for \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\"" Jun 20 19:46:57.370469 systemd[1]: Removed slice kubepods-besteffort-podda868121_5b31_40f2_9b37_749d547f61d0.slice - libcontainer container kubepods-besteffort-podda868121_5b31_40f2_9b37_749d547f61d0.slice. 
Jun 20 19:46:57.370569 systemd[1]: kubepods-besteffort-podda868121_5b31_40f2_9b37_749d547f61d0.slice: Consumed 344ms CPU time, 30.8M memory peak, 1.4M read from disk, 4K written to disk.
Jun 20 19:46:57.374145 systemd[1]: Removed slice kubepods-burstable-pod11cdcc50_16b5_4a26_8270_3396efa20b13.slice - libcontainer container kubepods-burstable-pod11cdcc50_16b5_4a26_8270_3396efa20b13.slice.
Jun 20 19:46:57.374831 containerd[1559]: time="2025-06-20T19:46:57.374530431Z" level=info msg="RemoveContainer for \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" returns successfully"
Jun 20 19:46:57.374626 systemd[1]: kubepods-burstable-pod11cdcc50_16b5_4a26_8270_3396efa20b13.slice: Consumed 6.364s CPU time, 125.6M memory peak, 608K read from disk, 14.5M written to disk.
Jun 20 19:46:57.379247 kubelet[2684]: I0620 19:46:57.379210 2684 scope.go:117] "RemoveContainer" containerID="1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7"
Jun 20 19:46:57.379530 containerd[1559]: time="2025-06-20T19:46:57.379474151Z" level=error msg="ContainerStatus for \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\": not found"
Jun 20 19:46:57.383553 kubelet[2684]: E0620 19:46:57.383526 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\": not found" containerID="1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7"
Jun 20 19:46:57.383647 kubelet[2684]: I0620 19:46:57.383562 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7"} err="failed to get container status \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\": rpc error: code = NotFound desc = an error occurred when try to find container \"1f13eda7ef9fab81724a413e9b23587f0ca67205a0897b5f2503483db59765a7\": not found"
Jun 20 19:46:57.383647 kubelet[2684]: I0620 19:46:57.383626 2684 scope.go:117] "RemoveContainer" containerID="728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6"
Jun 20 19:46:57.385596 containerd[1559]: time="2025-06-20T19:46:57.385569382Z" level=info msg="RemoveContainer for \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\""
Jun 20 19:46:57.391448 containerd[1559]: time="2025-06-20T19:46:57.391362872Z" level=info msg="RemoveContainer for \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" returns successfully"
Jun 20 19:46:57.391597 kubelet[2684]: I0620 19:46:57.391573 2684 scope.go:117] "RemoveContainer" containerID="b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768"
Jun 20 19:46:57.393337 containerd[1559]: time="2025-06-20T19:46:57.393292528Z" level=info msg="RemoveContainer for \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\""
Jun 20 19:46:57.398497 containerd[1559]: time="2025-06-20T19:46:57.398452597Z" level=info msg="RemoveContainer for \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\" returns successfully"
Jun 20 19:46:57.398672 kubelet[2684]: I0620 19:46:57.398640 2684 scope.go:117] "RemoveContainer" containerID="435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1"
Jun 20 19:46:57.400944 containerd[1559]: time="2025-06-20T19:46:57.400919532Z" level=info msg="RemoveContainer for \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\""
Jun 20 19:46:57.405534 containerd[1559]: time="2025-06-20T19:46:57.405507357Z" level=info msg="RemoveContainer for \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\" returns successfully"
Jun 20 19:46:57.405695 kubelet[2684]: I0620 19:46:57.405660 2684 scope.go:117] "RemoveContainer" containerID="997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea"
Jun 20 19:46:57.406857 containerd[1559]: time="2025-06-20T19:46:57.406788224Z" level=info msg="RemoveContainer for \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\""
Jun 20 19:46:57.411021 containerd[1559]: time="2025-06-20T19:46:57.410988224Z" level=info msg="RemoveContainer for \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\" returns successfully"
Jun 20 19:46:57.411159 kubelet[2684]: I0620 19:46:57.411134 2684 scope.go:117] "RemoveContainer" containerID="100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e"
Jun 20 19:46:57.412319 containerd[1559]: time="2025-06-20T19:46:57.412302445Z" level=info msg="RemoveContainer for \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\""
Jun 20 19:46:57.415603 containerd[1559]: time="2025-06-20T19:46:57.415581739Z" level=info msg="RemoveContainer for \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\" returns successfully"
Jun 20 19:46:57.415784 kubelet[2684]: I0620 19:46:57.415754 2684 scope.go:117] "RemoveContainer" containerID="728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6"
Jun 20 19:46:57.416077 containerd[1559]: time="2025-06-20T19:46:57.416038896Z" level=error msg="ContainerStatus for \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\": not found"
Jun 20 19:46:57.416191 kubelet[2684]: E0620 19:46:57.416166 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\": not found" containerID="728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6"
Jun 20 19:46:57.416254 kubelet[2684]: I0620 19:46:57.416194 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6"} err="failed to get container status \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\": rpc error: code = NotFound desc = an error occurred when try to find container \"728f7814af8be6c673f13ce1b4aa0e441b4df9858ba95da0abc8efa4ce1cffa6\": not found"
Jun 20 19:46:57.416254 kubelet[2684]: I0620 19:46:57.416225 2684 scope.go:117] "RemoveContainer" containerID="b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768"
Jun 20 19:46:57.416395 containerd[1559]: time="2025-06-20T19:46:57.416360245Z" level=error msg="ContainerStatus for \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\": not found"
Jun 20 19:46:57.416599 kubelet[2684]: E0620 19:46:57.416567 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\": not found" containerID="b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768"
Jun 20 19:46:57.416599 kubelet[2684]: I0620 19:46:57.416588 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768"} err="failed to get container status \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\": rpc error: code = NotFound desc = an error occurred when try to find container \"b6db81ee7051beccac0fc432f147bdfae526b7667c52ef1421cecfb64a3aa768\": not found"
Jun 20 19:46:57.416599 kubelet[2684]: I0620 19:46:57.416600 2684 scope.go:117] "RemoveContainer" containerID="435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1"
Jun 20 19:46:57.416760 containerd[1559]: time="2025-06-20T19:46:57.416730326Z" level=error msg="ContainerStatus for \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\": not found"
Jun 20 19:46:57.416899 kubelet[2684]: E0620 19:46:57.416873 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\": not found" containerID="435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1"
Jun 20 19:46:57.416990 kubelet[2684]: I0620 19:46:57.416903 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1"} err="failed to get container status \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\": rpc error: code = NotFound desc = an error occurred when try to find container \"435f71bdd4b1b1816e5341992382fe1b23c446847d375e22e3e0b1c987396fc1\": not found"
Jun 20 19:46:57.416990 kubelet[2684]: I0620 19:46:57.416931 2684 scope.go:117] "RemoveContainer" containerID="997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea"
Jun 20 19:46:57.417124 containerd[1559]: time="2025-06-20T19:46:57.417058768Z" level=error msg="ContainerStatus for \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\": not found"
Jun 20 19:46:57.417224 kubelet[2684]: E0620 19:46:57.417189 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\": not found" containerID="997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea"
Jun 20 19:46:57.417269 kubelet[2684]: I0620 19:46:57.417219 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea"} err="failed to get container status \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\": rpc error: code = NotFound desc = an error occurred when try to find container \"997729666f07d1457102a98a6e8019b64dd7d36ca5048e006dd67e0510e0e7ea\": not found"
Jun 20 19:46:57.417269 kubelet[2684]: I0620 19:46:57.417234 2684 scope.go:117] "RemoveContainer" containerID="100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e"
Jun 20 19:46:57.417411 containerd[1559]: time="2025-06-20T19:46:57.417378855Z" level=error msg="ContainerStatus for \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\": not found"
Jun 20 19:46:57.417526 kubelet[2684]: E0620 19:46:57.417508 2684 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\": not found" containerID="100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e"
Jun 20 19:46:57.417609 kubelet[2684]: I0620 19:46:57.417574 2684 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e"} err="failed to get container status \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\": rpc error: code = NotFound desc = an error occurred when try to find container \"100fc08b26ef1d48f99f6c8c9c2fe21448c57f8c6383abc7e8127dbe0b2b788e\": not found"
Jun 20 19:46:57.471688 systemd[1]: var-lib-kubelet-pods-da868121\x2d5b31\x2d40f2\x2d9b37\x2d749d547f61d0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dscn4v.mount: Deactivated successfully.
Jun 20 19:46:57.471837 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a46e43c12cbf4b6e31df935eefef5a3653c3d251a2e1a79f7f5f8f2fadc22232-shm.mount: Deactivated successfully.
Jun 20 19:46:57.471933 systemd[1]: var-lib-kubelet-pods-11cdcc50\x2d16b5\x2d4a26\x2d8270\x2d3396efa20b13-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhgczs.mount: Deactivated successfully.
Jun 20 19:46:57.472037 systemd[1]: var-lib-kubelet-pods-11cdcc50\x2d16b5\x2d4a26\x2d8270\x2d3396efa20b13-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Jun 20 19:46:57.472130 systemd[1]: var-lib-kubelet-pods-11cdcc50\x2d16b5\x2d4a26\x2d8270\x2d3396efa20b13-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Jun 20 19:46:58.153838 kubelet[2684]: I0620 19:46:58.153077 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11cdcc50-16b5-4a26-8270-3396efa20b13" path="/var/lib/kubelet/pods/11cdcc50-16b5-4a26-8270-3396efa20b13/volumes"
Jun 20 19:46:58.154242 kubelet[2684]: I0620 19:46:58.154225 2684 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="da868121-5b31-40f2-9b37-749d547f61d0" path="/var/lib/kubelet/pods/da868121-5b31-40f2-9b37-749d547f61d0/volumes"
Jun 20 19:46:58.375452 sshd[4307]: Connection closed by 10.0.0.1 port 38454
Jun 20 19:46:58.375969 sshd-session[4305]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:58.384987 systemd[1]: sshd@24-10.0.0.133:22-10.0.0.1:38454.service: Deactivated successfully.
Jun 20 19:46:58.387515 systemd[1]: session-25.scope: Deactivated successfully.
Jun 20 19:46:58.388413 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit.
Jun 20 19:46:58.391901 systemd[1]: Started sshd@25-10.0.0.133:22-10.0.0.1:34864.service - OpenSSH per-connection server daemon (10.0.0.1:34864).
Jun 20 19:46:58.392789 systemd-logind[1532]: Removed session 25.
Jun 20 19:46:58.442915 sshd[4457]: Accepted publickey for core from 10.0.0.1 port 34864 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:58.444300 sshd-session[4457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:58.448654 systemd-logind[1532]: New session 26 of user core.
Jun 20 19:46:58.457957 systemd[1]: Started session-26.scope - Session 26 of User core.
Jun 20 19:46:58.832778 sshd[4459]: Connection closed by 10.0.0.1 port 34864
Jun 20 19:46:58.833141 sshd-session[4457]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:58.846437 systemd[1]: sshd@25-10.0.0.133:22-10.0.0.1:34864.service: Deactivated successfully.
Jun 20 19:46:58.848689 systemd[1]: session-26.scope: Deactivated successfully.
Jun 20 19:46:58.850783 systemd-logind[1532]: Session 26 logged out. Waiting for processes to exit.
Jun 20 19:46:58.854586 systemd-logind[1532]: Removed session 26.
Jun 20 19:46:58.856884 kubelet[2684]: E0620 19:46:58.856849 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="11cdcc50-16b5-4a26-8270-3396efa20b13" containerName="mount-cgroup"
Jun 20 19:46:58.856884 kubelet[2684]: E0620 19:46:58.856880 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="11cdcc50-16b5-4a26-8270-3396efa20b13" containerName="clean-cilium-state"
Jun 20 19:46:58.856884 kubelet[2684]: E0620 19:46:58.856887 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="11cdcc50-16b5-4a26-8270-3396efa20b13" containerName="cilium-agent"
Jun 20 19:46:58.856884 kubelet[2684]: E0620 19:46:58.856895 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="da868121-5b31-40f2-9b37-749d547f61d0" containerName="cilium-operator"
Jun 20 19:46:58.856884 kubelet[2684]: E0620 19:46:58.856902 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="11cdcc50-16b5-4a26-8270-3396efa20b13" containerName="apply-sysctl-overwrites"
Jun 20 19:46:58.857070 kubelet[2684]: E0620 19:46:58.856908 2684 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="11cdcc50-16b5-4a26-8270-3396efa20b13" containerName="mount-bpf-fs"
Jun 20 19:46:58.857070 kubelet[2684]: I0620 19:46:58.856930 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="11cdcc50-16b5-4a26-8270-3396efa20b13" containerName="cilium-agent"
Jun 20 19:46:58.857070 kubelet[2684]: I0620 19:46:58.856935 2684 memory_manager.go:354] "RemoveStaleState removing state" podUID="da868121-5b31-40f2-9b37-749d547f61d0" containerName="cilium-operator"
Jun 20 19:46:58.857235 systemd[1]: Started sshd@26-10.0.0.133:22-10.0.0.1:34878.service - OpenSSH per-connection server daemon (10.0.0.1:34878).
Jun 20 19:46:58.879273 systemd[1]: Created slice kubepods-burstable-pod26a7c484_a02c_4a0e_9a40_355834480ad3.slice - libcontainer container kubepods-burstable-pod26a7c484_a02c_4a0e_9a40_355834480ad3.slice.
Jun 20 19:46:58.913755 sshd[4471]: Accepted publickey for core from 10.0.0.1 port 34878 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:58.915238 sshd-session[4471]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:58.919311 systemd-logind[1532]: New session 27 of user core.
Jun 20 19:46:58.933924 systemd[1]: Started session-27.scope - Session 27 of User core.
Jun 20 19:46:58.956785 kubelet[2684]: I0620 19:46:58.956732 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/26a7c484-a02c-4a0e-9a40-355834480ad3-clustermesh-secrets\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.956785 kubelet[2684]: I0620 19:46:58.956762 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/26a7c484-a02c-4a0e-9a40-355834480ad3-hubble-tls\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.956785 kubelet[2684]: I0620 19:46:58.956782 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-cilium-run\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.956785 kubelet[2684]: I0620 19:46:58.956795 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-xtables-lock\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.956785 kubelet[2684]: I0620 19:46:58.956821 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-host-proc-sys-net\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957110 kubelet[2684]: I0620 19:46:58.956835 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-hostproc\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957110 kubelet[2684]: I0620 19:46:58.956851 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/26a7c484-a02c-4a0e-9a40-355834480ad3-cilium-config-path\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957110 kubelet[2684]: I0620 19:46:58.956864 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tbprc\" (UniqueName: \"kubernetes.io/projected/26a7c484-a02c-4a0e-9a40-355834480ad3-kube-api-access-tbprc\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957110 kubelet[2684]: I0620 19:46:58.956885 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-host-proc-sys-kernel\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957110 kubelet[2684]: I0620 19:46:58.956912 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-bpf-maps\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957271 kubelet[2684]: I0620 19:46:58.956929 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/26a7c484-a02c-4a0e-9a40-355834480ad3-cilium-ipsec-secrets\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957271 kubelet[2684]: I0620 19:46:58.956951 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-lib-modules\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957271 kubelet[2684]: I0620 19:46:58.956964 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-etc-cni-netd\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957271 kubelet[2684]: I0620 19:46:58.956991 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-cilium-cgroup\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.957271 kubelet[2684]: I0620 19:46:58.957011 2684 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/26a7c484-a02c-4a0e-9a40-355834480ad3-cni-path\") pod \"cilium-fx697\" (UID: \"26a7c484-a02c-4a0e-9a40-355834480ad3\") " pod="kube-system/cilium-fx697"
Jun 20 19:46:58.986338 sshd[4473]: Connection closed by 10.0.0.1 port 34878
Jun 20 19:46:58.986681 sshd-session[4471]: pam_unix(sshd:session): session closed for user core
Jun 20 19:46:58.998669 systemd[1]: sshd@26-10.0.0.133:22-10.0.0.1:34878.service: Deactivated successfully.
Jun 20 19:46:59.000822 systemd[1]: session-27.scope: Deactivated successfully.
Jun 20 19:46:59.001544 systemd-logind[1532]: Session 27 logged out. Waiting for processes to exit.
Jun 20 19:46:59.004624 systemd[1]: Started sshd@27-10.0.0.133:22-10.0.0.1:34890.service - OpenSSH per-connection server daemon (10.0.0.1:34890).
Jun 20 19:46:59.005550 systemd-logind[1532]: Removed session 27.
Jun 20 19:46:59.062105 sshd[4480]: Accepted publickey for core from 10.0.0.1 port 34890 ssh2: RSA SHA256:/nHAorWV7i/6K+ZJ86Yj9USNPlOARznL9tuQD88B/d4
Jun 20 19:46:59.064769 sshd-session[4480]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jun 20 19:46:59.076938 systemd-logind[1532]: New session 28 of user core.
Jun 20 19:46:59.092919 systemd[1]: Started session-28.scope - Session 28 of User core.
Jun 20 19:46:59.150468 kubelet[2684]: E0620 19:46:59.150432 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:59.182791 kubelet[2684]: E0620 19:46:59.182724 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:59.183400 containerd[1559]: time="2025-06-20T19:46:59.183287528Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fx697,Uid:26a7c484-a02c-4a0e-9a40-355834480ad3,Namespace:kube-system,Attempt:0,}" Jun 20 19:46:59.200490 containerd[1559]: time="2025-06-20T19:46:59.200443653Z" level=info msg="connecting to shim a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7" address="unix:///run/containerd/s/994cc1262ed1466486edbc9f4a312dcda3f255f897288ad4fa8f8f89b334fc91" namespace=k8s.io protocol=ttrpc version=3 Jun 20 19:46:59.223023 systemd[1]: Started cri-containerd-a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7.scope - libcontainer container a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7. 
Jun 20 19:46:59.245601 containerd[1559]: time="2025-06-20T19:46:59.245565242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fx697,Uid:26a7c484-a02c-4a0e-9a40-355834480ad3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\"" Jun 20 19:46:59.246239 kubelet[2684]: E0620 19:46:59.246217 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:46:59.248034 containerd[1559]: time="2025-06-20T19:46:59.247991715Z" level=info msg="CreateContainer within sandbox \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jun 20 19:46:59.269797 containerd[1559]: time="2025-06-20T19:46:59.269750372Z" level=info msg="Container 39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:46:59.283456 containerd[1559]: time="2025-06-20T19:46:59.283398252Z" level=info msg="CreateContainer within sandbox \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab\"" Jun 20 19:46:59.284079 containerd[1559]: time="2025-06-20T19:46:59.283860961Z" level=info msg="StartContainer for \"39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab\"" Jun 20 19:46:59.285056 containerd[1559]: time="2025-06-20T19:46:59.285030319Z" level=info msg="connecting to shim 39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab" address="unix:///run/containerd/s/994cc1262ed1466486edbc9f4a312dcda3f255f897288ad4fa8f8f89b334fc91" protocol=ttrpc version=3 Jun 20 19:46:59.315982 systemd[1]: Started cri-containerd-39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab.scope - libcontainer 
container 39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab. Jun 20 19:46:59.347352 containerd[1559]: time="2025-06-20T19:46:59.347246998Z" level=info msg="StartContainer for \"39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab\" returns successfully" Jun 20 19:46:59.356697 systemd[1]: cri-containerd-39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab.scope: Deactivated successfully. Jun 20 19:46:59.358949 containerd[1559]: time="2025-06-20T19:46:59.358917226Z" level=info msg="received exit event container_id:\"39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab\" id:\"39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab\" pid:4551 exited_at:{seconds:1750448819 nanos:358605234}" Jun 20 19:46:59.359177 containerd[1559]: time="2025-06-20T19:46:59.359001687Z" level=info msg="TaskExit event in podsandbox handler container_id:\"39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab\" id:\"39b60d7893f17246998f736e3711fe062bcd8035eb2d284621a363a1443363ab\" pid:4551 exited_at:{seconds:1750448819 nanos:358605234}" Jun 20 19:46:59.375188 kubelet[2684]: E0620 19:46:59.375152 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:00.203835 kubelet[2684]: E0620 19:47:00.203779 2684 kubelet.go:2902] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jun 20 19:47:00.381919 kubelet[2684]: E0620 19:47:00.381886 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:00.383648 containerd[1559]: time="2025-06-20T19:47:00.383595148Z" level=info msg="CreateContainer within sandbox 
\"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jun 20 19:47:00.394915 containerd[1559]: time="2025-06-20T19:47:00.394849829Z" level=info msg="Container 032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:47:00.405267 containerd[1559]: time="2025-06-20T19:47:00.405222597Z" level=info msg="CreateContainer within sandbox \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb\"" Jun 20 19:47:00.405649 containerd[1559]: time="2025-06-20T19:47:00.405626905Z" level=info msg="StartContainer for \"032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb\"" Jun 20 19:47:00.406399 containerd[1559]: time="2025-06-20T19:47:00.406374192Z" level=info msg="connecting to shim 032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb" address="unix:///run/containerd/s/994cc1262ed1466486edbc9f4a312dcda3f255f897288ad4fa8f8f89b334fc91" protocol=ttrpc version=3 Jun 20 19:47:00.426949 systemd[1]: Started cri-containerd-032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb.scope - libcontainer container 032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb. Jun 20 19:47:00.457601 containerd[1559]: time="2025-06-20T19:47:00.457485611Z" level=info msg="StartContainer for \"032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb\" returns successfully" Jun 20 19:47:00.463417 systemd[1]: cri-containerd-032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb.scope: Deactivated successfully. 
Jun 20 19:47:00.463805 containerd[1559]: time="2025-06-20T19:47:00.463752094Z" level=info msg="TaskExit event in podsandbox handler container_id:\"032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb\" id:\"032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb\" pid:4597 exited_at:{seconds:1750448820 nanos:463539510}" Jun 20 19:47:00.463805 containerd[1559]: time="2025-06-20T19:47:00.463760330Z" level=info msg="received exit event container_id:\"032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb\" id:\"032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb\" pid:4597 exited_at:{seconds:1750448820 nanos:463539510}" Jun 20 19:47:00.483384 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-032087ba19ebd5927fc02ef3bf2968b1757ee3a2a3fa65a2556f0800f605f4bb-rootfs.mount: Deactivated successfully. Jun 20 19:47:01.384356 kubelet[2684]: E0620 19:47:01.384330 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:01.386361 containerd[1559]: time="2025-06-20T19:47:01.386213374Z" level=info msg="CreateContainer within sandbox \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Jun 20 19:47:01.764536 containerd[1559]: time="2025-06-20T19:47:01.764485790Z" level=info msg="Container 9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:47:01.774051 containerd[1559]: time="2025-06-20T19:47:01.774003503Z" level=info msg="CreateContainer within sandbox \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f\"" Jun 20 19:47:01.774458 containerd[1559]: time="2025-06-20T19:47:01.774431064Z" 
level=info msg="StartContainer for \"9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f\"" Jun 20 19:47:01.775958 containerd[1559]: time="2025-06-20T19:47:01.775923840Z" level=info msg="connecting to shim 9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f" address="unix:///run/containerd/s/994cc1262ed1466486edbc9f4a312dcda3f255f897288ad4fa8f8f89b334fc91" protocol=ttrpc version=3 Jun 20 19:47:01.804949 systemd[1]: Started cri-containerd-9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f.scope - libcontainer container 9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f. Jun 20 19:47:01.841641 containerd[1559]: time="2025-06-20T19:47:01.841603626Z" level=info msg="StartContainer for \"9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f\" returns successfully" Jun 20 19:47:01.841851 systemd[1]: cri-containerd-9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f.scope: Deactivated successfully. Jun 20 19:47:01.842963 containerd[1559]: time="2025-06-20T19:47:01.842933111Z" level=info msg="received exit event container_id:\"9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f\" id:\"9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f\" pid:4641 exited_at:{seconds:1750448821 nanos:842703715}" Jun 20 19:47:01.843129 containerd[1559]: time="2025-06-20T19:47:01.842997713Z" level=info msg="TaskExit event in podsandbox handler container_id:\"9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f\" id:\"9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f\" pid:4641 exited_at:{seconds:1750448821 nanos:842703715}" Jun 20 19:47:02.061504 kubelet[2684]: I0620 19:47:02.061389 2684 setters.go:600] "Node became not ready" node="localhost" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-06-20T19:47:02Z","lastTransitionTime":"2025-06-20T19:47:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Jun 20 19:47:02.150393 kubelet[2684]: E0620 19:47:02.150351 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:02.389277 kubelet[2684]: E0620 19:47:02.389246 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:02.392199 containerd[1559]: time="2025-06-20T19:47:02.392150541Z" level=info msg="CreateContainer within sandbox \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Jun 20 19:47:02.400443 containerd[1559]: time="2025-06-20T19:47:02.400396717Z" level=info msg="Container f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:47:02.407731 containerd[1559]: time="2025-06-20T19:47:02.407686104Z" level=info msg="CreateContainer within sandbox \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c\"" Jun 20 19:47:02.408274 containerd[1559]: time="2025-06-20T19:47:02.408205602Z" level=info msg="StartContainer for \"f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c\"" Jun 20 19:47:02.409147 containerd[1559]: time="2025-06-20T19:47:02.409121661Z" level=info msg="connecting to shim f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c" address="unix:///run/containerd/s/994cc1262ed1466486edbc9f4a312dcda3f255f897288ad4fa8f8f89b334fc91" protocol=ttrpc version=3 Jun 20 19:47:02.434987 systemd[1]: Started 
cri-containerd-f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c.scope - libcontainer container f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c. Jun 20 19:47:02.490316 systemd[1]: cri-containerd-f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c.scope: Deactivated successfully. Jun 20 19:47:02.490800 containerd[1559]: time="2025-06-20T19:47:02.490763829Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c\" id:\"f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c\" pid:4680 exited_at:{seconds:1750448822 nanos:490413935}" Jun 20 19:47:02.492024 containerd[1559]: time="2025-06-20T19:47:02.490994577Z" level=info msg="received exit event container_id:\"f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c\" id:\"f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c\" pid:4680 exited_at:{seconds:1750448822 nanos:490413935}" Jun 20 19:47:02.493114 containerd[1559]: time="2025-06-20T19:47:02.493083165Z" level=info msg="StartContainer for \"f9186f822dd4dfc38b6ebe41eb11b6a8c2381d292bf1f93ed070414d9fb6cf5c\" returns successfully" Jun 20 19:47:02.755575 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9e7d1de6169ec81c89fe78d07a6568a246d6e46cd0e2deddac75e988df9c335f-rootfs.mount: Deactivated successfully. 
Jun 20 19:47:03.394386 kubelet[2684]: E0620 19:47:03.394334 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:03.396447 containerd[1559]: time="2025-06-20T19:47:03.396402957Z" level=info msg="CreateContainer within sandbox \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Jun 20 19:47:03.413800 containerd[1559]: time="2025-06-20T19:47:03.413740709Z" level=info msg="Container f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0: CDI devices from CRI Config.CDIDevices: []" Jun 20 19:47:03.418510 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount41095656.mount: Deactivated successfully. Jun 20 19:47:03.422251 containerd[1559]: time="2025-06-20T19:47:03.422211978Z" level=info msg="CreateContainer within sandbox \"a8ee03afc3b1ca81cae5ef501720eaad4e4cee2f297894df63acea5815a3ccc7\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0\"" Jun 20 19:47:03.422719 containerd[1559]: time="2025-06-20T19:47:03.422684938Z" level=info msg="StartContainer for \"f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0\"" Jun 20 19:47:03.423475 containerd[1559]: time="2025-06-20T19:47:03.423451695Z" level=info msg="connecting to shim f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0" address="unix:///run/containerd/s/994cc1262ed1466486edbc9f4a312dcda3f255f897288ad4fa8f8f89b334fc91" protocol=ttrpc version=3 Jun 20 19:47:03.446964 systemd[1]: Started cri-containerd-f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0.scope - libcontainer container f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0. 
Jun 20 19:47:03.487653 containerd[1559]: time="2025-06-20T19:47:03.487601405Z" level=info msg="StartContainer for \"f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0\" returns successfully" Jun 20 19:47:03.548115 containerd[1559]: time="2025-06-20T19:47:03.548078836Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0\" id:\"932c4559f165423387044016579e74ee7a403d1ff2faba978bb6b519040e84b7\" pid:4748 exited_at:{seconds:1750448823 nanos:547730984}" Jun 20 19:47:03.882854 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni-avx)) Jun 20 19:47:04.400423 kubelet[2684]: E0620 19:47:04.400387 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:04.416897 kubelet[2684]: I0620 19:47:04.416831 2684 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-fx697" podStartSLOduration=6.416796498 podStartE2EDuration="6.416796498s" podCreationTimestamp="2025-06-20 19:46:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-06-20 19:47:04.415575797 +0000 UTC m=+94.343263857" watchObservedRunningTime="2025-06-20 19:47:04.416796498 +0000 UTC m=+94.344484559" Jun 20 19:47:05.402498 kubelet[2684]: E0620 19:47:05.402203 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:05.445689 containerd[1559]: time="2025-06-20T19:47:05.445646354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0\" id:\"1a063a1e417fe08a54050eb9911c18120767c25bc0bf21143c8732e815ad9e10\" pid:4889 exit_status:1 
exited_at:{seconds:1750448825 nanos:445238538}" Jun 20 19:47:06.860118 systemd-networkd[1480]: lxc_health: Link UP Jun 20 19:47:06.862093 systemd-networkd[1480]: lxc_health: Gained carrier Jun 20 19:47:07.186832 kubelet[2684]: E0620 19:47:07.186398 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:07.405743 kubelet[2684]: E0620 19:47:07.405697 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:07.574379 containerd[1559]: time="2025-06-20T19:47:07.574222520Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0\" id:\"a388430cef8bdf6dd31ba303a7270b95c1d51355b21a203a8e17683387844fd9\" pid:5280 exited_at:{seconds:1750448827 nanos:573503672}" Jun 20 19:47:08.026882 systemd-networkd[1480]: lxc_health: Gained IPv6LL Jun 20 19:47:08.407240 kubelet[2684]: E0620 19:47:08.407214 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:09.688255 containerd[1559]: time="2025-06-20T19:47:09.688209031Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0\" id:\"e8b63b23accc63fdf6dd6eccebbd5917188fe8954f2f085525688d34e4e4a527\" pid:5314 exited_at:{seconds:1750448829 nanos:687426059}" Jun 20 19:47:10.150724 kubelet[2684]: E0620 19:47:10.150691 2684 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jun 20 19:47:11.786004 containerd[1559]: time="2025-06-20T19:47:11.785941179Z" level=info msg="TaskExit 
event in podsandbox handler container_id:\"f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0\" id:\"1fb071a7e8f814306e0caafb7fe3124398206e02cedff15b4f460f92f1e1c853\" pid:5344 exited_at:{seconds:1750448831 nanos:785451646}" Jun 20 19:47:13.870580 containerd[1559]: time="2025-06-20T19:47:13.870535700Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f1f9916b1abe0e0c38fe0aedeeb410092a08bf67ba59f2dd6bce16616e1ac3b0\" id:\"2ebd903af0a0a45926034e17cc8299c7910bd9f19f211b161f81f0b31588f555\" pid:5368 exited_at:{seconds:1750448833 nanos:870245296}" Jun 20 19:47:13.876052 sshd[4486]: Connection closed by 10.0.0.1 port 34890 Jun 20 19:47:13.876497 sshd-session[4480]: pam_unix(sshd:session): session closed for user core Jun 20 19:47:13.880219 systemd[1]: sshd@27-10.0.0.133:22-10.0.0.1:34890.service: Deactivated successfully. Jun 20 19:47:13.882244 systemd[1]: session-28.scope: Deactivated successfully. Jun 20 19:47:13.883089 systemd-logind[1532]: Session 28 logged out. Waiting for processes to exit. Jun 20 19:47:13.884326 systemd-logind[1532]: Removed session 28.