Jul 7 06:10:16.864678 kernel: Linux version 6.12.35-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Sun Jul 6 21:56:00 -00 2025
Jul 7 06:10:16.864704 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:10:16.864715 kernel: BIOS-provided physical RAM map:
Jul 7 06:10:16.864722 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000002ffff] usable
Jul 7 06:10:16.864729 kernel: BIOS-e820: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jul 7 06:10:16.864735 kernel: BIOS-e820: [mem 0x0000000000050000-0x000000000009efff] usable
Jul 7 06:10:16.864743 kernel: BIOS-e820: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jul 7 06:10:16.864750 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000009b8ecfff] usable
Jul 7 06:10:16.864760 kernel: BIOS-e820: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jul 7 06:10:16.864766 kernel: BIOS-e820: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jul 7 06:10:16.864787 kernel: BIOS-e820: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jul 7 06:10:16.864796 kernel: BIOS-e820: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jul 7 06:10:16.864803 kernel: BIOS-e820: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jul 7 06:10:16.864810 kernel: BIOS-e820: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jul 7 06:10:16.864818 kernel: BIOS-e820: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jul 7 06:10:16.864825 kernel: BIOS-e820: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jul 7 06:10:16.864837 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 7 06:10:16.864844 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:10:16.864851 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:10:16.864858 kernel: NX (Execute Disable) protection: active
Jul 7 06:10:16.864865 kernel: APIC: Static calls initialized
Jul 7 06:10:16.864872 kernel: e820: update [mem 0x9a13f018-0x9a148c57] usable ==> usable
Jul 7 06:10:16.864879 kernel: e820: update [mem 0x9a102018-0x9a13ee57] usable ==> usable
Jul 7 06:10:16.864886 kernel: extended physical RAM map:
Jul 7 06:10:16.864893 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000002ffff] usable
Jul 7 06:10:16.864900 kernel: reserve setup_data: [mem 0x0000000000030000-0x000000000004ffff] reserved
Jul 7 06:10:16.864907 kernel: reserve setup_data: [mem 0x0000000000050000-0x000000000009efff] usable
Jul 7 06:10:16.864917 kernel: reserve setup_data: [mem 0x000000000009f000-0x000000000009ffff] reserved
Jul 7 06:10:16.864924 kernel: reserve setup_data: [mem 0x0000000000100000-0x000000009a102017] usable
Jul 7 06:10:16.864931 kernel: reserve setup_data: [mem 0x000000009a102018-0x000000009a13ee57] usable
Jul 7 06:10:16.864938 kernel: reserve setup_data: [mem 0x000000009a13ee58-0x000000009a13f017] usable
Jul 7 06:10:16.864945 kernel: reserve setup_data: [mem 0x000000009a13f018-0x000000009a148c57] usable
Jul 7 06:10:16.864952 kernel: reserve setup_data: [mem 0x000000009a148c58-0x000000009b8ecfff] usable
Jul 7 06:10:16.864959 kernel: reserve setup_data: [mem 0x000000009b8ed000-0x000000009bb6cfff] reserved
Jul 7 06:10:16.864966 kernel: reserve setup_data: [mem 0x000000009bb6d000-0x000000009bb7efff] ACPI data
Jul 7 06:10:16.864973 kernel: reserve setup_data: [mem 0x000000009bb7f000-0x000000009bbfefff] ACPI NVS
Jul 7 06:10:16.864980 kernel: reserve setup_data: [mem 0x000000009bbff000-0x000000009bfb0fff] usable
Jul 7 06:10:16.864987 kernel: reserve setup_data: [mem 0x000000009bfb1000-0x000000009bfb4fff] reserved
Jul 7 06:10:16.864997 kernel: reserve setup_data: [mem 0x000000009bfb5000-0x000000009bfb6fff] ACPI NVS
Jul 7 06:10:16.865004 kernel: reserve setup_data: [mem 0x000000009bfb7000-0x000000009bffffff] usable
Jul 7 06:10:16.865014 kernel: reserve setup_data: [mem 0x000000009c000000-0x000000009cffffff] reserved
Jul 7 06:10:16.865022 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Jul 7 06:10:16.865029 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jul 7 06:10:16.865036 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Jul 7 06:10:16.865045 kernel: efi: EFI v2.7 by EDK II
Jul 7 06:10:16.865053 kernel: efi: SMBIOS=0x9b9d5000 ACPI=0x9bb7e000 ACPI 2.0=0x9bb7e014 MEMATTR=0x9a1af018 RNG=0x9bb73018
Jul 7 06:10:16.865060 kernel: random: crng init done
Jul 7 06:10:16.865068 kernel: Kernel is locked down from EFI Secure Boot; see man kernel_lockdown.7
Jul 7 06:10:16.865075 kernel: secureboot: Secure boot enabled
Jul 7 06:10:16.865082 kernel: SMBIOS 2.8 present.
Jul 7 06:10:16.865089 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Jul 7 06:10:16.865097 kernel: DMI: Memory slots populated: 1/1
Jul 7 06:10:16.865104 kernel: Hypervisor detected: KVM
Jul 7 06:10:16.865111 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 7 06:10:16.865118 kernel: kvm-clock: using sched offset of 8216558387 cycles
Jul 7 06:10:16.865129 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 7 06:10:16.865136 kernel: tsc: Detected 2794.748 MHz processor
Jul 7 06:10:16.865144 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 7 06:10:16.865151 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 7 06:10:16.865159 kernel: last_pfn = 0x9c000 max_arch_pfn = 0x400000000
Jul 7 06:10:16.865166 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Jul 7 06:10:16.865178 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 7 06:10:16.865186 kernel: Using GB pages for direct mapping
Jul 7 06:10:16.865195 kernel: ACPI: Early table checksum verification disabled
Jul 7 06:10:16.865204 kernel: ACPI: RSDP 0x000000009BB7E014 000024 (v02 BOCHS )
Jul 7 06:10:16.865212 kernel: ACPI: XSDT 0x000000009BB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Jul 7 06:10:16.865220 kernel: ACPI: FACP 0x000000009BB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:10:16.865227 kernel: ACPI: DSDT 0x000000009BB7A000 002237 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:10:16.865235 kernel: ACPI: FACS 0x000000009BBDD000 000040
Jul 7 06:10:16.865242 kernel: ACPI: APIC 0x000000009BB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:10:16.865249 kernel: ACPI: HPET 0x000000009BB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:10:16.865257 kernel: ACPI: MCFG 0x000000009BB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:10:16.865267 kernel: ACPI: WAET 0x000000009BB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jul 7 06:10:16.865274 kernel: ACPI: BGRT 0x000000009BB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Jul 7 06:10:16.865282 kernel: ACPI: Reserving FACP table memory at [mem 0x9bb79000-0x9bb790f3]
Jul 7 06:10:16.865289 kernel: ACPI: Reserving DSDT table memory at [mem 0x9bb7a000-0x9bb7c236]
Jul 7 06:10:16.865297 kernel: ACPI: Reserving FACS table memory at [mem 0x9bbdd000-0x9bbdd03f]
Jul 7 06:10:16.865304 kernel: ACPI: Reserving APIC table memory at [mem 0x9bb78000-0x9bb7808f]
Jul 7 06:10:16.865311 kernel: ACPI: Reserving HPET table memory at [mem 0x9bb77000-0x9bb77037]
Jul 7 06:10:16.865319 kernel: ACPI: Reserving MCFG table memory at [mem 0x9bb76000-0x9bb7603b]
Jul 7 06:10:16.865326 kernel: ACPI: Reserving WAET table memory at [mem 0x9bb75000-0x9bb75027]
Jul 7 06:10:16.865336 kernel: ACPI: Reserving BGRT table memory at [mem 0x9bb74000-0x9bb74037]
Jul 7 06:10:16.865343 kernel: No NUMA configuration found
Jul 7 06:10:16.865351 kernel: Faking a node at [mem 0x0000000000000000-0x000000009bffffff]
Jul 7 06:10:16.865359 kernel: NODE_DATA(0) allocated [mem 0x9bf57dc0-0x9bf5efff]
Jul 7 06:10:16.865366 kernel: Zone ranges:
Jul 7 06:10:16.865373 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 7 06:10:16.865381 kernel: DMA32 [mem 0x0000000001000000-0x000000009bffffff]
Jul 7 06:10:16.865388 kernel: Normal empty
Jul 7 06:10:16.865396 kernel: Device empty
Jul 7 06:10:16.865403 kernel: Movable zone start for each node
Jul 7 06:10:16.865413 kernel: Early memory node ranges
Jul 7 06:10:16.865420 kernel: node 0: [mem 0x0000000000001000-0x000000000002ffff]
Jul 7 06:10:16.865428 kernel: node 0: [mem 0x0000000000050000-0x000000000009efff]
Jul 7 06:10:16.865435 kernel: node 0: [mem 0x0000000000100000-0x000000009b8ecfff]
Jul 7 06:10:16.865442 kernel: node 0: [mem 0x000000009bbff000-0x000000009bfb0fff]
Jul 7 06:10:16.865450 kernel: node 0: [mem 0x000000009bfb7000-0x000000009bffffff]
Jul 7 06:10:16.865457 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009bffffff]
Jul 7 06:10:16.865465 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 7 06:10:16.865472 kernel: On node 0, zone DMA: 32 pages in unavailable ranges
Jul 7 06:10:16.865483 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jul 7 06:10:16.865492 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 7 06:10:16.865500 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Jul 7 06:10:16.865508 kernel: On node 0, zone DMA32: 16384 pages in unavailable ranges
Jul 7 06:10:16.865517 kernel: ACPI: PM-Timer IO Port: 0x608
Jul 7 06:10:16.865525 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 7 06:10:16.865544 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 7 06:10:16.865552 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jul 7 06:10:16.865559 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 7 06:10:16.865571 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 7 06:10:16.865579 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 7 06:10:16.865586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 7 06:10:16.865594 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 7 06:10:16.865601 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Jul 7 06:10:16.865609 kernel: TSC deadline timer available
Jul 7 06:10:16.865616 kernel: CPU topo: Max. logical packages: 1
Jul 7 06:10:16.865624 kernel: CPU topo: Max. logical dies: 1
Jul 7 06:10:16.865631 kernel: CPU topo: Max. dies per package: 1
Jul 7 06:10:16.865652 kernel: CPU topo: Max. threads per core: 1
Jul 7 06:10:16.865667 kernel: CPU topo: Num. cores per package: 4
Jul 7 06:10:16.865686 kernel: CPU topo: Num. threads per package: 4
Jul 7 06:10:16.865704 kernel: CPU topo: Allowing 4 present CPUs plus 0 hotplug CPUs
Jul 7 06:10:16.865718 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jul 7 06:10:16.865726 kernel: kvm-guest: KVM setup pv remote TLB flush
Jul 7 06:10:16.865733 kernel: kvm-guest: setup PV sched yield
Jul 7 06:10:16.865741 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Jul 7 06:10:16.865751 kernel: Booting paravirtualized kernel on KVM
Jul 7 06:10:16.865759 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 7 06:10:16.865767 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Jul 7 06:10:16.865787 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u524288
Jul 7 06:10:16.865795 kernel: pcpu-alloc: s207832 r8192 d29736 u524288 alloc=1*2097152
Jul 7 06:10:16.865803 kernel: pcpu-alloc: [0] 0 1 2 3
Jul 7 06:10:16.865811 kernel: kvm-guest: PV spinlocks enabled
Jul 7 06:10:16.865819 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 7 06:10:16.865828 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:10:16.865839 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 7 06:10:16.865847 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 7 06:10:16.865855 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jul 7 06:10:16.865863 kernel: Fallback order for Node 0: 0
Jul 7 06:10:16.865870 kernel: Built 1 zonelists, mobility grouping on. Total pages: 638054
Jul 7 06:10:16.865878 kernel: Policy zone: DMA32
Jul 7 06:10:16.865886 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 7 06:10:16.865894 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jul 7 06:10:16.865904 kernel: ftrace: allocating 40095 entries in 157 pages
Jul 7 06:10:16.865912 kernel: ftrace: allocated 157 pages with 5 groups
Jul 7 06:10:16.865919 kernel: Dynamic Preempt: voluntary
Jul 7 06:10:16.865927 kernel: rcu: Preemptible hierarchical RCU implementation.
Jul 7 06:10:16.865940 kernel: rcu: RCU event tracing is enabled.
Jul 7 06:10:16.865948 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jul 7 06:10:16.865956 kernel: Trampoline variant of Tasks RCU enabled.
Jul 7 06:10:16.865964 kernel: Rude variant of Tasks RCU enabled.
Jul 7 06:10:16.865982 kernel: Tracing variant of Tasks RCU enabled.
Jul 7 06:10:16.866002 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 7 06:10:16.866011 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jul 7 06:10:16.866019 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:10:16.866027 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:10:16.866038 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jul 7 06:10:16.866046 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Jul 7 06:10:16.866054 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jul 7 06:10:16.866061 kernel: Console: colour dummy device 80x25
Jul 7 06:10:16.866071 kernel: printk: legacy console [ttyS0] enabled
Jul 7 06:10:16.866090 kernel: ACPI: Core revision 20240827
Jul 7 06:10:16.866102 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Jul 7 06:10:16.866112 kernel: APIC: Switch to symmetric I/O mode setup
Jul 7 06:10:16.866122 kernel: x2apic enabled
Jul 7 06:10:16.866132 kernel: APIC: Switched APIC routing to: physical x2apic
Jul 7 06:10:16.866142 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Jul 7 06:10:16.866153 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Jul 7 06:10:16.866163 kernel: kvm-guest: setup PV IPIs
Jul 7 06:10:16.866173 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jul 7 06:10:16.866187 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 7 06:10:16.866197 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Jul 7 06:10:16.866207 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jul 7 06:10:16.866217 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jul 7 06:10:16.866227 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jul 7 06:10:16.866242 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 7 06:10:16.866253 kernel: Spectre V2 : Mitigation: Retpolines
Jul 7 06:10:16.866261 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jul 7 06:10:16.866269 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jul 7 06:10:16.866280 kernel: RETBleed: Mitigation: untrained return thunk
Jul 7 06:10:16.866288 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 7 06:10:16.866296 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jul 7 06:10:16.866304 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jul 7 06:10:16.866312 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jul 7 06:10:16.866321 kernel: x86/bugs: return thunk changed
Jul 7 06:10:16.866329 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jul 7 06:10:16.866337 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 7 06:10:16.866347 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 7 06:10:16.866356 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 7 06:10:16.866364 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 7 06:10:16.866372 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jul 7 06:10:16.866380 kernel: Freeing SMP alternatives memory: 32K
Jul 7 06:10:16.866389 kernel: pid_max: default: 32768 minimum: 301
Jul 7 06:10:16.866397 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jul 7 06:10:16.866405 kernel: landlock: Up and running.
Jul 7 06:10:16.866413 kernel: SELinux: Initializing.
Jul 7 06:10:16.866423 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:10:16.866432 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jul 7 06:10:16.866442 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jul 7 06:10:16.866451 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jul 7 06:10:16.866459 kernel: ... version: 0
Jul 7 06:10:16.866467 kernel: ... bit width: 48
Jul 7 06:10:16.866477 kernel: ... generic registers: 6
Jul 7 06:10:16.866485 kernel: ... value mask: 0000ffffffffffff
Jul 7 06:10:16.866494 kernel: ... max period: 00007fffffffffff
Jul 7 06:10:16.866504 kernel: ... fixed-purpose events: 0
Jul 7 06:10:16.866512 kernel: ... event mask: 000000000000003f
Jul 7 06:10:16.866520 kernel: signal: max sigframe size: 1776
Jul 7 06:10:16.866540 kernel: rcu: Hierarchical SRCU implementation.
Jul 7 06:10:16.866548 kernel: rcu: Max phase no-delay instances is 400.
Jul 7 06:10:16.866556 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jul 7 06:10:16.866564 kernel: smp: Bringing up secondary CPUs ...
Jul 7 06:10:16.866572 kernel: smpboot: x86: Booting SMP configuration:
Jul 7 06:10:16.866580 kernel: .... node #0, CPUs: #1 #2 #3
Jul 7 06:10:16.866587 kernel: smp: Brought up 1 node, 4 CPUs
Jul 7 06:10:16.866597 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Jul 7 06:10:16.866605 kernel: Memory: 2409212K/2552216K available (14336K kernel code, 2430K rwdata, 9956K rodata, 54432K init, 2536K bss, 137064K reserved, 0K cma-reserved)
Jul 7 06:10:16.866613 kernel: devtmpfs: initialized
Jul 7 06:10:16.866621 kernel: x86/mm: Memory block size: 128MB
Jul 7 06:10:16.866629 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bb7f000-0x9bbfefff] (524288 bytes)
Jul 7 06:10:16.866637 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9bfb5000-0x9bfb6fff] (8192 bytes)
Jul 7 06:10:16.866645 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 7 06:10:16.866653 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jul 7 06:10:16.866663 kernel: pinctrl core: initialized pinctrl subsystem
Jul 7 06:10:16.866670 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 7 06:10:16.866678 kernel: audit: initializing netlink subsys (disabled)
Jul 7 06:10:16.866686 kernel: audit: type=2000 audit(1751868614.123:1): state=initialized audit_enabled=0 res=1
Jul 7 06:10:16.866694 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 7 06:10:16.866702 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 7 06:10:16.866710 kernel: cpuidle: using governor menu
Jul 7 06:10:16.866717 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 7 06:10:16.866725 kernel: dca service started, version 1.12.1
Jul 7 06:10:16.866735 kernel: PCI: ECAM [mem 0xe0000000-0xefffffff] (base 0xe0000000) for domain 0000 [bus 00-ff]
Jul 7 06:10:16.866743 kernel: PCI: Using configuration type 1 for base access
Jul 7 06:10:16.866751 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 7 06:10:16.866758 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jul 7 06:10:16.866766 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jul 7 06:10:16.866795 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jul 7 06:10:16.866803 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jul 7 06:10:16.866811 kernel: ACPI: Added _OSI(Module Device)
Jul 7 06:10:16.866819 kernel: ACPI: Added _OSI(Processor Device)
Jul 7 06:10:16.866829 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 7 06:10:16.866837 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jul 7 06:10:16.866844 kernel: ACPI: Interpreter enabled
Jul 7 06:10:16.866852 kernel: ACPI: PM: (supports S0 S5)
Jul 7 06:10:16.866860 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 7 06:10:16.866867 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 7 06:10:16.866875 kernel: PCI: Using E820 reservations for host bridge windows
Jul 7 06:10:16.866883 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Jul 7 06:10:16.866891 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 7 06:10:16.867099 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jul 7 06:10:16.867227 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Jul 7 06:10:16.867395 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Jul 7 06:10:16.867407 kernel: PCI host bridge to bus 0000:00
Jul 7 06:10:16.867571 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 7 06:10:16.867684 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 7 06:10:16.867826 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 7 06:10:16.867939 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Jul 7 06:10:16.868049 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Jul 7 06:10:16.868158 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Jul 7 06:10:16.868268 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 7 06:10:16.868556 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Jul 7 06:10:16.868701 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Jul 7 06:10:16.868855 kernel: pci 0000:00:01.0: BAR 0 [mem 0xc0000000-0xc0ffffff pref]
Jul 7 06:10:16.868978 kernel: pci 0000:00:01.0: BAR 2 [mem 0xc1044000-0xc1044fff]
Jul 7 06:10:16.869098 kernel: pci 0000:00:01.0: ROM [mem 0xffff0000-0xffffffff pref]
Jul 7 06:10:16.869217 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jul 7 06:10:16.869358 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jul 7 06:10:16.869488 kernel: pci 0000:00:02.0: BAR 0 [io 0x6100-0x611f]
Jul 7 06:10:16.869653 kernel: pci 0000:00:02.0: BAR 1 [mem 0xc1043000-0xc1043fff]
Jul 7 06:10:16.869807 kernel: pci 0000:00:02.0: BAR 4 [mem 0x380000000000-0x380000003fff 64bit pref]
Jul 7 06:10:16.869974 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jul 7 06:10:16.870099 kernel: pci 0000:00:03.0: BAR 0 [io 0x6000-0x607f]
Jul 7 06:10:16.870220 kernel: pci 0000:00:03.0: BAR 1 [mem 0xc1042000-0xc1042fff]
Jul 7 06:10:16.870340 kernel: pci 0000:00:03.0: BAR 4 [mem 0x380000004000-0x380000007fff 64bit pref]
Jul 7 06:10:16.870479 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jul 7 06:10:16.870623 kernel: pci 0000:00:04.0: BAR 0 [io 0x60e0-0x60ff]
Jul 7 06:10:16.870745 kernel: pci 0000:00:04.0: BAR 1 [mem 0xc1041000-0xc1041fff]
Jul 7 06:10:16.870897 kernel: pci 0000:00:04.0: BAR 4 [mem 0x380000008000-0x38000000bfff 64bit pref]
Jul 7 06:10:16.871018 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]
Jul 7 06:10:16.871154 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Jul 7 06:10:16.871276 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Jul 7 06:10:16.871417 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Jul 7 06:10:16.871576 kernel: pci 0000:00:1f.2: BAR 4 [io 0x60c0-0x60df]
Jul 7 06:10:16.871695 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xc1040000-0xc1040fff]
Jul 7 06:10:16.871871 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Jul 7 06:10:16.871998 kernel: pci 0000:00:1f.3: BAR 4 [io 0x6080-0x60bf]
Jul 7 06:10:16.872009 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 7 06:10:16.872017 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 7 06:10:16.872024 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 7 06:10:16.872032 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 7 06:10:16.872045 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Jul 7 06:10:16.872052 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Jul 7 06:10:16.872061 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Jul 7 06:10:16.872068 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Jul 7 06:10:16.872076 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Jul 7 06:10:16.872084 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Jul 7 06:10:16.872092 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Jul 7 06:10:16.872099 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Jul 7 06:10:16.872107 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Jul 7 06:10:16.872117 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Jul 7 06:10:16.872124 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Jul 7 06:10:16.872132 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Jul 7 06:10:16.872140 kernel: iommu: Default domain type: Translated
Jul 7 06:10:16.872148 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 7 06:10:16.872155 kernel: efivars: Registered efivars operations
Jul 7 06:10:16.872163 kernel: PCI: Using ACPI for IRQ routing
Jul 7 06:10:16.872171 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 7 06:10:16.872179 kernel: e820: reserve RAM buffer [mem 0x0009f000-0x0009ffff]
Jul 7 06:10:16.872188 kernel: e820: reserve RAM buffer [mem 0x9a102018-0x9bffffff]
Jul 7 06:10:16.872196 kernel: e820: reserve RAM buffer [mem 0x9a13f018-0x9bffffff]
Jul 7 06:10:16.872204 kernel: e820: reserve RAM buffer [mem 0x9b8ed000-0x9bffffff]
Jul 7 06:10:16.872211 kernel: e820: reserve RAM buffer [mem 0x9bfb1000-0x9bffffff]
Jul 7 06:10:16.872330 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Jul 7 06:10:16.872450 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Jul 7 06:10:16.872612 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jul 7 06:10:16.872625 kernel: vgaarb: loaded
Jul 7 06:10:16.872640 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Jul 7 06:10:16.872650 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Jul 7 06:10:16.872666 kernel: clocksource: Switched to clocksource kvm-clock
Jul 7 06:10:16.872676 kernel: VFS: Disk quotas dquot_6.6.0
Jul 7 06:10:16.872685 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 7 06:10:16.872694 kernel: pnp: PnP ACPI init
Jul 7 06:10:16.872876 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Jul 7 06:10:16.872888 kernel: pnp: PnP ACPI: found 6 devices
Jul 7 06:10:16.872896 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 7 06:10:16.872908 kernel: NET: Registered PF_INET protocol family
Jul 7 06:10:16.872917 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 7 06:10:16.872925 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jul 7 06:10:16.872933 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 7 06:10:16.872941 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jul 7 06:10:16.872949 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jul 7 06:10:16.872957 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jul 7 06:10:16.872965 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:10:16.872975 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jul 7 06:10:16.872982 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 7 06:10:16.872990 kernel: NET: Registered PF_XDP protocol family
Jul 7 06:10:16.873114 kernel: pci 0000:00:04.0: ROM [mem 0xfffc0000-0xffffffff pref]: can't claim; no compatible bridge window
Jul 7 06:10:16.873235 kernel: pci 0000:00:04.0: ROM [mem 0x9d000000-0x9d03ffff pref]: assigned
Jul 7 06:10:16.873359 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 7 06:10:16.873491 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 7 06:10:16.873619 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 7 06:10:16.873738 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Jul 7 06:10:16.873867 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Jul 7 06:10:16.873978 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Jul 7 06:10:16.873989 kernel: PCI: CLS 0 bytes, default 64
Jul 7 06:10:16.873997 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x2848df6a9de, max_idle_ns: 440795280912 ns
Jul 7 06:10:16.874005 kernel: Initialise system trusted keyrings
Jul 7 06:10:16.874013 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jul 7 06:10:16.874021 kernel: Key type asymmetric registered
Jul 7 06:10:16.874028 kernel: Asymmetric key parser 'x509' registered
Jul 7 06:10:16.874041 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jul 7 06:10:16.874062 kernel: io scheduler mq-deadline registered
Jul 7 06:10:16.874072 kernel: io scheduler kyber registered
Jul 7 06:10:16.874080 kernel: io scheduler bfq registered
Jul 7 06:10:16.874088 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 7 06:10:16.874097 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Jul 7 06:10:16.874105 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Jul 7 06:10:16.874113 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Jul 7 06:10:16.874121 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 7 06:10:16.874131 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 7 06:10:16.874140 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 7 06:10:16.874148 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 7 06:10:16.874156 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 7 06:10:16.874290 kernel: rtc_cmos 00:04: RTC can wake from S4
Jul 7 06:10:16.874302 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 7 06:10:16.874422 kernel: rtc_cmos 00:04: registered as rtc0
Jul 7 06:10:16.874570 kernel: rtc_cmos 00:04: setting system clock to 2025-07-07T06:10:16 UTC (1751868616)
Jul 7 06:10:16.874713 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jul 7 06:10:16.874723 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jul 7 06:10:16.874731 kernel: efifb: probing for efifb
Jul 7 06:10:16.874739 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Jul 7 06:10:16.874748 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Jul 7 06:10:16.874756 kernel: efifb: scrolling: redraw
Jul 7 06:10:16.874764 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Jul 7 06:10:16.874786 kernel: Console: switching to colour frame buffer device 160x50
Jul 7 06:10:16.874807 kernel: fb0: EFI VGA frame buffer device
Jul 7 06:10:16.874820 kernel: pstore: Using crash dump compression: deflate
Jul 7 06:10:16.874829 kernel: pstore: Registered efi_pstore as persistent store backend
Jul 7 06:10:16.874839 kernel: NET: Registered PF_INET6 protocol family
Jul 7 06:10:16.874847 kernel: Segment Routing with IPv6
Jul 7 06:10:16.874855 kernel: In-situ OAM (IOAM) with IPv6
Jul 7 06:10:16.874866 kernel: NET: Registered PF_PACKET protocol family
Jul 7 06:10:16.874874 kernel: Key type dns_resolver registered
Jul 7 06:10:16.874882 kernel: IPI shorthand broadcast: enabled
Jul 7 06:10:16.874890 kernel: sched_clock: Marking stable (3572006025, 151571909)->(3744835233, -21257299)
Jul 7 06:10:16.874899 kernel: registered taskstats version 1
Jul 7 06:10:16.874907 kernel: Loading compiled-in X.509 certificates
Jul 7 06:10:16.874915 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.35-flatcar: b8e96f4c6a9e663230fc9c12b186cf91fcc7a64e'
Jul 7 06:10:16.874923 kernel: Demotion targets for Node 0: null
Jul 7 06:10:16.874931 kernel: Key type .fscrypt registered
Jul 7 06:10:16.874942 kernel: Key type fscrypt-provisioning registered
Jul 7 06:10:16.874950 kernel: ima: No TPM chip found, activating TPM-bypass!
Jul 7 06:10:16.874958 kernel: ima: Allocated hash algorithm: sha1
Jul 7 06:10:16.874966 kernel: ima: No architecture policies found
Jul 7 06:10:16.874974 kernel: clk: Disabling unused clocks
Jul 7 06:10:16.874982 kernel: Warning: unable to open an initial console.
Jul 7 06:10:16.874990 kernel: Freeing unused kernel image (initmem) memory: 54432K
Jul 7 06:10:16.874998 kernel: Write protecting the kernel read-only data: 24576k
Jul 7 06:10:16.875006 kernel: Freeing unused kernel image (rodata/data gap) memory: 284K
Jul 7 06:10:16.875016 kernel: Run /init as init process
Jul 7 06:10:16.875024 kernel: with arguments:
Jul 7 06:10:16.875032 kernel: /init
Jul 7 06:10:16.875040 kernel: with environment:
Jul 7 06:10:16.875048 kernel: HOME=/
Jul 7 06:10:16.875056 kernel: TERM=linux
Jul 7 06:10:16.875064 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 7 06:10:16.875074 systemd[1]: Successfully made /usr/ read-only.
Jul 7 06:10:16.875087 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jul 7 06:10:16.875096 systemd[1]: Detected virtualization kvm.
Jul 7 06:10:16.875105 systemd[1]: Detected architecture x86-64.
Jul 7 06:10:16.875113 systemd[1]: Running in initrd.
Jul 7 06:10:16.875122 systemd[1]: No hostname configured, using default hostname.
Jul 7 06:10:16.875131 systemd[1]: Hostname set to .
Jul 7 06:10:16.875139 systemd[1]: Initializing machine ID from VM UUID.
Jul 7 06:10:16.875148 systemd[1]: Queued start job for default target initrd.target.
Jul 7 06:10:16.875158 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jul 7 06:10:16.875167 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jul 7 06:10:16.875177 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jul 7 06:10:16.875186 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jul 7 06:10:16.875194 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jul 7 06:10:16.875204 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jul 7 06:10:16.875216 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jul 7 06:10:16.875225 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jul 7 06:10:16.875233 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jul 7 06:10:16.875242 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jul 7 06:10:16.875251 systemd[1]: Reached target paths.target - Path Units.
Jul 7 06:10:16.875260 systemd[1]: Reached target slices.target - Slice Units.
Jul 7 06:10:16.875268 systemd[1]: Reached target swap.target - Swaps.
Jul 7 06:10:16.875277 systemd[1]: Reached target timers.target - Timer Units.
Jul 7 06:10:16.875286 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jul 7 06:10:16.875296 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jul 7 06:10:16.875305 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jul 7 06:10:16.875314 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jul 7 06:10:16.875322 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jul 7 06:10:16.875331 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jul 7 06:10:16.875342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jul 7 06:10:16.875351 systemd[1]: Reached target sockets.target - Socket Units.
Jul 7 06:10:16.875359 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jul 7 06:10:16.875370 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jul 7 06:10:16.875379 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jul 7 06:10:16.875388 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jul 7 06:10:16.875397 systemd[1]: Starting systemd-fsck-usr.service...
Jul 7 06:10:16.875406 systemd[1]: Starting systemd-journald.service - Journal Service...
Jul 7 06:10:16.875415 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jul 7 06:10:16.875424 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:10:16.875433 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jul 7 06:10:16.875445 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jul 7 06:10:16.875456 systemd[1]: Finished systemd-fsck-usr.service.
Jul 7 06:10:16.875467 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jul 7 06:10:16.875508 systemd-journald[220]: Collecting audit messages is disabled.
Jul 7 06:10:16.875544 systemd-journald[220]: Journal started
Jul 7 06:10:16.875564 systemd-journald[220]: Runtime Journal (/run/log/journal/84269b13140d4f51adcad22587932ac1) is 6M, max 48.2M, 42.2M free.
Jul 7 06:10:16.866080 systemd-modules-load[221]: Inserted module 'overlay'
Jul 7 06:10:16.877572 systemd[1]: Started systemd-journald.service - Journal Service.
Jul 7 06:10:16.882904 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jul 7 06:10:16.885644 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:10:16.894894 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 7 06:10:16.896897 systemd-modules-load[221]: Inserted module 'br_netfilter'
Jul 7 06:10:16.897921 kernel: Bridge firewalling registered
Jul 7 06:10:16.905920 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jul 7 06:10:16.906433 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jul 7 06:10:16.911709 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jul 7 06:10:16.912853 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jul 7 06:10:16.916683 systemd-tmpfiles[237]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jul 7 06:10:16.917728 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jul 7 06:10:16.931025 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jul 7 06:10:16.939943 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jul 7 06:10:16.941851 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jul 7 06:10:16.944156 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jul 7 06:10:16.956998 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jul 7 06:10:16.961150 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jul 7 06:10:17.000188 systemd-resolved[256]: Positive Trust Anchors:
Jul 7 06:10:17.000213 systemd-resolved[256]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 7 06:10:17.000258 systemd-resolved[256]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jul 7 06:10:17.003398 systemd-resolved[256]: Defaulting to hostname 'linux'.
Jul 7 06:10:17.004934 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jul 7 06:10:17.013178 dracut-cmdline[265]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=2e0b2c30526b1d273b6d599d4c30389a93a14ce36aaa5af83a05b11c5ea5ae50
Jul 7 06:10:17.010355 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jul 7 06:10:17.156853 kernel: SCSI subsystem initialized
Jul 7 06:10:17.169820 kernel: Loading iSCSI transport class v2.0-870.
Jul 7 06:10:17.184833 kernel: iscsi: registered transport (tcp)
Jul 7 06:10:17.208823 kernel: iscsi: registered transport (qla4xxx)
Jul 7 06:10:17.208891 kernel: QLogic iSCSI HBA Driver
Jul 7 06:10:17.233746 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jul 7 06:10:17.259502 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jul 7 06:10:17.261239 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jul 7 06:10:17.331989 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jul 7 06:10:17.334188 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jul 7 06:10:17.394842 kernel: raid6: avx2x4 gen() 28397 MB/s
Jul 7 06:10:17.411814 kernel: raid6: avx2x2 gen() 30257 MB/s
Jul 7 06:10:17.428928 kernel: raid6: avx2x1 gen() 20723 MB/s
Jul 7 06:10:17.428986 kernel: raid6: using algorithm avx2x2 gen() 30257 MB/s
Jul 7 06:10:17.446925 kernel: raid6: .... xor() 17890 MB/s, rmw enabled
Jul 7 06:10:17.446997 kernel: raid6: using avx2x2 recovery algorithm
Jul 7 06:10:17.469829 kernel: xor: automatically using best checksumming function avx
Jul 7 06:10:17.677834 kernel: Btrfs loaded, zoned=no, fsverity=no
Jul 7 06:10:17.687753 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jul 7 06:10:17.691154 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jul 7 06:10:17.721150 systemd-udevd[473]: Using default interface naming scheme 'v255'.
Jul 7 06:10:17.727152 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jul 7 06:10:17.730918 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jul 7 06:10:17.768619 dracut-pre-trigger[479]: rd.md=0: removing MD RAID activation
Jul 7 06:10:17.801483 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jul 7 06:10:17.805384 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jul 7 06:10:17.893993 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jul 7 06:10:17.896886 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jul 7 06:10:17.980916 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jul 7 06:10:17.982838 kernel: cryptd: max_cpu_qlen set to 1000
Jul 7 06:10:17.989051 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Jul 7 06:10:17.988928 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:10:17.989116 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:10:17.992007 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:10:17.996136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:10:17.999915 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jul 7 06:10:17.999948 kernel: GPT:9289727 != 19775487
Jul 7 06:10:17.999959 kernel: GPT:Alternate GPT header not at the end of the disk.
Jul 7 06:10:17.999988 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:10:18.003454 kernel: GPT:9289727 != 19775487
Jul 7 06:10:18.003469 kernel: GPT: Use GNU Parted to correct GPT errors.
Jul 7 06:10:18.003485 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:10:18.005813 kernel: AES CTR mode by8 optimization enabled
Jul 7 06:10:18.015791 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Jul 7 06:10:18.023620 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jul 7 06:10:18.027126 kernel: libata version 3.00 loaded.
Jul 7 06:10:18.030044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:10:18.034176 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jul 7 06:10:18.039210 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jul 7 06:10:18.040805 kernel: ahci 0000:00:1f.2: version 3.0
Jul 7 06:10:18.046825 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Jul 7 06:10:18.050638 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Jul 7 06:10:18.050911 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Jul 7 06:10:18.051096 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Jul 7 06:10:18.064810 kernel: scsi host0: ahci
Jul 7 06:10:18.066286 kernel: scsi host1: ahci
Jul 7 06:10:18.064901 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jul 7 06:10:18.070653 kernel: scsi host2: ahci
Jul 7 06:10:18.070853 kernel: scsi host3: ahci
Jul 7 06:10:18.071313 kernel: scsi host4: ahci
Jul 7 06:10:18.071489 kernel: scsi host5: ahci
Jul 7 06:10:18.066592 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Jul 7 06:10:18.078217 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34 lpm-pol 0
Jul 7 06:10:18.078249 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34 lpm-pol 0
Jul 7 06:10:18.078260 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34 lpm-pol 0
Jul 7 06:10:18.078277 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34 lpm-pol 0
Jul 7 06:10:18.078287 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34 lpm-pol 0
Jul 7 06:10:18.078298 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34 lpm-pol 0
Jul 7 06:10:18.090395 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jul 7 06:10:18.101887 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jul 7 06:10:18.114249 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jul 7 06:10:18.124316 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jul 7 06:10:18.127067 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jul 7 06:10:18.307078 disk-uuid[637]: Primary Header is updated.
Jul 7 06:10:18.307078 disk-uuid[637]: Secondary Entries is updated.
Jul 7 06:10:18.307078 disk-uuid[637]: Secondary Header is updated.
Jul 7 06:10:18.311163 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:10:18.315826 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:10:18.386011 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Jul 7 06:10:18.386062 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Jul 7 06:10:18.386079 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Jul 7 06:10:18.388793 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Jul 7 06:10:18.388815 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Jul 7 06:10:18.390101 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Jul 7 06:10:18.391367 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jul 7 06:10:18.391388 kernel: ata3.00: applying bridge limits
Jul 7 06:10:18.392805 kernel: ata3.00: configured for UDMA/100
Jul 7 06:10:18.393909 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Jul 7 06:10:18.429298 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jul 7 06:10:18.429564 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jul 7 06:10:18.452802 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Jul 7 06:10:18.806386 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jul 7 06:10:18.808434 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jul 7 06:10:18.809987 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jul 7 06:10:18.811227 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jul 7 06:10:18.815016 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jul 7 06:10:18.845269 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jul 7 06:10:19.317806 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jul 7 06:10:19.318111 disk-uuid[638]: The operation has completed successfully.
Jul 7 06:10:19.343194 systemd[1]: disk-uuid.service: Deactivated successfully.
Jul 7 06:10:19.343360 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jul 7 06:10:19.390731 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jul 7 06:10:19.421935 sh[666]: Success
Jul 7 06:10:19.443251 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jul 7 06:10:19.443324 kernel: device-mapper: uevent: version 1.0.3
Jul 7 06:10:19.444483 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jul 7 06:10:19.456018 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Jul 7 06:10:19.493839 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jul 7 06:10:19.498634 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jul 7 06:10:19.515923 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jul 7 06:10:19.524671 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Jul 7 06:10:19.524718 kernel: BTRFS: device fsid 9d124217-7448-4fc6-a329-8a233bb5a0ac devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (678)
Jul 7 06:10:19.527249 kernel: BTRFS info (device dm-0): first mount of filesystem 9d124217-7448-4fc6-a329-8a233bb5a0ac
Jul 7 06:10:19.527278 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:10:19.527298 kernel: BTRFS info (device dm-0): using free-space-tree
Jul 7 06:10:19.533460 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jul 7 06:10:19.535075 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jul 7 06:10:19.536482 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jul 7 06:10:19.537613 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jul 7 06:10:19.541747 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jul 7 06:10:19.566949 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (711)
Jul 7 06:10:19.567000 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:10:19.567016 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Jul 7 06:10:19.567949 kernel: BTRFS info (device vda6): using free-space-tree
Jul 7 06:10:19.576812 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f
Jul 7 06:10:19.577686 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jul 7 06:10:19.579302 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jul 7 06:10:19.763379 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jul 7 06:10:19.767439 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jul 7 06:10:19.787350 ignition[752]: Ignition 2.21.0
Jul 7 06:10:19.787365 ignition[752]: Stage: fetch-offline
Jul 7 06:10:19.787420 ignition[752]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:10:19.787432 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:10:19.787602 ignition[752]: parsed url from cmdline: ""
Jul 7 06:10:19.787607 ignition[752]: no config URL provided
Jul 7 06:10:19.787613 ignition[752]: reading system config file "/usr/lib/ignition/user.ign"
Jul 7 06:10:19.787623 ignition[752]: no config at "/usr/lib/ignition/user.ign"
Jul 7 06:10:19.787655 ignition[752]: op(1): [started] loading QEMU firmware config module
Jul 7 06:10:19.787664 ignition[752]: op(1): executing: "modprobe" "qemu_fw_cfg"
Jul 7 06:10:19.798108 ignition[752]: op(1): [finished] loading QEMU firmware config module
Jul 7 06:10:19.837723 ignition[752]: parsing config with SHA512: 70e3a3849362c7ca512146a3e12390b84879d8659174297d6459f79c054222c96fcf13a0b2a183db4104309048359dad11ab853858ce28e96bc38e0446bba4a9
Jul 7 06:10:19.878055 unknown[752]: fetched base config from "system"
Jul 7 06:10:19.878073 unknown[752]: fetched user config from "qemu"
Jul 7 06:10:19.878648 ignition[752]: fetch-offline: fetch-offline passed
Jul 7 06:10:19.878743 ignition[752]: Ignition finished successfully
Jul 7 06:10:19.884833 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jul 7 06:10:19.891766 systemd-networkd[853]: lo: Link UP
Jul 7 06:10:19.891796 systemd-networkd[853]: lo: Gained carrier
Jul 7 06:10:19.893759 systemd-networkd[853]: Enumeration completed
Jul 7 06:10:19.893904 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jul 7 06:10:19.894212 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:10:19.894217 systemd-networkd[853]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jul 7 06:10:19.894326 systemd[1]: Reached target network.target - Network.
Jul 7 06:10:19.894650 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Jul 7 06:10:19.900079 systemd-networkd[853]: eth0: Link UP
Jul 7 06:10:19.900085 systemd-networkd[853]: eth0: Gained carrier
Jul 7 06:10:19.900248 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jul 7 06:10:19.903830 systemd-networkd[853]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jul 7 06:10:19.919816 systemd-networkd[853]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1
Jul 7 06:10:19.954878 ignition[860]: Ignition 2.21.0
Jul 7 06:10:19.954894 ignition[860]: Stage: kargs
Jul 7 06:10:19.955172 ignition[860]: no configs at "/usr/lib/ignition/base.d"
Jul 7 06:10:19.955188 ignition[860]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Jul 7 06:10:19.956268 ignition[860]: kargs: kargs passed
Jul 7 06:10:19.956333 ignition[860]: Ignition finished successfully
Jul 7 06:10:19.962349 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jul 7 06:10:19.965876 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jul 7 06:10:20.020033 ignition[870]: Ignition 2.21.0 Jul 7 06:10:20.020058 ignition[870]: Stage: disks Jul 7 06:10:20.020256 ignition[870]: no configs at "/usr/lib/ignition/base.d" Jul 7 06:10:20.020272 ignition[870]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:10:20.026188 ignition[870]: disks: disks passed Jul 7 06:10:20.026310 ignition[870]: Ignition finished successfully Jul 7 06:10:20.030892 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 7 06:10:20.031649 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 7 06:10:20.033525 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 7 06:10:20.034295 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:10:20.034680 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:10:20.042379 systemd[1]: Reached target basic.target - Basic System. Jul 7 06:10:20.045912 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 7 06:10:20.088624 systemd-fsck[880]: ROOT: clean, 15/553520 files, 52789/553472 blocks Jul 7 06:10:20.097081 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 7 06:10:20.099508 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 7 06:10:20.283821 kernel: EXT4-fs (vda9): mounted filesystem df0fa228-af1b-4496-9a54-2d4ccccd27d9 r/w with ordered data mode. Quota mode: none. Jul 7 06:10:20.285102 systemd[1]: Mounted sysroot.mount - /sysroot. Jul 7 06:10:20.288061 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 7 06:10:20.292463 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:10:20.295612 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 7 06:10:20.298119 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jul 7 06:10:20.298184 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 7 06:10:20.300343 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:10:20.311856 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 7 06:10:20.314945 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 7 06:10:20.321304 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (888) Jul 7 06:10:20.321330 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:10:20.321343 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:10:20.321356 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 06:10:20.325924 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:10:20.362233 initrd-setup-root[913]: cut: /sysroot/etc/passwd: No such file or directory Jul 7 06:10:20.367158 initrd-setup-root[920]: cut: /sysroot/etc/group: No such file or directory Jul 7 06:10:20.373406 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory Jul 7 06:10:20.379541 initrd-setup-root[934]: cut: /sysroot/etc/gshadow: No such file or directory Jul 7 06:10:20.483250 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 7 06:10:20.485914 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
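The fsck line reports the ROOT ext4 filesystem clean (15 of 553520 inodes, 52789 of 553472 blocks in use); the service addresses the device by label rather than by path. A sketch of the equivalent manual check, run before the filesystem is mounted; -n answers "no" to all prompts, so it cannot modify anything:

    blkid -L ROOT                  # resolve the label to a device, e.g. /dev/vda9
    e2fsck -n "$(blkid -L ROOT)"   # read-only consistency check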
Jul 7 06:10:20.487822 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 7 06:10:20.511809 kernel: BTRFS info (device vda6): last unmount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:10:20.523724 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 7 06:10:20.532268 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 7 06:10:20.554275 ignition[1003]: INFO : Ignition 2.21.0 Jul 7 06:10:20.554275 ignition[1003]: INFO : Stage: mount Jul 7 06:10:20.556455 ignition[1003]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:10:20.556455 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:10:20.559088 ignition[1003]: INFO : mount: mount passed Jul 7 06:10:20.559088 ignition[1003]: INFO : Ignition finished successfully Jul 7 06:10:20.562280 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 7 06:10:20.566280 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 7 06:10:20.597581 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 7 06:10:20.634810 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1015) Jul 7 06:10:20.634880 kernel: BTRFS info (device vda6): first mount of filesystem 847f3129-822b-493d-8278-974df083638f Jul 7 06:10:20.636368 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jul 7 06:10:20.636391 kernel: BTRFS info (device vda6): using free-space-tree Jul 7 06:10:20.642059 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 7 06:10:20.746123 ignition[1032]: INFO : Ignition 2.21.0 Jul 7 06:10:20.746123 ignition[1032]: INFO : Stage: files Jul 7 06:10:20.748408 ignition[1032]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:10:20.748408 ignition[1032]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:10:20.748408 ignition[1032]: DEBUG : files: compiled without relabeling support, skipping Jul 7 06:10:20.752824 ignition[1032]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 7 06:10:20.752824 ignition[1032]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 7 06:10:20.756384 ignition[1032]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 7 06:10:20.756384 ignition[1032]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 7 06:10:20.756384 ignition[1032]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 7 06:10:20.756384 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 06:10:20.756384 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jul 7 06:10:20.753911 unknown[1032]: wrote ssh authorized keys file for user: core Jul 7 06:10:20.796336 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 7 06:10:20.972583 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jul 7 06:10:20.972583 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 7 06:10:20.976980 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jul 7 06:10:20.976980 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:10:20.976980 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 7 06:10:20.976980 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:10:20.976980 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 7 06:10:20.976980 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:10:20.976980 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 7 06:10:21.158886 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:10:21.161059 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 7 06:10:21.161059 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 06:10:21.227308 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 06:10:21.227308 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 06:10:21.232147 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Jul 7 06:10:21.475114 systemd-networkd[853]: eth0: Gained IPv6LL Jul 7 06:10:21.955209 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 7 06:10:22.878601 ignition[1032]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Jul 7 06:10:22.881099 ignition[1032]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 7 06:10:22.883430 ignition[1032]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:10:22.998284 ignition[1032]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 7 06:10:22.998284 ignition[1032]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 7 06:10:22.998284 ignition[1032]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jul 7 06:10:23.002911 ignition[1032]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 06:10:23.002911 ignition[1032]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jul 7 06:10:23.002911 ignition[1032]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Jul 7 06:10:23.002911 ignition[1032]: INFO : files: op(f): [started] setting preset to disabled for "coreos-metadata.service" Jul 7 06:10:23.035543 ignition[1032]: INFO : files: op(f): op(10): [started] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 06:10:23.042565 ignition[1032]: INFO : files: op(f): op(10): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jul 7 06:10:23.044433 ignition[1032]: INFO : files: op(f): [finished] setting preset to disabled for "coreos-metadata.service" Jul 7 06:10:23.044433 ignition[1032]: INFO : files: op(11): [started] setting preset to enabled for "prepare-helm.service" Jul 7 06:10:23.047301 ignition[1032]: INFO : files: op(11): [finished] setting preset to enabled for "prepare-helm.service" Jul 7 06:10:23.047301 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:10:23.047301 ignition[1032]: INFO : files: createResultFile: createFiles: op(12): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 7 06:10:23.047301 ignition[1032]: INFO : files: files passed Jul 7 06:10:23.047301 ignition[1032]: INFO : Ignition finished successfully Jul 7 06:10:23.056666 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 7 06:10:23.058678 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 7 06:10:23.061320 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 7 06:10:23.082915 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 7 06:10:23.083057 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 7 06:10:23.087475 initrd-setup-root-after-ignition[1061]: grep: /sysroot/oem/oem-release: No such file or directory Jul 7 06:10:23.091957 initrd-setup-root-after-ignition[1063]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:10:23.093795 initrd-setup-root-after-ignition[1063]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:10:23.095538 initrd-setup-root-after-ignition[1067]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 7 06:10:23.099649 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:10:23.100459 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 7 06:10:23.103881 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 7 06:10:23.175435 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 7 06:10:23.176558 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 7 06:10:23.179307 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 7 06:10:23.181315 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 7 06:10:23.183506 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 7 06:10:23.185900 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 7 06:10:23.215176 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:10:23.221501 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 7 06:10:23.255497 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
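The files stage above created the core user, fetched the Helm tarball, wrote the manifests and /etc/flatcar/update.conf, installed prepare-helm.service, and set unit presets; the /sysroot prefix is simply the initrd's view of the real root. A sketch of a Butane config that would produce similar operations when compiled to the Ignition JSON this boot consumed; the SSH key and the unit body are illustrative, not recovered from this machine:

    cat > config.bu <<'EOF'
    variant: flatcar
    version: 1.0.0
    passwd:
      users:
        - name: core
          ssh_authorized_keys:
            - ssh-ed25519 AAAA...placeholder
    storage:
      files:
        - path: /opt/helm-v3.17.0-linux-amd64.tar.gz
          contents:
            source: https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz
    systemd:
      units:
        - name: prepare-helm.service
          enabled: true
          contents: |
            [Unit]
            Description=Unpack helm to /opt/bin
            [Service]
            Type=oneshot
            ExecStart=/usr/bin/tar -C /opt/bin --strip-components=1 \
              -xzf /opt/helm-v3.17.0-linux-amd64.tar.gz linux-amd64/helm
            [Install]
            WantedBy=multi-user.target
        - name: coreos-metadata.service
          enabled: false
    EOF
    butane --pretty --strict config.bu > config.ign

The enabled: true/false flags are what surface in the log as the "setting preset to enabled/disabled" operations.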
Jul 7 06:10:23.256137 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:10:23.256530 systemd[1]: Stopped target timers.target - Timer Units. Jul 7 06:10:23.260247 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 7 06:10:23.260426 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 7 06:10:23.263905 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 7 06:10:23.264400 systemd[1]: Stopped target basic.target - Basic System. Jul 7 06:10:23.264738 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jul 7 06:10:23.265242 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 7 06:10:23.265587 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 7 06:10:23.266091 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jul 7 06:10:23.266425 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 7 06:10:23.266768 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 7 06:10:23.267138 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 7 06:10:23.267483 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 7 06:10:23.282742 systemd[1]: Stopped target swap.target - Swaps. Jul 7 06:10:23.284566 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 7 06:10:23.284749 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 7 06:10:23.286537 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:10:23.287088 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:10:23.287464 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jul 7 06:10:23.293363 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:10:23.294259 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 7 06:10:23.294462 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 7 06:10:23.299581 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 7 06:10:23.299758 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 7 06:10:23.300363 systemd[1]: Stopped target paths.target - Path Units. Jul 7 06:10:23.304623 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 7 06:10:23.309867 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:10:23.310472 systemd[1]: Stopped target slices.target - Slice Units. Jul 7 06:10:23.314235 systemd[1]: Stopped target sockets.target - Socket Units. Jul 7 06:10:23.314833 systemd[1]: iscsid.socket: Deactivated successfully. Jul 7 06:10:23.314976 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 7 06:10:23.316554 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 7 06:10:23.316686 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 7 06:10:23.318269 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 7 06:10:23.318458 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 7 06:10:23.320266 systemd[1]: ignition-files.service: Deactivated successfully. Jul 7 06:10:23.320433 systemd[1]: Stopped ignition-files.service - Ignition (files). 
Jul 7 06:10:23.324523 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 7 06:10:23.325174 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 7 06:10:23.325329 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:10:23.328657 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 7 06:10:23.330848 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 7 06:10:23.331025 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:10:23.331854 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 7 06:10:23.332072 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 7 06:10:23.341902 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 7 06:10:23.342039 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jul 7 06:10:23.406397 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 7 06:10:23.450867 ignition[1087]: INFO : Ignition 2.21.0 Jul 7 06:10:23.450867 ignition[1087]: INFO : Stage: umount Jul 7 06:10:23.452761 ignition[1087]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 7 06:10:23.452761 ignition[1087]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Jul 7 06:10:23.455483 ignition[1087]: INFO : umount: umount passed Jul 7 06:10:23.459598 ignition[1087]: INFO : Ignition finished successfully Jul 7 06:10:23.463816 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 7 06:10:23.463986 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 7 06:10:23.464844 systemd[1]: Stopped target network.target - Network. Jul 7 06:10:23.477662 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 7 06:10:23.477733 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 7 06:10:23.479493 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 7 06:10:23.479547 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 7 06:10:23.481849 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 7 06:10:23.481907 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 7 06:10:23.482319 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jul 7 06:10:23.482381 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jul 7 06:10:23.486159 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 7 06:10:23.487843 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 7 06:10:23.497446 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 7 06:10:23.497632 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 7 06:10:23.502139 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Jul 7 06:10:23.502454 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 7 06:10:23.502584 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 7 06:10:23.506446 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Jul 7 06:10:23.507228 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jul 7 06:10:23.508280 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 7 06:10:23.508342 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:10:23.511624 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jul 7 06:10:23.512438 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 7 06:10:23.512565 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 7 06:10:23.515197 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 7 06:10:23.515286 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:10:23.520817 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 7 06:10:23.520918 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 7 06:10:23.521577 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 7 06:10:23.521640 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:10:23.526003 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:10:23.527888 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 7 06:10:23.527969 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:10:23.553313 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 7 06:10:23.553625 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:10:23.582647 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 7 06:10:23.582742 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 7 06:10:23.584527 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 7 06:10:23.584576 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:10:23.586835 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 7 06:10:23.586912 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 7 06:10:23.587875 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 7 06:10:23.587947 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 7 06:10:23.588759 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 7 06:10:23.588847 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 7 06:10:23.590884 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 7 06:10:23.598077 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jul 7 06:10:23.598216 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:10:23.602243 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jul 7 06:10:23.602332 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:10:23.605971 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:10:23.606041 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:10:23.611089 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Jul 7 06:10:23.611176 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Jul 7 06:10:23.611240 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:10:23.611683 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 7 06:10:23.613004 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jul 7 06:10:23.621855 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 7 06:10:23.622068 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 7 06:10:24.043871 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 7 06:10:24.044033 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 7 06:10:24.045167 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 7 06:10:24.047182 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 7 06:10:24.047283 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 7 06:10:24.048718 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 7 06:10:24.075768 systemd[1]: Switching root. Jul 7 06:10:24.127442 systemd-journald[220]: Journal stopped Jul 7 06:10:26.809066 systemd-journald[220]: Received SIGTERM from PID 1 (systemd). Jul 7 06:10:26.809159 kernel: SELinux: policy capability network_peer_controls=1 Jul 7 06:10:26.809178 kernel: SELinux: policy capability open_perms=1 Jul 7 06:10:26.809197 kernel: SELinux: policy capability extended_socket_class=1 Jul 7 06:10:26.809215 kernel: SELinux: policy capability always_check_network=0 Jul 7 06:10:26.809230 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 7 06:10:26.809244 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 7 06:10:26.809258 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 7 06:10:26.809310 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 7 06:10:26.809338 kernel: SELinux: policy capability userspace_initial_context=0 Jul 7 06:10:26.809357 kernel: audit: type=1403 audit(1751868625.672:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 7 06:10:26.809379 systemd[1]: Successfully loaded SELinux policy in 56.270ms. Jul 7 06:10:26.809398 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 16.873ms. Jul 7 06:10:26.809421 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jul 7 06:10:26.809439 systemd[1]: Detected virtualization kvm. Jul 7 06:10:26.809454 systemd[1]: Detected architecture x86-64. Jul 7 06:10:26.809470 systemd[1]: Detected first boot. Jul 7 06:10:26.809483 systemd[1]: Initializing machine ID from VM UUID. Jul 7 06:10:26.809502 zram_generator::config[1144]: No configuration found. Jul 7 06:10:26.809521 kernel: Guest personality initialized and is inactive Jul 7 06:10:26.809532 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jul 7 06:10:26.809547 kernel: Initialized host personality Jul 7 06:10:26.809560 kernel: NET: Registered PF_VSOCK protocol family Jul 7 06:10:26.809575 systemd[1]: Populated /etc with preset unit settings. Jul 7 06:10:26.809589 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Jul 7 06:10:26.809601 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 7 06:10:26.809619 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 7 06:10:26.809631 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 7 06:10:26.809644 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. 
Jul 7 06:10:26.809665 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 7 06:10:26.809677 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 7 06:10:26.809689 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jul 7 06:10:26.809702 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 7 06:10:26.809714 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 7 06:10:26.809727 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 7 06:10:26.809744 systemd[1]: Created slice user.slice - User and Session Slice. Jul 7 06:10:26.809761 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jul 7 06:10:26.809789 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 7 06:10:26.809802 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 7 06:10:26.809815 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 7 06:10:26.809827 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 7 06:10:26.809840 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jul 7 06:10:26.809858 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jul 7 06:10:26.809871 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jul 7 06:10:26.809887 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 7 06:10:26.809903 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 7 06:10:26.809915 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 7 06:10:26.809928 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 7 06:10:26.809940 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 7 06:10:26.809952 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 7 06:10:26.809972 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 7 06:10:26.809984 systemd[1]: Reached target slices.target - Slice Units. Jul 7 06:10:26.810002 systemd[1]: Reached target swap.target - Swaps. Jul 7 06:10:26.810014 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 7 06:10:26.810027 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 7 06:10:26.810040 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jul 7 06:10:26.810052 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 7 06:10:26.810064 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 7 06:10:26.810076 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 7 06:10:26.810088 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 7 06:10:26.810100 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 7 06:10:26.810118 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 7 06:10:26.810133 systemd[1]: Mounting media.mount - External Media Directory... 
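The \x2d sequences in the slice names above (system-addon\x2dconfig.slice, system-addon\x2drun.slice) are systemd's unit-name escaping: "-" separates levels of the slice hierarchy, so a literal dash inside a component has to be encoded. A sketch of the round trip with systemd-escape; the outputs shown are what is expected and worth verifying on a live system:

    systemd-escape 'addon-config'                  # expected: addon\x2dconfig
    systemd-escape --unescape 'addon\x2dconfig'    # expected: addon-config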
Jul 7 06:10:26.810149 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:10:26.810164 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 7 06:10:26.810180 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 7 06:10:26.810200 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 7 06:10:26.810218 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 7 06:10:26.810234 systemd[1]: Reached target machines.target - Containers. Jul 7 06:10:26.810255 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 7 06:10:26.810270 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:10:26.810299 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 7 06:10:26.810316 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jul 7 06:10:26.810332 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:10:26.810348 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:10:26.810363 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:10:26.810376 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 7 06:10:26.810388 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:10:26.810406 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 7 06:10:26.810419 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 7 06:10:26.810431 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 7 06:10:26.810443 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 7 06:10:26.810456 systemd[1]: Stopped systemd-fsck-usr.service. Jul 7 06:10:26.810469 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:10:26.810481 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 7 06:10:26.810493 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 7 06:10:26.810510 kernel: loop: module loaded Jul 7 06:10:26.810522 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 7 06:10:26.810534 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 7 06:10:26.810549 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jul 7 06:10:26.810565 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 7 06:10:26.810583 kernel: fuse: init (API version 7.41) Jul 7 06:10:26.810603 systemd[1]: verity-setup.service: Deactivated successfully. Jul 7 06:10:26.810615 systemd[1]: Stopped verity-setup.service. Jul 7 06:10:26.810636 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 7 06:10:26.810648 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 7 06:10:26.810667 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 7 06:10:26.810679 systemd[1]: Mounted media.mount - External Media Directory. Jul 7 06:10:26.810692 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 7 06:10:26.810704 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 7 06:10:26.810716 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 7 06:10:26.810766 systemd-journald[1208]: Collecting audit messages is disabled. Jul 7 06:10:26.810825 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 7 06:10:26.810838 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 7 06:10:26.810860 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 7 06:10:26.810872 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:10:26.810888 systemd-journald[1208]: Journal started Jul 7 06:10:26.810916 systemd-journald[1208]: Runtime Journal (/run/log/journal/84269b13140d4f51adcad22587932ac1) is 6M, max 48.2M, 42.2M free. Jul 7 06:10:26.465063 systemd[1]: Queued start job for default target multi-user.target. Jul 7 06:10:26.492892 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jul 7 06:10:26.493515 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 7 06:10:26.813950 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:10:26.813976 systemd[1]: Started systemd-journald.service - Journal Service. Jul 7 06:10:26.816928 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:10:26.817845 kernel: ACPI: bus type drm_connector registered Jul 7 06:10:26.817252 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:10:26.819010 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:10:26.819254 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:10:26.820669 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 7 06:10:26.820919 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 7 06:10:26.822422 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:10:26.822649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:10:26.824261 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 7 06:10:26.825993 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 7 06:10:26.828689 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 7 06:10:26.830359 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jul 7 06:10:26.846888 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 7 06:10:26.850283 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 7 06:10:26.853929 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 7 06:10:26.855369 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 7 06:10:26.855423 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 7 06:10:26.857952 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. 
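systemd-journald came up here with a volatile runtime journal under /run/log/journal (6M used, 48.2M max on this machine), which holds entries only until they are flushed to persistent storage in /var/log/journal. A few sketch commands for inspecting it once the system is up:

    journalctl --header | head                 # metadata of the active journal files
    journalctl -b -u ignition-setup.service    # this boot, filtered to one unit
    journalctl --flush                         # ask journald to flush /run -> /var (needs root)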
Jul 7 06:10:26.865134 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 7 06:10:26.932812 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:10:26.935407 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 7 06:10:26.937990 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 7 06:10:26.939973 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:10:26.941857 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jul 7 06:10:26.943519 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:10:26.949567 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 7 06:10:26.963102 systemd-journald[1208]: Time spent on flushing to /var/log/journal/84269b13140d4f51adcad22587932ac1 is 21.601ms for 1035 entries. Jul 7 06:10:26.963102 systemd-journald[1208]: System Journal (/var/log/journal/84269b13140d4f51adcad22587932ac1) is 8M, max 195.6M, 187.6M free. Jul 7 06:10:27.208529 systemd-journald[1208]: Received client request to flush runtime journal. Jul 7 06:10:27.208603 kernel: loop0: detected capacity change from 0 to 146240 Jul 7 06:10:27.208636 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 7 06:10:27.208654 kernel: loop1: detected capacity change from 0 to 113872 Jul 7 06:10:27.208681 kernel: loop2: detected capacity change from 0 to 224512 Jul 7 06:10:27.208702 kernel: loop3: detected capacity change from 0 to 146240 Jul 7 06:10:27.208719 kernel: loop4: detected capacity change from 0 to 113872 Jul 7 06:10:26.956939 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jul 7 06:10:26.960907 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 7 06:10:26.963885 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 7 06:10:26.966050 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 7 06:10:26.989044 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 7 06:10:27.148468 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jul 7 06:10:27.153488 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 7 06:10:27.163066 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jul 7 06:10:27.210512 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 7 06:10:27.215212 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 7 06:10:27.217534 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jul 7 06:10:27.228834 kernel: loop5: detected capacity change from 0 to 224512 Jul 7 06:10:27.244248 (sd-merge)[1267]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'. Jul 7 06:10:27.245121 (sd-merge)[1267]: Merged extensions into '/usr'. Jul 7 06:10:27.251047 systemd[1]: Reload requested from client PID 1248 ('systemd-sysext') (unit systemd-sysext.service)... Jul 7 06:10:27.251192 systemd[1]: Reloading... Jul 7 06:10:27.328825 zram_generator::config[1309]: No configuration found. 
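The (sd-merge) lines record systemd-sysext overlaying the containerd-flatcar, docker-flatcar and kubernetes extension images onto /usr; the kubernetes.raw symlink written into /etc/extensions during the Ignition files stage is what made the last one eligible. A sketch of inspecting and re-applying the merge at runtime:

    systemd-sysext status    # which hierarchies are merged, from which images
    systemd-sysext refresh   # re-merge after adding or removing an image
                             # (images are picked up from /etc/extensions,
                             # /run/extensions and /var/lib/extensions)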
Jul 7 06:10:27.383902 ldconfig[1243]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 7 06:10:27.449337 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:10:27.536491 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 7 06:10:27.536959 systemd[1]: Reloading finished in 283 ms. Jul 7 06:10:27.571676 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 7 06:10:27.573380 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 7 06:10:27.575310 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jul 7 06:10:27.577064 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 7 06:10:27.594806 systemd[1]: Starting ensure-sysext.service... Jul 7 06:10:27.597146 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 7 06:10:27.599664 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jul 7 06:10:27.687914 systemd[1]: Reload requested from client PID 1348 ('systemctl') (unit ensure-sysext.service)... Jul 7 06:10:27.687931 systemd[1]: Reloading... Jul 7 06:10:27.692562 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jul 7 06:10:27.692609 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jul 7 06:10:27.693178 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 7 06:10:27.693895 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 7 06:10:27.695026 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Jul 7 06:10:27.695044 systemd-tmpfiles[1349]: ACLs are not supported, ignoring. Jul 7 06:10:27.695121 systemd-tmpfiles[1350]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 7 06:10:27.695475 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Jul 7 06:10:27.695562 systemd-tmpfiles[1350]: ACLs are not supported, ignoring. Jul 7 06:10:27.743815 zram_generator::config[1380]: No configuration found. Jul 7 06:10:27.779507 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:10:27.779523 systemd-tmpfiles[1350]: Skipping /boot Jul 7 06:10:27.793756 systemd-tmpfiles[1350]: Detected autofs mount point /boot during canonicalization of boot. Jul 7 06:10:27.793791 systemd-tmpfiles[1350]: Skipping /boot Jul 7 06:10:27.834177 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:10:27.916884 systemd[1]: Reloading finished in 228 ms. Jul 7 06:10:27.939413 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 7 06:10:27.996974 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:10:27.997161 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
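The systemd-tmpfiles warnings above are benign: two packaged tmpfiles.d snippets declare the same path, and the later duplicate is ignored, exactly as the message says. A sketch of the pattern that triggers it; the paths match the log, while the mode and ownership fields here are illustrative:

    # /usr/lib/tmpfiles.d/nfs-utils.conf
    d /var/lib/nfs/sm      0700 - - -
    # a second snippet declaring the same path: ignored with "Duplicate line for path"
    d /var/lib/nfs/sm      0700 - - -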
Jul 7 06:10:27.998737 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:10:28.001085 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:10:28.003496 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:10:28.005008 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:10:28.005161 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:10:28.005279 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:10:28.008279 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:10:28.008460 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:10:28.008626 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:10:28.008718 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jul 7 06:10:28.008834 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:10:28.009423 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:10:28.009669 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:10:28.011563 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:10:28.011808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:10:28.013522 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:10:28.013745 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:10:28.021058 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:10:28.021301 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 7 06:10:28.022883 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 7 06:10:28.025301 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 7 06:10:28.028139 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 7 06:10:28.035991 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 7 06:10:28.037287 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 7 06:10:28.037449 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jul 7 06:10:28.037645 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 7 06:10:28.039065 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 7 06:10:28.039304 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 7 06:10:28.041063 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 7 06:10:28.041328 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 7 06:10:28.043039 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 7 06:10:28.043254 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 7 06:10:28.045216 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 7 06:10:28.045442 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 7 06:10:28.050468 systemd[1]: Finished ensure-sysext.service. Jul 7 06:10:28.056048 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jul 7 06:10:28.059993 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:10:28.109749 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 7 06:10:28.112402 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 7 06:10:28.113685 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 7 06:10:28.113806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 7 06:10:28.124609 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 7 06:10:28.128832 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 7 06:10:28.133690 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 7 06:10:28.139177 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 7 06:10:28.152462 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 7 06:10:28.159463 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 7 06:10:28.230951 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jul 7 06:10:28.263073 augenrules[1467]: No rules Jul 7 06:10:28.265928 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:10:28.266360 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:10:28.283856 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 7 06:10:28.286019 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 7 06:10:28.307634 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 7 06:10:28.311084 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 7 06:10:28.313502 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 7 06:10:28.320857 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 7 06:10:28.322718 systemd[1]: Reached target time-set.target - System Time Set. 
Jul 7 06:10:28.332026 systemd-resolved[1437]: Positive Trust Anchors: Jul 7 06:10:28.332043 systemd-resolved[1437]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 7 06:10:28.332077 systemd-resolved[1437]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jul 7 06:10:28.336039 systemd-resolved[1437]: Defaulting to hostname 'linux'. Jul 7 06:10:28.337737 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 7 06:10:28.339608 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 7 06:10:28.341054 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 7 06:10:28.356183 systemd-udevd[1475]: Using default interface naming scheme 'v255'. Jul 7 06:10:28.378975 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 7 06:10:28.381438 systemd[1]: Reached target sysinit.target - System Initialization. Jul 7 06:10:28.383010 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 7 06:10:28.384636 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 7 06:10:28.386869 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Jul 7 06:10:28.388548 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 7 06:10:28.390079 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 7 06:10:28.391675 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 7 06:10:28.393264 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 7 06:10:28.393321 systemd[1]: Reached target paths.target - Path Units. Jul 7 06:10:28.394597 systemd[1]: Reached target timers.target - Timer Units. Jul 7 06:10:28.397182 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jul 7 06:10:28.401490 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 7 06:10:28.410127 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jul 7 06:10:28.414422 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jul 7 06:10:28.415957 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jul 7 06:10:28.425210 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 7 06:10:28.427670 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jul 7 06:10:28.435062 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 7 06:10:28.437304 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 7 06:10:28.444671 systemd[1]: Reached target sockets.target - Socket Units. Jul 7 06:10:28.446017 systemd[1]: Reached target basic.target - Basic System. 
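systemd-resolved seeded its DNSSEC positive trust anchor with the root-zone DS record shown above, loaded the usual negative anchors for private and reverse-lookup zones, and, with no hostname configured yet, defaulted to 'linux'. Once the stub resolver is running, a couple of sketch queries against it:

    resolvectl status             # per-link DNS servers and DNSSEC setting
    resolvectl query flatcar.org  # resolve through the 127.0.0.53 stub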
Jul 7 06:10:28.447450 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:10:28.447577 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 7 06:10:28.451939 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 7 06:10:28.455107 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jul 7 06:10:28.458067 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 7 06:10:28.461071 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 7 06:10:28.462128 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 7 06:10:28.464030 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Jul 7 06:10:28.469465 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 7 06:10:28.478015 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 7 06:10:28.480970 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 7 06:10:28.485337 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 7 06:10:28.490013 jq[1511]: false Jul 7 06:10:28.498940 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 7 06:10:28.501597 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing passwd entry cache Jul 7 06:10:28.501230 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 7 06:10:28.500313 oslogin_cache_refresh[1513]: Refreshing passwd entry cache Jul 7 06:10:28.503433 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting users, quitting Jul 7 06:10:28.503433 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:10:28.503420 oslogin_cache_refresh[1513]: Failure getting users, quitting Jul 7 06:10:28.503515 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing group entry cache Jul 7 06:10:28.503437 oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Jul 7 06:10:28.503482 oslogin_cache_refresh[1513]: Refreshing group entry cache Jul 7 06:10:28.504556 oslogin_cache_refresh[1513]: Failure getting groups, quitting Jul 7 06:10:28.504188 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 7 06:10:28.504828 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting groups, quitting Jul 7 06:10:28.504828 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:10:28.504566 oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Jul 7 06:10:28.505319 systemd[1]: Starting update-engine.service - Update Engine... Jul 7 06:10:28.508991 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 7 06:10:28.511761 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jul 7 06:10:28.513478 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 7 06:10:28.513991 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 7 06:10:28.514372 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Jul 7 06:10:28.514819 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Jul 7 06:10:28.528974 extend-filesystems[1512]: Found /dev/vda6 Jul 7 06:10:28.530928 jq[1531]: true Jul 7 06:10:28.524383 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 7 06:10:28.524688 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jul 7 06:10:28.538071 systemd[1]: motdgen.service: Deactivated successfully. Jul 7 06:10:28.538895 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 7 06:10:28.550190 update_engine[1530]: I20250707 06:10:28.550123 1530 main.cc:92] Flatcar Update Engine starting Jul 7 06:10:28.559662 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 7 06:10:28.574941 jq[1537]: true Jul 7 06:10:28.635661 tar[1534]: linux-amd64/LICENSE Jul 7 06:10:28.635661 tar[1534]: linux-amd64/helm Jul 7 06:10:28.644478 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Jul 7 06:10:28.647972 kernel: ACPI: button: Power Button [PWRF] Jul 7 06:10:28.655440 dbus-daemon[1509]: [system] SELinux support is enabled Jul 7 06:10:28.655601 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 7 06:10:28.670550 update_engine[1530]: I20250707 06:10:28.667402 1530 update_check_scheduler.cc:74] Next update check in 8m23s Jul 7 06:10:28.664636 systemd-networkd[1508]: lo: Link UP Jul 7 06:10:28.664640 systemd-networkd[1508]: lo: Gained carrier Jul 7 06:10:28.679812 systemd-networkd[1508]: Enumeration completed Jul 7 06:10:28.693874 systemd-logind[1529]: New seat seat0. Jul 7 06:10:28.695881 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 7 06:10:28.703743 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device Jul 7 06:10:28.704192 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Jul 7 06:10:28.704437 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Jul 7 06:10:28.699288 systemd[1]: Reached target network.target - Network. Jul 7 06:10:28.704971 systemd[1]: Starting containerd.service - containerd container runtime... Jul 7 06:10:28.706103 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 7 06:10:28.706137 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 7 06:10:28.709173 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jul 7 06:10:28.712020 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 7 06:10:28.713198 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 7 06:10:28.713222 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 7 06:10:28.714544 systemd[1]: Started systemd-logind.service - User Login Management. 
Jul 7 06:10:28.715948 systemd[1]: Started update-engine.service - Update Engine. Jul 7 06:10:28.754039 extend-filesystems[1512]: Found /dev/vda9 Jul 7 06:10:28.755133 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 7 06:10:28.763398 extend-filesystems[1512]: Checking size of /dev/vda9 Jul 7 06:10:28.778287 bash[1565]: Updated "/home/core/.ssh/authorized_keys" Jul 7 06:10:28.778285 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 7 06:10:28.781454 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jul 7 06:10:28.782196 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:10:28.782203 systemd-networkd[1508]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 7 06:10:28.787671 systemd-networkd[1508]: eth0: Link UP Jul 7 06:10:28.789924 systemd-networkd[1508]: eth0: Gained carrier Jul 7 06:10:28.790056 systemd-networkd[1508]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 7 06:10:28.800853 extend-filesystems[1512]: Resized partition /dev/vda9 Jul 7 06:10:28.804455 extend-filesystems[1597]: resize2fs 1.47.2 (1-Jan-2025) Jul 7 06:10:28.805931 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jul 7 06:10:28.805938 (ntainerd)[1591]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jul 7 06:10:28.807454 systemd-networkd[1508]: eth0: DHCPv4 address 10.0.0.94/16, gateway 10.0.0.1 acquired from 10.0.0.1 Jul 7 06:10:28.809691 systemd-timesyncd[1446]: Network configuration changed, trying to establish connection. Jul 7 06:10:29.841125 kernel: mousedev: PS/2 mouse device common for all mice Jul 7 06:10:29.836968 systemd-timesyncd[1446]: Contacted time server 10.0.0.1:123 (10.0.0.1). Jul 7 06:10:29.837220 systemd-resolved[1437]: Clock change detected. Flushing caches. Jul 7 06:10:29.837293 systemd-timesyncd[1446]: Initial clock synchronization to Mon 2025-07-07 06:10:29.835337 UTC. Jul 7 06:10:29.844130 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks Jul 7 06:10:29.869152 kernel: EXT4-fs (vda9): resized filesystem to 1864699 Jul 7 06:10:29.890933 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jul 7 06:10:29.895485 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 7 06:10:29.898847 extend-filesystems[1597]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jul 7 06:10:29.898847 extend-filesystems[1597]: old_desc_blocks = 1, new_desc_blocks = 1 Jul 7 06:10:29.898847 extend-filesystems[1597]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long. Jul 7 06:10:29.913114 extend-filesystems[1512]: Resized filesystem in /dev/vda9 Jul 7 06:10:29.904601 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 7 06:10:29.917198 sshd_keygen[1538]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 7 06:10:29.905176 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. 
Jul 7 06:10:29.949151 kernel: kvm_amd: TSC scaling supported Jul 7 06:10:29.949299 kernel: kvm_amd: Nested Virtualization enabled Jul 7 06:10:29.949376 kernel: kvm_amd: Nested Paging enabled Jul 7 06:10:29.949411 kernel: kvm_amd: LBR virtualization supported Jul 7 06:10:29.949438 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported Jul 7 06:10:29.949466 kernel: kvm_amd: Virtual GIF supported Jul 7 06:10:29.952240 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:10:29.965834 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 7 06:10:29.993729 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 7 06:10:30.011368 systemd-logind[1529]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 7 06:10:30.019204 systemd[1]: issuegen.service: Deactivated successfully. Jul 7 06:10:30.019741 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 7 06:10:30.024465 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 7 06:10:30.058139 kernel: EDAC MC: Ver: 3.0.0 Jul 7 06:10:30.076232 systemd-logind[1529]: Watching system buttons on /dev/input/event2 (Power Button) Jul 7 06:10:30.087889 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 7 06:10:30.094403 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 7 06:10:30.094697 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jul 7 06:10:30.101604 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Jul 7 06:10:30.108402 locksmithd[1578]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 7 06:10:30.110479 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jul 7 06:10:30.130853 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 7 06:10:30.135831 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 7 06:10:30.139368 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 7 06:10:30.140986 systemd[1]: Reached target getty.target - Login Prompts. Jul 7 06:10:30.182884 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jul 7 06:10:30.201942 containerd[1591]: time="2025-07-07T06:10:30Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jul 7 06:10:30.202897 containerd[1591]: time="2025-07-07T06:10:30.202824115Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Jul 7 06:10:30.213892 containerd[1591]: time="2025-07-07T06:10:30.213802503Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.582µs" Jul 7 06:10:30.213892 containerd[1591]: time="2025-07-07T06:10:30.213860131Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jul 7 06:10:30.213892 containerd[1591]: time="2025-07-07T06:10:30.213884267Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jul 7 06:10:30.214222 containerd[1591]: time="2025-07-07T06:10:30.214187525Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jul 7 06:10:30.214222 containerd[1591]: time="2025-07-07T06:10:30.214218153Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jul 7 06:10:30.214284 containerd[1591]: time="2025-07-07T06:10:30.214252167Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:10:30.214369 containerd[1591]: time="2025-07-07T06:10:30.214341133Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jul 7 06:10:30.214369 containerd[1591]: time="2025-07-07T06:10:30.214361211Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:10:30.214875 containerd[1591]: time="2025-07-07T06:10:30.214827736Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jul 7 06:10:30.214875 containerd[1591]: time="2025-07-07T06:10:30.214854216Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:10:30.214875 containerd[1591]: time="2025-07-07T06:10:30.214869174Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jul 7 06:10:30.214963 containerd[1591]: time="2025-07-07T06:10:30.214880194Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jul 7 06:10:30.215039 containerd[1591]: time="2025-07-07T06:10:30.215009968Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jul 7 06:10:30.215412 containerd[1591]: time="2025-07-07T06:10:30.215362389Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jul 7 06:10:30.215412 containerd[1591]: time="2025-07-07T06:10:30.215408184Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 Jul 7 06:10:30.215497 containerd[1591]: time="2025-07-07T06:10:30.215424074Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jul 7 06:10:30.215497 containerd[1591]: time="2025-07-07T06:10:30.215467476Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jul 7 06:10:30.215851 containerd[1591]: time="2025-07-07T06:10:30.215739055Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jul 7 06:10:30.215851 containerd[1591]: time="2025-07-07T06:10:30.215836277Z" level=info msg="metadata content store policy set" policy=shared Jul 7 06:10:30.225198 containerd[1591]: time="2025-07-07T06:10:30.225118926Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jul 7 06:10:30.225198 containerd[1591]: time="2025-07-07T06:10:30.225180351Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jul 7 06:10:30.225198 containerd[1591]: time="2025-07-07T06:10:30.225200689Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jul 7 06:10:30.225198 containerd[1591]: time="2025-07-07T06:10:30.225216599Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jul 7 06:10:30.225472 containerd[1591]: time="2025-07-07T06:10:30.225232799Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jul 7 06:10:30.225472 containerd[1591]: time="2025-07-07T06:10:30.225247447Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jul 7 06:10:30.225472 containerd[1591]: time="2025-07-07T06:10:30.225265310Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jul 7 06:10:30.225472 containerd[1591]: time="2025-07-07T06:10:30.225280969Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jul 7 06:10:30.225472 containerd[1591]: time="2025-07-07T06:10:30.225294314Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jul 7 06:10:30.225472 containerd[1591]: time="2025-07-07T06:10:30.225307720Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jul 7 06:10:30.225472 containerd[1591]: time="2025-07-07T06:10:30.225320023Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jul 7 06:10:30.225472 containerd[1591]: time="2025-07-07T06:10:30.225337115Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225507765Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225534936Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225568348Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225585831Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225601601Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225624053Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225641536Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225658237Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225673265Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jul 7 06:10:30.225689 containerd[1591]: time="2025-07-07T06:10:30.225688664Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jul 7 06:10:30.225935 containerd[1591]: time="2025-07-07T06:10:30.225714513Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jul 7 06:10:30.225935 containerd[1591]: time="2025-07-07T06:10:30.225800814Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jul 7 06:10:30.225935 containerd[1591]: time="2025-07-07T06:10:30.225820902Z" level=info msg="Start snapshots syncer" Jul 7 06:10:30.225935 containerd[1591]: time="2025-07-07T06:10:30.225882467Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jul 7 06:10:30.226336 containerd[1591]: time="2025-07-07T06:10:30.226273561Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jul 7 
06:10:30.226500 containerd[1591]: time="2025-07-07T06:10:30.226345977Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jul 7 06:10:30.226500 containerd[1591]: time="2025-07-07T06:10:30.226474538Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jul 7 06:10:30.226659 containerd[1591]: time="2025-07-07T06:10:30.226619500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jul 7 06:10:30.226706 containerd[1591]: time="2025-07-07T06:10:30.226654786Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jul 7 06:10:30.226706 containerd[1591]: time="2025-07-07T06:10:30.226688790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jul 7 06:10:30.226758 containerd[1591]: time="2025-07-07T06:10:30.226707865Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jul 7 06:10:30.226758 containerd[1591]: time="2025-07-07T06:10:30.226726080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jul 7 06:10:30.226758 containerd[1591]: time="2025-07-07T06:10:30.226742029Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jul 7 06:10:30.226835 containerd[1591]: time="2025-07-07T06:10:30.226757909Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jul 7 06:10:30.226835 containerd[1591]: time="2025-07-07T06:10:30.226788537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jul 7 06:10:30.226835 containerd[1591]: time="2025-07-07T06:10:30.226807011Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jul 7 06:10:30.226835 containerd[1591]: time="2025-07-07T06:10:30.226824805Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jul 7 06:10:30.227739 containerd[1591]: time="2025-07-07T06:10:30.227689867Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:10:30.227739 containerd[1591]: time="2025-07-07T06:10:30.227720775Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jul 7 06:10:30.227739 containerd[1591]: time="2025-07-07T06:10:30.227733739Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:10:30.227838 containerd[1591]: time="2025-07-07T06:10:30.227746283Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jul 7 06:10:30.227838 containerd[1591]: time="2025-07-07T06:10:30.227757594Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jul 7 06:10:30.227838 containerd[1591]: time="2025-07-07T06:10:30.227770699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jul 7 06:10:30.227838 containerd[1591]: time="2025-07-07T06:10:30.227784525Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jul 7 06:10:30.227838 
containerd[1591]: time="2025-07-07T06:10:30.227809602Z" level=info msg="runtime interface created" Jul 7 06:10:30.227838 containerd[1591]: time="2025-07-07T06:10:30.227817677Z" level=info msg="created NRI interface" Jul 7 06:10:30.227838 containerd[1591]: time="2025-07-07T06:10:30.227827635Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jul 7 06:10:30.227838 containerd[1591]: time="2025-07-07T06:10:30.227841321Z" level=info msg="Connect containerd service" Jul 7 06:10:30.227984 containerd[1591]: time="2025-07-07T06:10:30.227885404Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 7 06:10:30.229448 containerd[1591]: time="2025-07-07T06:10:30.229408720Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jul 7 06:10:30.343546 containerd[1591]: time="2025-07-07T06:10:30.343395076Z" level=info msg="Start subscribing containerd event" Jul 7 06:10:30.343546 containerd[1591]: time="2025-07-07T06:10:30.343479283Z" level=info msg="Start recovering state" Jul 7 06:10:30.343723 containerd[1591]: time="2025-07-07T06:10:30.343616942Z" level=info msg="Start event monitor" Jul 7 06:10:30.343723 containerd[1591]: time="2025-07-07T06:10:30.343636779Z" level=info msg="Start cni network conf syncer for default" Jul 7 06:10:30.343723 containerd[1591]: time="2025-07-07T06:10:30.343647078Z" level=info msg="Start streaming server" Jul 7 06:10:30.343723 containerd[1591]: time="2025-07-07T06:10:30.343657768Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jul 7 06:10:30.343723 containerd[1591]: time="2025-07-07T06:10:30.343667707Z" level=info msg="runtime interface starting up..." Jul 7 06:10:30.343723 containerd[1591]: time="2025-07-07T06:10:30.343675231Z" level=info msg="starting plugins..." Jul 7 06:10:30.343723 containerd[1591]: time="2025-07-07T06:10:30.343695539Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jul 7 06:10:30.343906 containerd[1591]: time="2025-07-07T06:10:30.343695058Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 7 06:10:30.343906 containerd[1591]: time="2025-07-07T06:10:30.343870798Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 7 06:10:30.344084 systemd[1]: Started containerd.service - containerd container runtime. Jul 7 06:10:30.344514 containerd[1591]: time="2025-07-07T06:10:30.344478067Z" level=info msg="containerd successfully booted in 0.143252s" Jul 7 06:10:30.382610 tar[1534]: linux-amd64/README.md Jul 7 06:10:30.417755 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 7 06:10:31.303400 systemd-networkd[1508]: eth0: Gained IPv6LL Jul 7 06:10:31.306808 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 7 06:10:31.308704 systemd[1]: Reached target network-online.target - Network is Online. Jul 7 06:10:31.311612 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Jul 7 06:10:31.314163 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:10:31.316560 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 7 06:10:31.434411 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 7 06:10:31.438683 systemd[1]: coreos-metadata.service: Deactivated successfully. 
Jul 7 06:10:31.439077 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Jul 7 06:10:31.441078 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 7 06:10:31.945170 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 7 06:10:31.947898 systemd[1]: Started sshd@0-10.0.0.94:22-10.0.0.1:38330.service - OpenSSH per-connection server daemon (10.0.0.1:38330). Jul 7 06:10:32.061805 sshd[1685]: Accepted publickey for core from 10.0.0.1 port 38330 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:10:32.064879 sshd-session[1685]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:32.074623 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 7 06:10:32.168164 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 7 06:10:32.182808 systemd-logind[1529]: New session 1 of user core. Jul 7 06:10:32.204528 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 7 06:10:32.209260 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 7 06:10:32.231704 (systemd)[1689]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 7 06:10:32.235350 systemd-logind[1529]: New session c1 of user core. Jul 7 06:10:32.438262 systemd[1689]: Queued start job for default target default.target. Jul 7 06:10:32.510940 systemd[1689]: Created slice app.slice - User Application Slice. Jul 7 06:10:32.510983 systemd[1689]: Reached target paths.target - Paths. Jul 7 06:10:32.511054 systemd[1689]: Reached target timers.target - Timers. Jul 7 06:10:32.512894 systemd[1689]: Starting dbus.socket - D-Bus User Message Bus Socket... Jul 7 06:10:32.526861 systemd[1689]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jul 7 06:10:32.526992 systemd[1689]: Reached target sockets.target - Sockets. Jul 7 06:10:32.527030 systemd[1689]: Reached target basic.target - Basic System. Jul 7 06:10:32.527071 systemd[1689]: Reached target default.target - Main User Target. Jul 7 06:10:32.527134 systemd[1689]: Startup finished in 279ms. Jul 7 06:10:32.528002 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 7 06:10:32.531606 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 7 06:10:32.630413 systemd[1]: Started sshd@1-10.0.0.94:22-10.0.0.1:38344.service - OpenSSH per-connection server daemon (10.0.0.1:38344). Jul 7 06:10:32.689466 sshd[1700]: Accepted publickey for core from 10.0.0.1 port 38344 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:10:32.707545 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:32.714854 systemd-logind[1529]: New session 2 of user core. Jul 7 06:10:32.734410 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 7 06:10:32.824123 sshd[1702]: Connection closed by 10.0.0.1 port 38344 Jul 7 06:10:32.824454 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:32.839023 systemd[1]: sshd@1-10.0.0.94:22-10.0.0.1:38344.service: Deactivated successfully. Jul 7 06:10:32.841753 systemd[1]: session-2.scope: Deactivated successfully. Jul 7 06:10:32.842780 systemd-logind[1529]: Session 2 logged out. Waiting for processes to exit. Jul 7 06:10:32.847669 systemd[1]: Started sshd@2-10.0.0.94:22-10.0.0.1:38348.service - OpenSSH per-connection server daemon (10.0.0.1:38348). 
Jul 7 06:10:32.850366 systemd-logind[1529]: Removed session 2. Jul 7 06:10:32.914851 sshd[1708]: Accepted publickey for core from 10.0.0.1 port 38348 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:10:32.916982 sshd-session[1708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:32.922543 systemd-logind[1529]: New session 3 of user core. Jul 7 06:10:32.932271 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 7 06:10:32.989826 sshd[1710]: Connection closed by 10.0.0.1 port 38348 Jul 7 06:10:32.991885 sshd-session[1708]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:32.997045 systemd[1]: sshd@2-10.0.0.94:22-10.0.0.1:38348.service: Deactivated successfully. Jul 7 06:10:32.999177 systemd[1]: session-3.scope: Deactivated successfully. Jul 7 06:10:32.999991 systemd-logind[1529]: Session 3 logged out. Waiting for processes to exit. Jul 7 06:10:33.001402 systemd-logind[1529]: Removed session 3. Jul 7 06:10:33.076242 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:10:33.078007 systemd[1]: Reached target multi-user.target - Multi-User System. Jul 7 06:10:33.079600 systemd[1]: Startup finished in 3.636s (kernel) + 9.032s (initrd) + 6.463s (userspace) = 19.132s. Jul 7 06:10:33.086524 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:10:33.559883 kubelet[1720]: E0707 06:10:33.559789 1720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:10:33.563719 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:10:33.563963 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:10:33.564408 systemd[1]: kubelet.service: Consumed 1.988s CPU time, 265.6M memory peak. Jul 7 06:10:43.008219 systemd[1]: Started sshd@3-10.0.0.94:22-10.0.0.1:44190.service - OpenSSH per-connection server daemon (10.0.0.1:44190). Jul 7 06:10:43.066291 sshd[1733]: Accepted publickey for core from 10.0.0.1 port 44190 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:10:43.067932 sshd-session[1733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:43.072447 systemd-logind[1529]: New session 4 of user core. Jul 7 06:10:43.079251 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 7 06:10:43.133817 sshd[1735]: Connection closed by 10.0.0.1 port 44190 Jul 7 06:10:43.133899 sshd-session[1733]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:43.147573 systemd[1]: sshd@3-10.0.0.94:22-10.0.0.1:44190.service: Deactivated successfully. Jul 7 06:10:43.149907 systemd[1]: session-4.scope: Deactivated successfully. Jul 7 06:10:43.150830 systemd-logind[1529]: Session 4 logged out. Waiting for processes to exit. Jul 7 06:10:43.154275 systemd[1]: Started sshd@4-10.0.0.94:22-10.0.0.1:44198.service - OpenSSH per-connection server daemon (10.0.0.1:44198). Jul 7 06:10:43.155004 systemd-logind[1529]: Removed session 4. 
Jul 7 06:10:43.209179 sshd[1741]: Accepted publickey for core from 10.0.0.1 port 44198 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:10:43.211165 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:43.217333 systemd-logind[1529]: New session 5 of user core. Jul 7 06:10:43.232498 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 7 06:10:43.285771 sshd[1743]: Connection closed by 10.0.0.1 port 44198 Jul 7 06:10:43.286261 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:43.295200 systemd[1]: sshd@4-10.0.0.94:22-10.0.0.1:44198.service: Deactivated successfully. Jul 7 06:10:43.297525 systemd[1]: session-5.scope: Deactivated successfully. Jul 7 06:10:43.298444 systemd-logind[1529]: Session 5 logged out. Waiting for processes to exit. Jul 7 06:10:43.301889 systemd[1]: Started sshd@5-10.0.0.94:22-10.0.0.1:44208.service - OpenSSH per-connection server daemon (10.0.0.1:44208). Jul 7 06:10:43.302658 systemd-logind[1529]: Removed session 5. Jul 7 06:10:43.363466 sshd[1749]: Accepted publickey for core from 10.0.0.1 port 44208 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:10:43.365357 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:43.371606 systemd-logind[1529]: New session 6 of user core. Jul 7 06:10:43.385421 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 7 06:10:43.441926 sshd[1751]: Connection closed by 10.0.0.1 port 44208 Jul 7 06:10:43.442315 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:43.459228 systemd[1]: sshd@5-10.0.0.94:22-10.0.0.1:44208.service: Deactivated successfully. Jul 7 06:10:43.461351 systemd[1]: session-6.scope: Deactivated successfully. Jul 7 06:10:43.462436 systemd-logind[1529]: Session 6 logged out. Waiting for processes to exit. Jul 7 06:10:43.465993 systemd[1]: Started sshd@6-10.0.0.94:22-10.0.0.1:44210.service - OpenSSH per-connection server daemon (10.0.0.1:44210). Jul 7 06:10:43.466619 systemd-logind[1529]: Removed session 6. Jul 7 06:10:43.523907 sshd[1757]: Accepted publickey for core from 10.0.0.1 port 44210 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:10:43.525562 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:43.530610 systemd-logind[1529]: New session 7 of user core. Jul 7 06:10:43.540264 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 7 06:10:43.602687 sudo[1760]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jul 7 06:10:43.603127 sudo[1760]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:10:43.604486 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 7 06:10:43.606468 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:10:43.622503 sudo[1760]: pam_unix(sudo:session): session closed for user root Jul 7 06:10:43.624685 sshd[1759]: Connection closed by 10.0.0.1 port 44210 Jul 7 06:10:43.625295 sshd-session[1757]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:43.634759 systemd[1]: sshd@6-10.0.0.94:22-10.0.0.1:44210.service: Deactivated successfully. Jul 7 06:10:43.637243 systemd[1]: session-7.scope: Deactivated successfully. Jul 7 06:10:43.638074 systemd-logind[1529]: Session 7 logged out. Waiting for processes to exit. 
Jul 7 06:10:43.641468 systemd[1]: Started sshd@7-10.0.0.94:22-10.0.0.1:44218.service - OpenSSH per-connection server daemon (10.0.0.1:44218). Jul 7 06:10:43.642285 systemd-logind[1529]: Removed session 7. Jul 7 06:10:43.698129 sshd[1769]: Accepted publickey for core from 10.0.0.1 port 44218 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:10:43.700347 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:43.706207 systemd-logind[1529]: New session 8 of user core. Jul 7 06:10:43.715303 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 7 06:10:43.772067 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jul 7 06:10:43.772787 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:10:44.056231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:10:44.061787 (kubelet)[1780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:10:44.257785 sudo[1773]: pam_unix(sudo:session): session closed for user root Jul 7 06:10:44.267659 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jul 7 06:10:44.268149 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:10:44.285622 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jul 7 06:10:44.292006 kubelet[1780]: E0707 06:10:44.291918 1780 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:10:44.300871 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:10:44.301171 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:10:44.301795 systemd[1]: kubelet.service: Consumed 398ms CPU time, 111.2M memory peak. Jul 7 06:10:44.349339 augenrules[1808]: No rules Jul 7 06:10:44.351479 systemd[1]: audit-rules.service: Deactivated successfully. Jul 7 06:10:44.351824 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jul 7 06:10:44.353532 sudo[1772]: pam_unix(sudo:session): session closed for user root Jul 7 06:10:44.357061 sshd[1771]: Connection closed by 10.0.0.1 port 44218 Jul 7 06:10:44.357485 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Jul 7 06:10:44.368566 systemd[1]: sshd@7-10.0.0.94:22-10.0.0.1:44218.service: Deactivated successfully. Jul 7 06:10:44.371420 systemd[1]: session-8.scope: Deactivated successfully. Jul 7 06:10:44.372412 systemd-logind[1529]: Session 8 logged out. Waiting for processes to exit. Jul 7 06:10:44.378899 systemd[1]: Started sshd@8-10.0.0.94:22-10.0.0.1:44228.service - OpenSSH per-connection server daemon (10.0.0.1:44228). Jul 7 06:10:44.379628 systemd-logind[1529]: Removed session 8. Jul 7 06:10:44.441666 sshd[1817]: Accepted publickey for core from 10.0.0.1 port 44228 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:10:44.443380 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:10:44.448342 systemd-logind[1529]: New session 9 of user core. 
Jul 7 06:10:44.460014 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 7 06:10:44.517146 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 7 06:10:44.517553 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jul 7 06:10:44.876664 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 7 06:10:44.897642 (dockerd)[1840]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jul 7 06:10:45.171150 dockerd[1840]: time="2025-07-07T06:10:45.170914958Z" level=info msg="Starting up" Jul 7 06:10:45.173481 dockerd[1840]: time="2025-07-07T06:10:45.173415968Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jul 7 06:10:45.330742 dockerd[1840]: time="2025-07-07T06:10:45.330644458Z" level=info msg="Loading containers: start." Jul 7 06:10:45.342153 kernel: Initializing XFRM netlink socket Jul 7 06:10:45.703676 systemd-networkd[1508]: docker0: Link UP Jul 7 06:10:45.712303 dockerd[1840]: time="2025-07-07T06:10:45.712217473Z" level=info msg="Loading containers: done." Jul 7 06:10:45.731027 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2057856821-merged.mount: Deactivated successfully. Jul 7 06:10:45.733015 dockerd[1840]: time="2025-07-07T06:10:45.732945756Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 7 06:10:45.733259 dockerd[1840]: time="2025-07-07T06:10:45.733066152Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Jul 7 06:10:45.733309 dockerd[1840]: time="2025-07-07T06:10:45.733276306Z" level=info msg="Initializing buildkit" Jul 7 06:10:45.771963 dockerd[1840]: time="2025-07-07T06:10:45.771847501Z" level=info msg="Completed buildkit initialization" Jul 7 06:10:45.778121 dockerd[1840]: time="2025-07-07T06:10:45.778052648Z" level=info msg="Daemon has completed initialization" Jul 7 06:10:45.778265 dockerd[1840]: time="2025-07-07T06:10:45.778133850Z" level=info msg="API listen on /run/docker.sock" Jul 7 06:10:45.778361 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 7 06:10:46.708348 containerd[1591]: time="2025-07-07T06:10:46.708296708Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\"" Jul 7 06:10:48.236049 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2635702817.mount: Deactivated successfully. 
Jul 7 06:10:49.701218 containerd[1591]: time="2025-07-07T06:10:49.701140107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:49.702054 containerd[1591]: time="2025-07-07T06:10:49.701978530Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.6: active requests=0, bytes read=28799045" Jul 7 06:10:49.703354 containerd[1591]: time="2025-07-07T06:10:49.703302232Z" level=info msg="ImageCreate event name:\"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:49.706255 containerd[1591]: time="2025-07-07T06:10:49.706195598Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:49.707388 containerd[1591]: time="2025-07-07T06:10:49.707346186Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.6\" with image id \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:0f5764551d7de4ef70489ff8a70f32df7dea00701f5545af089b60bc5ede4f6f\", size \"28795845\" in 2.999011677s" Jul 7 06:10:49.707388 containerd[1591]: time="2025-07-07T06:10:49.707381632Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.6\" returns image reference \"sha256:8c5b95b1b5cb4a908fcbbbe81697c57019f9e9d89bfb5e0355235d440b7a6aa9\"" Jul 7 06:10:49.708143 containerd[1591]: time="2025-07-07T06:10:49.708054134Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\"" Jul 7 06:10:51.559308 containerd[1591]: time="2025-07-07T06:10:51.559212897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:51.560187 containerd[1591]: time="2025-07-07T06:10:51.560120178Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.6: active requests=0, bytes read=24783912" Jul 7 06:10:51.561614 containerd[1591]: time="2025-07-07T06:10:51.561556071Z" level=info msg="ImageCreate event name:\"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:51.564419 containerd[1591]: time="2025-07-07T06:10:51.564360350Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:51.565571 containerd[1591]: time="2025-07-07T06:10:51.565520626Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.6\" with image id \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3425f29c94a77d74cb89f38413e6274277dcf5e2bc7ab6ae953578a91e9e8356\", size \"26385746\" in 1.857422711s" Jul 7 06:10:51.565571 containerd[1591]: time="2025-07-07T06:10:51.565569879Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.6\" returns image reference \"sha256:77d0e7de0c6b41e2331c3997698c3f917527cf7bbe462f5c813f514e788436de\"" Jul 7 06:10:51.566176 containerd[1591]: 
time="2025-07-07T06:10:51.566148694Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\"" Jul 7 06:10:53.476969 containerd[1591]: time="2025-07-07T06:10:53.476887241Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:53.477965 containerd[1591]: time="2025-07-07T06:10:53.477901473Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.6: active requests=0, bytes read=19176916" Jul 7 06:10:53.479225 containerd[1591]: time="2025-07-07T06:10:53.479190390Z" level=info msg="ImageCreate event name:\"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:53.483413 containerd[1591]: time="2025-07-07T06:10:53.483360771Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:53.484628 containerd[1591]: time="2025-07-07T06:10:53.484597410Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.6\" with image id \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:130f633cbd1d70e2f4655350153cb3fc469f4d5a6310b4f0b49d93fb2ba2132b\", size \"20778768\" in 1.91841903s" Jul 7 06:10:53.484687 containerd[1591]: time="2025-07-07T06:10:53.484632185Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.6\" returns image reference \"sha256:b34d1cd163151c2491919f315274d85bff904721213f2b19341b403a28a39ae2\"" Jul 7 06:10:53.485095 containerd[1591]: time="2025-07-07T06:10:53.485068935Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\"" Jul 7 06:10:54.515483 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 7 06:10:54.517584 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:10:54.622246 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3695144030.mount: Deactivated successfully. Jul 7 06:10:54.745180 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:10:54.761465 (kubelet)[2125]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jul 7 06:10:55.371339 kubelet[2125]: E0707 06:10:55.371254 2125 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 7 06:10:55.376319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 7 06:10:55.376557 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 7 06:10:55.377058 systemd[1]: kubelet.service: Consumed 302ms CPU time, 110.4M memory peak. 
Jul 7 06:10:55.846143 containerd[1591]: time="2025-07-07T06:10:55.846027184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:55.847126 containerd[1591]: time="2025-07-07T06:10:55.847039112Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.6: active requests=0, bytes read=30895363" Jul 7 06:10:55.848501 containerd[1591]: time="2025-07-07T06:10:55.848426624Z" level=info msg="ImageCreate event name:\"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:55.850884 containerd[1591]: time="2025-07-07T06:10:55.850826975Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:55.851446 containerd[1591]: time="2025-07-07T06:10:55.851402204Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.6\" with image id \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\", repo tag \"registry.k8s.io/kube-proxy:v1.32.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:b13d9da413b983d130bf090b83fce12e1ccc704e95f366da743c18e964d9d7e9\", size \"30894382\" in 2.366309084s" Jul 7 06:10:55.851446 containerd[1591]: time="2025-07-07T06:10:55.851432922Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.6\" returns image reference \"sha256:63f0cbe3b7339c5d006efc9964228e48271bae73039320037c451b5e8f763e02\"" Jul 7 06:10:55.852025 containerd[1591]: time="2025-07-07T06:10:55.851994795Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jul 7 06:10:56.557488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3569829008.mount: Deactivated successfully. 
Jul 7 06:10:57.654607 containerd[1591]: time="2025-07-07T06:10:57.654520685Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:57.655534 containerd[1591]: time="2025-07-07T06:10:57.655478611Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jul 7 06:10:57.656850 containerd[1591]: time="2025-07-07T06:10:57.656805460Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:57.660446 containerd[1591]: time="2025-07-07T06:10:57.660364574Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:10:57.661580 containerd[1591]: time="2025-07-07T06:10:57.661532665Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.809474761s" Jul 7 06:10:57.661580 containerd[1591]: time="2025-07-07T06:10:57.661577579Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jul 7 06:10:57.662125 containerd[1591]: time="2025-07-07T06:10:57.662058681Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jul 7 06:10:58.243688 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1029605526.mount: Deactivated successfully. 
Jul 7 06:10:58.249857 containerd[1591]: time="2025-07-07T06:10:58.249775891Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:10:58.250760 containerd[1591]: time="2025-07-07T06:10:58.250721374Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jul 7 06:10:58.252167 containerd[1591]: time="2025-07-07T06:10:58.252092375Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:10:58.255756 containerd[1591]: time="2025-07-07T06:10:58.255706463Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 7 06:10:58.256698 containerd[1591]: time="2025-07-07T06:10:58.256622962Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 594.529164ms" Jul 7 06:10:58.256698 containerd[1591]: time="2025-07-07T06:10:58.256685268Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jul 7 06:10:58.257524 containerd[1591]: time="2025-07-07T06:10:58.257474549Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jul 7 06:10:59.060786 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539536050.mount: Deactivated successfully. 
Jul 7 06:11:01.378412 containerd[1591]: time="2025-07-07T06:11:01.378340105Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:01.379281 containerd[1591]: time="2025-07-07T06:11:01.379215748Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Jul 7 06:11:01.380490 containerd[1591]: time="2025-07-07T06:11:01.380452317Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:01.383576 containerd[1591]: time="2025-07-07T06:11:01.383525139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:01.384826 containerd[1591]: time="2025-07-07T06:11:01.384758422Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 3.127246774s" Jul 7 06:11:01.384826 containerd[1591]: time="2025-07-07T06:11:01.384812333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jul 7 06:11:03.241187 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:11:03.241411 systemd[1]: kubelet.service: Consumed 302ms CPU time, 110.4M memory peak. Jul 7 06:11:03.244551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:11:03.274371 systemd[1]: Reload requested from client PID 2277 ('systemctl') (unit session-9.scope)... Jul 7 06:11:03.274382 systemd[1]: Reloading... Jul 7 06:11:03.386158 zram_generator::config[2322]: No configuration found. Jul 7 06:11:03.613472 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:11:03.744169 systemd[1]: Reloading finished in 469 ms. Jul 7 06:11:03.827261 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jul 7 06:11:03.827395 systemd[1]: kubelet.service: Failed with result 'signal'. Jul 7 06:11:03.827782 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:11:03.827843 systemd[1]: kubelet.service: Consumed 170ms CPU time, 98.2M memory peak. Jul 7 06:11:03.829816 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:11:04.014848 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:11:04.020928 (kubelet)[2367]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:11:04.064906 kubelet[2367]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:11:04.064906 kubelet[2367]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Jul 7 06:11:04.064906 kubelet[2367]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:11:04.065368 kubelet[2367]: I0707 06:11:04.064957 2367 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:11:04.476502 kubelet[2367]: I0707 06:11:04.476345 2367 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:11:04.476502 kubelet[2367]: I0707 06:11:04.476383 2367 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:11:04.476666 kubelet[2367]: I0707 06:11:04.476650 2367 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:11:04.504369 kubelet[2367]: E0707 06:11:04.504302 2367 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.0.94:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:11:04.505180 kubelet[2367]: I0707 06:11:04.505157 2367 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:11:04.512422 kubelet[2367]: I0707 06:11:04.512388 2367 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:11:04.518912 kubelet[2367]: I0707 06:11:04.518869 2367 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 7 06:11:04.521742 kubelet[2367]: I0707 06:11:04.521659 2367 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:11:04.521957 kubelet[2367]: I0707 06:11:04.521720 2367 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:11:04.522379 kubelet[2367]: I0707 06:11:04.521960 2367 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:11:04.522379 kubelet[2367]: I0707 06:11:04.521975 2367 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:11:04.522379 kubelet[2367]: I0707 06:11:04.522186 2367 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:11:04.526052 kubelet[2367]: I0707 06:11:04.526005 2367 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:11:04.526139 kubelet[2367]: I0707 06:11:04.526060 2367 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:11:04.526139 kubelet[2367]: I0707 06:11:04.526134 2367 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:11:04.526213 kubelet[2367]: I0707 06:11:04.526152 2367 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:11:04.529641 kubelet[2367]: I0707 06:11:04.529598 2367 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:11:04.530033 kubelet[2367]: I0707 06:11:04.529986 2367 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 7 06:11:04.531839 kubelet[2367]: W0707 06:11:04.531272 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 7 06:11:04.531839 kubelet[2367]: E0707 06:11:04.531341 2367 reflector.go:166] "Unhandled Error" 
err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:11:04.531839 kubelet[2367]: W0707 06:11:04.531404 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 7 06:11:04.531839 kubelet[2367]: E0707 06:11:04.531433 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:11:04.535046 kubelet[2367]: W0707 06:11:04.534990 2367 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 7 06:11:04.537733 kubelet[2367]: I0707 06:11:04.537692 2367 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:11:04.537803 kubelet[2367]: I0707 06:11:04.537750 2367 server.go:1287] "Started kubelet" Jul 7 06:11:04.538909 kubelet[2367]: I0707 06:11:04.538838 2367 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:11:04.540082 kubelet[2367]: I0707 06:11:04.539430 2367 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:11:04.540082 kubelet[2367]: I0707 06:11:04.539516 2367 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:11:04.540082 kubelet[2367]: I0707 06:11:04.539670 2367 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:11:04.541314 kubelet[2367]: I0707 06:11:04.540562 2367 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:11:04.542166 kubelet[2367]: I0707 06:11:04.541923 2367 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:11:04.542276 kubelet[2367]: E0707 06:11:04.542204 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:11:04.542276 kubelet[2367]: I0707 06:11:04.542255 2367 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:11:04.542476 kubelet[2367]: I0707 06:11:04.542453 2367 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:11:04.542562 kubelet[2367]: I0707 06:11:04.542518 2367 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:11:04.542889 kubelet[2367]: W0707 06:11:04.542834 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 7 06:11:04.542976 kubelet[2367]: E0707 06:11:04.542903 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.0.94:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection 
refused" logger="UnhandledError" Jul 7 06:11:04.543478 kubelet[2367]: E0707 06:11:04.543449 2367 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:11:04.544968 kubelet[2367]: E0707 06:11:04.542845 2367 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.0.94:6443/api/v1/namespaces/default/events\": dial tcp 10.0.0.94:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.184fe344c2e89a03 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-07-07 06:11:04.537717251 +0000 UTC m=+0.511711333,LastTimestamp:2025-07-07 06:11:04.537717251 +0000 UTC m=+0.511711333,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jul 7 06:11:04.545434 kubelet[2367]: E0707 06:11:04.545398 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="200ms" Jul 7 06:11:04.545885 kubelet[2367]: I0707 06:11:04.545856 2367 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:11:04.545992 kubelet[2367]: I0707 06:11:04.545965 2367 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:11:04.548565 kubelet[2367]: I0707 06:11:04.548536 2367 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:11:04.560299 kubelet[2367]: I0707 06:11:04.560210 2367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:11:04.561703 kubelet[2367]: I0707 06:11:04.561651 2367 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:11:04.561703 kubelet[2367]: I0707 06:11:04.561691 2367 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:11:04.561773 kubelet[2367]: I0707 06:11:04.561719 2367 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jul 7 06:11:04.561773 kubelet[2367]: I0707 06:11:04.561728 2367 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:11:04.561853 kubelet[2367]: E0707 06:11:04.561800 2367 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:11:04.567049 kubelet[2367]: W0707 06:11:04.566988 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 7 06:11:04.567127 kubelet[2367]: E0707 06:11:04.567067 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:11:04.569159 kubelet[2367]: I0707 06:11:04.569080 2367 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:11:04.569483 kubelet[2367]: I0707 06:11:04.569235 2367 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:11:04.569483 kubelet[2367]: I0707 06:11:04.569258 2367 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:11:04.643000 kubelet[2367]: E0707 06:11:04.642931 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:11:04.662497 kubelet[2367]: E0707 06:11:04.662422 2367 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:11:04.743913 kubelet[2367]: E0707 06:11:04.743722 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:11:04.746514 kubelet[2367]: E0707 06:11:04.746463 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="400ms" Jul 7 06:11:04.844894 kubelet[2367]: E0707 06:11:04.844829 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:11:04.863289 kubelet[2367]: E0707 06:11:04.863224 2367 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 7 06:11:04.945802 kubelet[2367]: E0707 06:11:04.945704 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:11:05.025831 kubelet[2367]: I0707 06:11:05.025759 2367 policy_none.go:49] "None policy: Start" Jul 7 06:11:05.025831 kubelet[2367]: I0707 06:11:05.025825 2367 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:11:05.025831 kubelet[2367]: I0707 06:11:05.025851 2367 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:11:05.035387 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 7 06:11:05.046751 kubelet[2367]: E0707 06:11:05.046702 2367 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"localhost\" not found" Jul 7 06:11:05.050584 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
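The repeated `Failed to ensure lease exists, will retry` entries back off from interval="200ms" to "400ms" (and, further down, "800ms" and "1.6s"): the retry interval doubles after each consecutive failure. A sketch of that doubling schedule, assuming the 200ms base seen in the log; the cap value is purely illustrative, not taken from the log:

```go
package main

import (
	"fmt"
	"time"
)

// nextInterval doubles the retry interval after a failure, up to a limit.
func nextInterval(cur, limit time.Duration) time.Duration {
	if cur*2 > limit {
		return limit
	}
	return cur * 2
}

func main() {
	interval := 200 * time.Millisecond // first retry interval in the log
	limit := 7 * time.Second           // illustrative cap, not from the log
	for i := 0; i < 6; i++ {
		fmt.Println(interval) // 200ms, 400ms, 800ms, 1.6s, ...
		interval = nextInterval(interval, limit)
	}
}
```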
Jul 7 06:11:05.054885 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jul 7 06:11:05.077769 kubelet[2367]: I0707 06:11:05.077711 2367 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:11:05.078241 kubelet[2367]: I0707 06:11:05.078167 2367 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:11:05.078241 kubelet[2367]: I0707 06:11:05.078180 2367 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:11:05.078783 kubelet[2367]: I0707 06:11:05.078752 2367 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:11:05.079638 kubelet[2367]: E0707 06:11:05.079605 2367 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jul 7 06:11:05.079706 kubelet[2367]: E0707 06:11:05.079660 2367 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jul 7 06:11:05.147561 kubelet[2367]: E0707 06:11:05.147513 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="800ms" Jul 7 06:11:05.179884 kubelet[2367]: I0707 06:11:05.179853 2367 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:11:05.180360 kubelet[2367]: E0707 06:11:05.180308 2367 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Jul 7 06:11:05.275705 systemd[1]: Created slice kubepods-burstable-pod0a6ec25e7544dd24f1d3793e661cee37.slice - libcontainer container kubepods-burstable-pod0a6ec25e7544dd24f1d3793e661cee37.slice. Jul 7 06:11:05.288369 kubelet[2367]: E0707 06:11:05.288324 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:11:05.292672 systemd[1]: Created slice kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice - libcontainer container kubepods-burstable-podd1af03769b64da1b1e8089a7035018fc.slice. Jul 7 06:11:05.304078 kubelet[2367]: E0707 06:11:05.304023 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:11:05.308080 systemd[1]: Created slice kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice - libcontainer container kubepods-burstable-pod8a75e163f27396b2168da0f88f85f8a5.slice. 
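The eviction manager that just started its control loop enforces the `HardEvictionThresholds` from the container-manager config logged earlier: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A simplified model of how one such LessThan threshold is evaluated; the types mirror the shape of the logged JSON but are an illustration, not kubelet's actual implementation:

```go
package main

import "fmt"

// Threshold mirrors one HardEvictionThresholds entry from the logged
// container-manager config (simplified; Operator is always LessThan here).
type Threshold struct {
	Signal     string
	Quantity   int64   // absolute bytes; 0 when percentage-based
	Percentage float64 // fraction of capacity; 0 when quantity-based
}

// exceeded reports whether an observed value has fallen below the threshold.
func exceeded(t Threshold, observed, capacity int64) bool {
	limit := t.Quantity
	if t.Percentage > 0 {
		limit = int64(t.Percentage * float64(capacity))
	}
	return observed < limit
}

func main() {
	memAvailable := Threshold{Signal: "memory.available", Quantity: 100 << 20} // 100Mi
	nodefsAvailable := Threshold{Signal: "nodefs.available", Percentage: 0.1}  // 10%

	fmt.Println(exceeded(memAvailable, 90<<20, 8<<30))    // true: 90Mi < 100Mi
	fmt.Println(exceeded(nodefsAvailable, 5<<30, 40<<30)) // false: 5Gi >= 4Gi (10% of 40Gi)
}
```

This also explains the adjacent "no imagefs label for configured runtime" message: the imagefs thresholds are simply skipped until stats for a separate image filesystem exist.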
Jul 7 06:11:05.310353 kubelet[2367]: E0707 06:11:05.310311 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:11:05.349081 kubelet[2367]: I0707 06:11:05.349010 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a6ec25e7544dd24f1d3793e661cee37-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a6ec25e7544dd24f1d3793e661cee37\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:05.349081 kubelet[2367]: I0707 06:11:05.349058 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:05.349081 kubelet[2367]: I0707 06:11:05.349082 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:05.349081 kubelet[2367]: I0707 06:11:05.349123 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a6ec25e7544dd24f1d3793e661cee37-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a6ec25e7544dd24f1d3793e661cee37\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:05.349344 kubelet[2367]: I0707 06:11:05.349179 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:05.349344 kubelet[2367]: I0707 06:11:05.349237 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:05.349344 kubelet[2367]: I0707 06:11:05.349254 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:05.349344 kubelet[2367]: I0707 06:11:05.349295 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:11:05.349344 kubelet[2367]: I0707 06:11:05.349320 2367 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0a6ec25e7544dd24f1d3793e661cee37-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a6ec25e7544dd24f1d3793e661cee37\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:05.349452 kubelet[2367]: W0707 06:11:05.349318 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 7 06:11:05.349452 kubelet[2367]: E0707 06:11:05.349375 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.0.94:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:11:05.382174 kubelet[2367]: I0707 06:11:05.382133 2367 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:11:05.382546 kubelet[2367]: E0707 06:11:05.382511 2367 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Jul 7 06:11:05.413137 kubelet[2367]: W0707 06:11:05.413050 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 7 06:11:05.413197 kubelet[2367]: E0707 06:11:05.413149 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.0.94:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:11:05.589179 kubelet[2367]: E0707 06:11:05.589001 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:05.589846 containerd[1591]: time="2025-07-07T06:11:05.589673687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a6ec25e7544dd24f1d3793e661cee37,Namespace:kube-system,Attempt:0,}" Jul 7 06:11:05.605023 kubelet[2367]: E0707 06:11:05.604954 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:05.605807 containerd[1591]: time="2025-07-07T06:11:05.605770464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,}" Jul 7 06:11:05.611560 kubelet[2367]: E0707 06:11:05.611522 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:05.612033 containerd[1591]: time="2025-07-07T06:11:05.611989502Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,}" Jul 7 06:11:05.620913 containerd[1591]: 
time="2025-07-07T06:11:05.620856949Z" level=info msg="connecting to shim c9f1fcb0fd62a31665a52a75ac16cbe971966abed42f86bba51777509fb3aa3b" address="unix:///run/containerd/s/c043bd0b6a9338f806a32df650802cdf4957e23abfdda3d5f7ba004d114c118e" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:05.675289 containerd[1591]: time="2025-07-07T06:11:05.675228459Z" level=info msg="connecting to shim 3c1d4ed301a7ea612a884c302e1ea5a27c86e74a5014e64758249af893268138" address="unix:///run/containerd/s/4557ebc04a9194097af23676599de1015113e53e621ad58f1c4885001d28485f" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:05.712603 containerd[1591]: time="2025-07-07T06:11:05.712522208Z" level=info msg="connecting to shim c408180860f51e8e6b3662c3e4d17641eaa9f392666c3061dde4b0b2baba5b54" address="unix:///run/containerd/s/1949e621f9f4ae6e1bd18bcc7021356ac0ac4b462c5002c683e4687d12356f84" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:05.734315 systemd[1]: Started cri-containerd-3c1d4ed301a7ea612a884c302e1ea5a27c86e74a5014e64758249af893268138.scope - libcontainer container 3c1d4ed301a7ea612a884c302e1ea5a27c86e74a5014e64758249af893268138. Jul 7 06:11:05.739303 systemd[1]: Started cri-containerd-c9f1fcb0fd62a31665a52a75ac16cbe971966abed42f86bba51777509fb3aa3b.scope - libcontainer container c9f1fcb0fd62a31665a52a75ac16cbe971966abed42f86bba51777509fb3aa3b. Jul 7 06:11:05.750993 kubelet[2367]: W0707 06:11:05.750908 2367 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.0.94:6443: connect: connection refused Jul 7 06:11:05.752004 kubelet[2367]: E0707 06:11:05.750998 2367 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.0.94:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.0.94:6443: connect: connection refused" logger="UnhandledError" Jul 7 06:11:05.757271 systemd[1]: Started cri-containerd-c408180860f51e8e6b3662c3e4d17641eaa9f392666c3061dde4b0b2baba5b54.scope - libcontainer container c408180860f51e8e6b3662c3e4d17641eaa9f392666c3061dde4b0b2baba5b54. 
Jul 7 06:11:05.785863 kubelet[2367]: I0707 06:11:05.784929 2367 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:11:05.785863 kubelet[2367]: E0707 06:11:05.785476 2367 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.0.94:6443/api/v1/nodes\": dial tcp 10.0.0.94:6443: connect: connection refused" node="localhost" Jul 7 06:11:05.842888 containerd[1591]: time="2025-07-07T06:11:05.842724911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:0a6ec25e7544dd24f1d3793e661cee37,Namespace:kube-system,Attempt:0,} returns sandbox id \"c9f1fcb0fd62a31665a52a75ac16cbe971966abed42f86bba51777509fb3aa3b\"" Jul 7 06:11:05.845411 kubelet[2367]: E0707 06:11:05.845372 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:05.848712 containerd[1591]: time="2025-07-07T06:11:05.848680835Z" level=info msg="CreateContainer within sandbox \"c9f1fcb0fd62a31665a52a75ac16cbe971966abed42f86bba51777509fb3aa3b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 7 06:11:05.859140 containerd[1591]: time="2025-07-07T06:11:05.858745975Z" level=info msg="Container 5a45e42f60cd2206782884eca227db5d3a82ac63874fc5c6f29bd4534e7481f4: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:05.867251 containerd[1591]: time="2025-07-07T06:11:05.866357666Z" level=info msg="CreateContainer within sandbox \"c9f1fcb0fd62a31665a52a75ac16cbe971966abed42f86bba51777509fb3aa3b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5a45e42f60cd2206782884eca227db5d3a82ac63874fc5c6f29bd4534e7481f4\"" Jul 7 06:11:05.867251 containerd[1591]: time="2025-07-07T06:11:05.867033302Z" level=info msg="StartContainer for \"5a45e42f60cd2206782884eca227db5d3a82ac63874fc5c6f29bd4534e7481f4\"" Jul 7 06:11:05.870439 containerd[1591]: time="2025-07-07T06:11:05.869026882Z" level=info msg="connecting to shim 5a45e42f60cd2206782884eca227db5d3a82ac63874fc5c6f29bd4534e7481f4" address="unix:///run/containerd/s/c043bd0b6a9338f806a32df650802cdf4957e23abfdda3d5f7ba004d114c118e" protocol=ttrpc version=3 Jul 7 06:11:05.910391 systemd[1]: Started cri-containerd-5a45e42f60cd2206782884eca227db5d3a82ac63874fc5c6f29bd4534e7481f4.scope - libcontainer container 5a45e42f60cd2206782884eca227db5d3a82ac63874fc5c6f29bd4534e7481f4. 
Jul 7 06:11:05.948309 kubelet[2367]: E0707 06:11:05.948253 2367 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.0.94:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 10.0.0.94:6443: connect: connection refused" interval="1.6s" Jul 7 06:11:06.088566 containerd[1591]: time="2025-07-07T06:11:06.088476311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:8a75e163f27396b2168da0f88f85f8a5,Namespace:kube-system,Attempt:0,} returns sandbox id \"3c1d4ed301a7ea612a884c302e1ea5a27c86e74a5014e64758249af893268138\"" Jul 7 06:11:06.089907 kubelet[2367]: E0707 06:11:06.089227 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:06.091706 containerd[1591]: time="2025-07-07T06:11:06.090359152Z" level=info msg="StartContainer for \"5a45e42f60cd2206782884eca227db5d3a82ac63874fc5c6f29bd4534e7481f4\" returns successfully" Jul 7 06:11:06.092005 containerd[1591]: time="2025-07-07T06:11:06.091941202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:d1af03769b64da1b1e8089a7035018fc,Namespace:kube-system,Attempt:0,} returns sandbox id \"c408180860f51e8e6b3662c3e4d17641eaa9f392666c3061dde4b0b2baba5b54\"" Jul 7 06:11:06.092801 containerd[1591]: time="2025-07-07T06:11:06.092740951Z" level=info msg="CreateContainer within sandbox \"3c1d4ed301a7ea612a884c302e1ea5a27c86e74a5014e64758249af893268138\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 7 06:11:06.094165 kubelet[2367]: E0707 06:11:06.094054 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:06.096483 containerd[1591]: time="2025-07-07T06:11:06.096442412Z" level=info msg="CreateContainer within sandbox \"c408180860f51e8e6b3662c3e4d17641eaa9f392666c3061dde4b0b2baba5b54\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 7 06:11:06.104164 containerd[1591]: time="2025-07-07T06:11:06.104090029Z" level=info msg="Container a3270e7ffa0a904011d95662e3c096fab925a7c827072c4b236d6f0a46f9ef21: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:06.111867 containerd[1591]: time="2025-07-07T06:11:06.111817934Z" level=info msg="CreateContainer within sandbox \"3c1d4ed301a7ea612a884c302e1ea5a27c86e74a5014e64758249af893268138\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a3270e7ffa0a904011d95662e3c096fab925a7c827072c4b236d6f0a46f9ef21\"" Jul 7 06:11:06.112595 containerd[1591]: time="2025-07-07T06:11:06.112551861Z" level=info msg="StartContainer for \"a3270e7ffa0a904011d95662e3c096fab925a7c827072c4b236d6f0a46f9ef21\"" Jul 7 06:11:06.114768 containerd[1591]: time="2025-07-07T06:11:06.114731629Z" level=info msg="Container ec1e31225b717ac1c1da66ea78cae4e9496ee1fa29df0da7d309d49283797950: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:06.114848 containerd[1591]: time="2025-07-07T06:11:06.114750158Z" level=info msg="connecting to shim a3270e7ffa0a904011d95662e3c096fab925a7c827072c4b236d6f0a46f9ef21" address="unix:///run/containerd/s/4557ebc04a9194097af23676599de1015113e53e621ad58f1c4885001d28485f" protocol=ttrpc version=3 Jul 7 06:11:06.124169 containerd[1591]: time="2025-07-07T06:11:06.123834534Z" level=info msg="CreateContainer within 
sandbox \"c408180860f51e8e6b3662c3e4d17641eaa9f392666c3061dde4b0b2baba5b54\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"ec1e31225b717ac1c1da66ea78cae4e9496ee1fa29df0da7d309d49283797950\"" Jul 7 06:11:06.124951 containerd[1591]: time="2025-07-07T06:11:06.124901463Z" level=info msg="StartContainer for \"ec1e31225b717ac1c1da66ea78cae4e9496ee1fa29df0da7d309d49283797950\"" Jul 7 06:11:06.125914 containerd[1591]: time="2025-07-07T06:11:06.125865573Z" level=info msg="connecting to shim ec1e31225b717ac1c1da66ea78cae4e9496ee1fa29df0da7d309d49283797950" address="unix:///run/containerd/s/1949e621f9f4ae6e1bd18bcc7021356ac0ac4b462c5002c683e4687d12356f84" protocol=ttrpc version=3 Jul 7 06:11:06.196276 systemd[1]: Started cri-containerd-ec1e31225b717ac1c1da66ea78cae4e9496ee1fa29df0da7d309d49283797950.scope - libcontainer container ec1e31225b717ac1c1da66ea78cae4e9496ee1fa29df0da7d309d49283797950. Jul 7 06:11:06.199876 systemd[1]: Started cri-containerd-a3270e7ffa0a904011d95662e3c096fab925a7c827072c4b236d6f0a46f9ef21.scope - libcontainer container a3270e7ffa0a904011d95662e3c096fab925a7c827072c4b236d6f0a46f9ef21. Jul 7 06:11:06.269819 containerd[1591]: time="2025-07-07T06:11:06.269766134Z" level=info msg="StartContainer for \"ec1e31225b717ac1c1da66ea78cae4e9496ee1fa29df0da7d309d49283797950\" returns successfully" Jul 7 06:11:06.272596 containerd[1591]: time="2025-07-07T06:11:06.272508129Z" level=info msg="StartContainer for \"a3270e7ffa0a904011d95662e3c096fab925a7c827072c4b236d6f0a46f9ef21\" returns successfully" Jul 7 06:11:06.579402 kubelet[2367]: E0707 06:11:06.578909 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:11:06.579402 kubelet[2367]: E0707 06:11:06.579050 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:06.586695 kubelet[2367]: E0707 06:11:06.586596 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:11:06.587147 kubelet[2367]: E0707 06:11:06.586760 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:06.587147 kubelet[2367]: E0707 06:11:06.586961 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:11:06.587147 kubelet[2367]: E0707 06:11:06.587045 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:06.587300 kubelet[2367]: I0707 06:11:06.587274 2367 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:11:07.563619 kubelet[2367]: E0707 06:11:07.563570 2367 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jul 7 06:11:07.587546 kubelet[2367]: E0707 06:11:07.587507 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:11:07.587704 kubelet[2367]: E0707 06:11:07.587632 2367 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:07.587704 kubelet[2367]: E0707 06:11:07.587683 2367 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jul 7 06:11:07.587840 kubelet[2367]: E0707 06:11:07.587817 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:07.738174 kubelet[2367]: I0707 06:11:07.737995 2367 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 06:11:07.738174 kubelet[2367]: E0707 06:11:07.738042 2367 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jul 7 06:11:07.745536 kubelet[2367]: I0707 06:11:07.745437 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:07.755463 kubelet[2367]: E0707 06:11:07.755286 2367 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:07.756303 kubelet[2367]: I0707 06:11:07.755545 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:07.757545 kubelet[2367]: E0707 06:11:07.757494 2367 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:07.757545 kubelet[2367]: I0707 06:11:07.757516 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:11:07.759387 kubelet[2367]: E0707 06:11:07.759349 2367 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jul 7 06:11:08.537941 kubelet[2367]: I0707 06:11:08.537894 2367 apiserver.go:52] "Watching apiserver" Jul 7 06:11:08.543180 kubelet[2367]: I0707 06:11:08.543132 2367 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:11:09.022974 kubelet[2367]: I0707 06:11:09.022932 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:11:09.027038 kubelet[2367]: I0707 06:11:09.027008 2367 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:09.028701 kubelet[2367]: E0707 06:11:09.028652 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:09.030892 kubelet[2367]: E0707 06:11:09.030853 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:09.533275 systemd[1]: Reload requested from client PID 2642 ('systemctl') (unit session-9.scope)... Jul 7 06:11:09.533291 systemd[1]: Reloading... 
Jul 7 06:11:09.591895 kubelet[2367]: E0707 06:11:09.591846 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:09.592151 kubelet[2367]: E0707 06:11:09.592089 2367 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:09.683140 zram_generator::config[2685]: No configuration found. Jul 7 06:11:09.809015 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 7 06:11:09.959536 systemd[1]: Reloading finished in 425 ms. Jul 7 06:11:09.986273 kubelet[2367]: I0707 06:11:09.986211 2367 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:11:09.986448 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:11:10.005926 systemd[1]: kubelet.service: Deactivated successfully. Jul 7 06:11:10.006287 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:11:10.006350 systemd[1]: kubelet.service: Consumed 1.034s CPU time, 131.5M memory peak. Jul 7 06:11:10.009192 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 7 06:11:10.242845 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 7 06:11:10.253644 (kubelet)[2730]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jul 7 06:11:10.309094 kubelet[2730]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:11:10.309094 kubelet[2730]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jul 7 06:11:10.309094 kubelet[2730]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 7 06:11:10.309595 kubelet[2730]: I0707 06:11:10.309188 2730 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 7 06:11:10.319089 kubelet[2730]: I0707 06:11:10.319029 2730 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jul 7 06:11:10.319089 kubelet[2730]: I0707 06:11:10.319077 2730 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 7 06:11:10.319478 kubelet[2730]: I0707 06:11:10.319445 2730 server.go:954] "Client rotation is on, will bootstrap in background" Jul 7 06:11:10.320825 kubelet[2730]: I0707 06:11:10.320796 2730 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
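Unlike the first start, the restarted kubelet finds an existing rotated credential and logs that it is loading `/var/lib/kubelet/pki/kubelet-client-current.pem`, a single file holding both the client certificate and key. A small sketch for inspecting such a pair's expiry, assuming the file does contain both PEM blocks (tls.LoadX509KeyPair accepts the same path for both arguments in that case):

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
)

func main() {
	// Combined cert+key file named in the kubelet log.
	const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"

	pair, err := tls.LoadX509KeyPair(pem, pem)
	if err != nil {
		panic(err)
	}
	leaf, err := x509.ParseCertificate(pair.Certificate[0])
	if err != nil {
		panic(err)
	}
	fmt.Println("subject:", leaf.Subject)
	fmt.Println("expires:", leaf.NotAfter) // rotation should renew before this
}
```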
Jul 7 06:11:10.323475 kubelet[2730]: I0707 06:11:10.323438 2730 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 7 06:11:10.328766 kubelet[2730]: I0707 06:11:10.328741 2730 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jul 7 06:11:10.335547 kubelet[2730]: I0707 06:11:10.335502 2730 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 7 06:11:10.335856 kubelet[2730]: I0707 06:11:10.335800 2730 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 7 06:11:10.336023 kubelet[2730]: I0707 06:11:10.335843 2730 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jul 7 06:11:10.336206 kubelet[2730]: I0707 06:11:10.336029 2730 topology_manager.go:138] "Creating topology manager with none policy" Jul 7 06:11:10.336206 kubelet[2730]: I0707 06:11:10.336039 2730 container_manager_linux.go:304] "Creating device plugin manager" Jul 7 06:11:10.336206 kubelet[2730]: I0707 06:11:10.336151 2730 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:11:10.336354 kubelet[2730]: I0707 06:11:10.336335 2730 kubelet.go:446] "Attempting to sync node with API server" Jul 7 06:11:10.336386 kubelet[2730]: I0707 06:11:10.336362 2730 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 7 06:11:10.336386 kubelet[2730]: I0707 06:11:10.336385 2730 kubelet.go:352] "Adding apiserver pod source" Jul 7 06:11:10.336432 kubelet[2730]: I0707 06:11:10.336394 2730 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 7 06:11:10.337274 kubelet[2730]: I0707 06:11:10.337226 2730 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Jul 7 06:11:10.338284 kubelet[2730]: I0707 06:11:10.338250 2730 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in 
static kubelet mode" Jul 7 06:11:10.339654 kubelet[2730]: I0707 06:11:10.339180 2730 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jul 7 06:11:10.339654 kubelet[2730]: I0707 06:11:10.339257 2730 server.go:1287] "Started kubelet" Jul 7 06:11:10.343484 kubelet[2730]: I0707 06:11:10.343450 2730 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 7 06:11:10.345514 kubelet[2730]: I0707 06:11:10.345444 2730 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jul 7 06:11:10.346530 kubelet[2730]: I0707 06:11:10.346496 2730 server.go:479] "Adding debug handlers to kubelet server" Jul 7 06:11:10.347435 kubelet[2730]: I0707 06:11:10.347379 2730 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 7 06:11:10.347685 kubelet[2730]: I0707 06:11:10.347630 2730 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 7 06:11:10.347845 kubelet[2730]: I0707 06:11:10.347803 2730 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jul 7 06:11:10.348012 kubelet[2730]: I0707 06:11:10.347992 2730 volume_manager.go:297] "Starting Kubelet Volume Manager" Jul 7 06:11:10.348121 kubelet[2730]: I0707 06:11:10.348079 2730 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jul 7 06:11:10.348973 kubelet[2730]: I0707 06:11:10.348934 2730 reconciler.go:26] "Reconciler: start to sync state" Jul 7 06:11:10.350915 kubelet[2730]: I0707 06:11:10.350879 2730 factory.go:221] Registration of the systemd container factory successfully Jul 7 06:11:10.351134 kubelet[2730]: I0707 06:11:10.351060 2730 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 7 06:11:10.352779 kubelet[2730]: E0707 06:11:10.352744 2730 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 7 06:11:10.355501 kubelet[2730]: I0707 06:11:10.355475 2730 factory.go:221] Registration of the containerd container factory successfully Jul 7 06:11:10.362409 kubelet[2730]: I0707 06:11:10.362341 2730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 7 06:11:10.364750 kubelet[2730]: I0707 06:11:10.364717 2730 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 7 06:11:10.365356 kubelet[2730]: I0707 06:11:10.364904 2730 status_manager.go:227] "Starting to sync pod status with apiserver" Jul 7 06:11:10.365356 kubelet[2730]: I0707 06:11:10.364936 2730 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
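Both kubelet starts log `Adding static pod path` for /etc/kubernetes/manifests: any pod manifest placed there is run as a static pod, which is why the control-plane pods above exist before the API server does, and why mirror-pod creation for them was failing earlier. A toy poller over that directory to show the idea; kubelet's real file source is more sophisticated, and the poll period here only echoes its default file-check frequency:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"time"
)

func main() {
	const manifests = "/etc/kubernetes/manifests" // path from the kubelet log

	for {
		entries, err := os.ReadDir(manifests)
		if err != nil {
			fmt.Println("read error:", err)
		}
		for _, e := range entries {
			if filepath.Ext(e.Name()) == ".yaml" {
				fmt.Println("static pod manifest:", e.Name())
			}
		}
		time.Sleep(20 * time.Second) // kubelet's default file-check frequency
	}
}
```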
Jul 7 06:11:10.365356 kubelet[2730]: I0707 06:11:10.364948 2730 kubelet.go:2382] "Starting kubelet main sync loop" Jul 7 06:11:10.365356 kubelet[2730]: E0707 06:11:10.365023 2730 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 7 06:11:10.396741 kubelet[2730]: I0707 06:11:10.396693 2730 cpu_manager.go:221] "Starting CPU manager" policy="none" Jul 7 06:11:10.396741 kubelet[2730]: I0707 06:11:10.396718 2730 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jul 7 06:11:10.396741 kubelet[2730]: I0707 06:11:10.396740 2730 state_mem.go:36] "Initialized new in-memory state store" Jul 7 06:11:10.396952 kubelet[2730]: I0707 06:11:10.396932 2730 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 7 06:11:10.396982 kubelet[2730]: I0707 06:11:10.396951 2730 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 7 06:11:10.396982 kubelet[2730]: I0707 06:11:10.396973 2730 policy_none.go:49] "None policy: Start" Jul 7 06:11:10.397034 kubelet[2730]: I0707 06:11:10.396984 2730 memory_manager.go:186] "Starting memorymanager" policy="None" Jul 7 06:11:10.397034 kubelet[2730]: I0707 06:11:10.396997 2730 state_mem.go:35] "Initializing new in-memory state store" Jul 7 06:11:10.397179 kubelet[2730]: I0707 06:11:10.397159 2730 state_mem.go:75] "Updated machine memory state" Jul 7 06:11:10.402196 kubelet[2730]: I0707 06:11:10.402158 2730 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 7 06:11:10.402479 kubelet[2730]: I0707 06:11:10.402447 2730 eviction_manager.go:189] "Eviction manager: starting control loop" Jul 7 06:11:10.402523 kubelet[2730]: I0707 06:11:10.402476 2730 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 7 06:11:10.402949 kubelet[2730]: I0707 06:11:10.402769 2730 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 7 06:11:10.405011 kubelet[2730]: E0707 06:11:10.404980 2730 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jul 7 06:11:10.467298 kubelet[2730]: I0707 06:11:10.467239 2730 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:11:10.468212 kubelet[2730]: I0707 06:11:10.468156 2730 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:10.468513 kubelet[2730]: I0707 06:11:10.468489 2730 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:10.477822 kubelet[2730]: E0707 06:11:10.477544 2730 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:10.478019 kubelet[2730]: E0707 06:11:10.477946 2730 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 06:11:10.512116 kubelet[2730]: I0707 06:11:10.511964 2730 kubelet_node_status.go:75] "Attempting to register node" node="localhost" Jul 7 06:11:10.522453 kubelet[2730]: I0707 06:11:10.522399 2730 kubelet_node_status.go:124] "Node was previously registered" node="localhost" Jul 7 06:11:10.522629 kubelet[2730]: I0707 06:11:10.522506 2730 kubelet_node_status.go:78] "Successfully registered node" node="localhost" Jul 7 06:11:10.550875 kubelet[2730]: I0707 06:11:10.550803 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0a6ec25e7544dd24f1d3793e661cee37-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a6ec25e7544dd24f1d3793e661cee37\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:10.550875 kubelet[2730]: I0707 06:11:10.550870 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:10.551078 kubelet[2730]: I0707 06:11:10.550908 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:10.551078 kubelet[2730]: I0707 06:11:10.550932 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:10.551078 kubelet[2730]: I0707 06:11:10.551017 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0a6ec25e7544dd24f1d3793e661cee37-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"0a6ec25e7544dd24f1d3793e661cee37\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:10.551078 kubelet[2730]: I0707 06:11:10.551041 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/0a6ec25e7544dd24f1d3793e661cee37-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"0a6ec25e7544dd24f1d3793e661cee37\") " pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:10.551210 kubelet[2730]: I0707 06:11:10.551138 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:10.551263 kubelet[2730]: I0707 06:11:10.551216 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d1af03769b64da1b1e8089a7035018fc-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"d1af03769b64da1b1e8089a7035018fc\") " pod="kube-system/kube-controller-manager-localhost" Jul 7 06:11:10.551295 kubelet[2730]: I0707 06:11:10.551255 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8a75e163f27396b2168da0f88f85f8a5-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"8a75e163f27396b2168da0f88f85f8a5\") " pod="kube-system/kube-scheduler-localhost" Jul 7 06:11:10.780889 kubelet[2730]: E0707 06:11:10.780712 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:10.781351 kubelet[2730]: E0707 06:11:10.781175 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:10.781351 kubelet[2730]: E0707 06:11:10.781263 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:11.337407 kubelet[2730]: I0707 06:11:11.337349 2730 apiserver.go:52] "Watching apiserver" Jul 7 06:11:11.349089 kubelet[2730]: I0707 06:11:11.349028 2730 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jul 7 06:11:11.377165 kubelet[2730]: I0707 06:11:11.376467 2730 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:11.377165 kubelet[2730]: I0707 06:11:11.376526 2730 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jul 7 06:11:11.377165 kubelet[2730]: E0707 06:11:11.376648 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:11.596080 kubelet[2730]: E0707 06:11:11.595751 2730 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jul 7 06:11:11.596348 kubelet[2730]: E0707 06:11:11.596287 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:11.596988 kubelet[2730]: E0707 06:11:11.596922 2730 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jul 7 06:11:11.597235 kubelet[2730]: E0707 06:11:11.597210 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:11.636419 kubelet[2730]: I0707 06:11:11.636287 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=2.636254265 podStartE2EDuration="2.636254265s" podCreationTimestamp="2025-07-07 06:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:11:11.627040371 +0000 UTC m=+1.364236618" watchObservedRunningTime="2025-07-07 06:11:11.636254265 +0000 UTC m=+1.373450512" Jul 7 06:11:11.636833 kubelet[2730]: I0707 06:11:11.636755 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=2.63674457 podStartE2EDuration="2.63674457s" podCreationTimestamp="2025-07-07 06:11:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:11:11.6365045 +0000 UTC m=+1.373700757" watchObservedRunningTime="2025-07-07 06:11:11.63674457 +0000 UTC m=+1.373940817" Jul 7 06:11:11.646929 kubelet[2730]: I0707 06:11:11.646844 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.646822454 podStartE2EDuration="1.646822454s" podCreationTimestamp="2025-07-07 06:11:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:11:11.6457827 +0000 UTC m=+1.382978957" watchObservedRunningTime="2025-07-07 06:11:11.646822454 +0000 UTC m=+1.384018701" Jul 7 06:11:12.378040 kubelet[2730]: E0707 06:11:12.377893 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:12.378040 kubelet[2730]: E0707 06:11:12.377964 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:13.379659 kubelet[2730]: E0707 06:11:13.379592 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:14.551508 kubelet[2730]: I0707 06:11:14.551462 2730 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 7 06:11:14.551983 kubelet[2730]: I0707 06:11:14.551908 2730 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 7 06:11:14.552015 containerd[1591]: time="2025-07-07T06:11:14.551775690Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 7 06:11:14.949655 update_engine[1530]: I20250707 06:11:14.949466 1530 update_attempter.cc:509] Updating boot flags... Jul 7 06:11:15.460921 systemd[1]: Created slice kubepods-besteffort-poda284bd75_07cb_472b_8bb1_568bb85f21ff.slice - libcontainer container kubepods-besteffort-poda284bd75_07cb_472b_8bb1_568bb85f21ff.slice. 
Jul 7 06:11:15.514042 kubelet[2730]: I0707 06:11:15.513957 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a284bd75-07cb-472b-8bb1-568bb85f21ff-xtables-lock\") pod \"kube-proxy-t5fzr\" (UID: \"a284bd75-07cb-472b-8bb1-568bb85f21ff\") " pod="kube-system/kube-proxy-t5fzr"
Jul 7 06:11:15.514042 kubelet[2730]: I0707 06:11:15.514013 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xg7kn\" (UniqueName: \"kubernetes.io/projected/a284bd75-07cb-472b-8bb1-568bb85f21ff-kube-api-access-xg7kn\") pod \"kube-proxy-t5fzr\" (UID: \"a284bd75-07cb-472b-8bb1-568bb85f21ff\") " pod="kube-system/kube-proxy-t5fzr"
Jul 7 06:11:15.514042 kubelet[2730]: I0707 06:11:15.514043 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a284bd75-07cb-472b-8bb1-568bb85f21ff-kube-proxy\") pod \"kube-proxy-t5fzr\" (UID: \"a284bd75-07cb-472b-8bb1-568bb85f21ff\") " pod="kube-system/kube-proxy-t5fzr"
Jul 7 06:11:15.514281 kubelet[2730]: I0707 06:11:15.514072 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a284bd75-07cb-472b-8bb1-568bb85f21ff-lib-modules\") pod \"kube-proxy-t5fzr\" (UID: \"a284bd75-07cb-472b-8bb1-568bb85f21ff\") " pod="kube-system/kube-proxy-t5fzr"
Jul 7 06:11:15.679549 systemd[1]: Created slice kubepods-besteffort-pod524841e5_b872_43c0_bd4c_2a83590d790c.slice - libcontainer container kubepods-besteffort-pod524841e5_b872_43c0_bd4c_2a83590d790c.slice.
Jul 7 06:11:15.715787 kubelet[2730]: I0707 06:11:15.715611 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/524841e5-b872-43c0-bd4c-2a83590d790c-var-lib-calico\") pod \"tigera-operator-747864d56d-z7mzm\" (UID: \"524841e5-b872-43c0-bd4c-2a83590d790c\") " pod="tigera-operator/tigera-operator-747864d56d-z7mzm"
Jul 7 06:11:15.715787 kubelet[2730]: I0707 06:11:15.715659 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c57wn\" (UniqueName: \"kubernetes.io/projected/524841e5-b872-43c0-bd4c-2a83590d790c-kube-api-access-c57wn\") pod \"tigera-operator-747864d56d-z7mzm\" (UID: \"524841e5-b872-43c0-bd4c-2a83590d790c\") " pod="tigera-operator/tigera-operator-747864d56d-z7mzm"
Jul 7 06:11:15.771934 kubelet[2730]: E0707 06:11:15.771884 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:15.773244 containerd[1591]: time="2025-07-07T06:11:15.773196824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5fzr,Uid:a284bd75-07cb-472b-8bb1-568bb85f21ff,Namespace:kube-system,Attempt:0,}"
Jul 7 06:11:15.800689 containerd[1591]: time="2025-07-07T06:11:15.800606157Z" level=info msg="connecting to shim 8cdc355e74fa4a05e2e06bc56d628b22562b57b220386c6d3e76223ea78f913b" address="unix:///run/containerd/s/a0227a1894b73ace5c5022075a176101a57ebb40f1e75f9b59c2d5a5a8246fed" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:11:15.831359 systemd[1]: Started cri-containerd-8cdc355e74fa4a05e2e06bc56d628b22562b57b220386c6d3e76223ea78f913b.scope - libcontainer container 8cdc355e74fa4a05e2e06bc56d628b22562b57b220386c6d3e76223ea78f913b.
Jul 7 06:11:15.860233 containerd[1591]: time="2025-07-07T06:11:15.860187496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-t5fzr,Uid:a284bd75-07cb-472b-8bb1-568bb85f21ff,Namespace:kube-system,Attempt:0,} returns sandbox id \"8cdc355e74fa4a05e2e06bc56d628b22562b57b220386c6d3e76223ea78f913b\""
Jul 7 06:11:15.860993 kubelet[2730]: E0707 06:11:15.860967 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:15.863015 containerd[1591]: time="2025-07-07T06:11:15.862986128Z" level=info msg="CreateContainer within sandbox \"8cdc355e74fa4a05e2e06bc56d628b22562b57b220386c6d3e76223ea78f913b\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 7 06:11:15.876718 containerd[1591]: time="2025-07-07T06:11:15.876626460Z" level=info msg="Container 4321378e5d2478d6a63ac23ad409a6616f53ccb693ff782bef67bc2384211aea: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:11:15.886643 containerd[1591]: time="2025-07-07T06:11:15.886594319Z" level=info msg="CreateContainer within sandbox \"8cdc355e74fa4a05e2e06bc56d628b22562b57b220386c6d3e76223ea78f913b\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4321378e5d2478d6a63ac23ad409a6616f53ccb693ff782bef67bc2384211aea\""
Jul 7 06:11:15.887022 containerd[1591]: time="2025-07-07T06:11:15.886998425Z" level=info msg="StartContainer for \"4321378e5d2478d6a63ac23ad409a6616f53ccb693ff782bef67bc2384211aea\""
Jul 7 06:11:15.888393 containerd[1591]: time="2025-07-07T06:11:15.888361783Z" level=info msg="connecting to shim 4321378e5d2478d6a63ac23ad409a6616f53ccb693ff782bef67bc2384211aea" address="unix:///run/containerd/s/a0227a1894b73ace5c5022075a176101a57ebb40f1e75f9b59c2d5a5a8246fed" protocol=ttrpc version=3
Jul 7 06:11:15.915397 systemd[1]: Started cri-containerd-4321378e5d2478d6a63ac23ad409a6616f53ccb693ff782bef67bc2384211aea.scope - libcontainer container 4321378e5d2478d6a63ac23ad409a6616f53ccb693ff782bef67bc2384211aea.
Jul 7 06:11:15.985237 containerd[1591]: time="2025-07-07T06:11:15.985070951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-z7mzm,Uid:524841e5-b872-43c0-bd4c-2a83590d790c,Namespace:tigera-operator,Attempt:0,}"
Jul 7 06:11:16.006565 containerd[1591]: time="2025-07-07T06:11:16.006501490Z" level=info msg="connecting to shim 25ef0a926a76bae712c0af64ca04ad1f37b58c2a1ac4ec81945611c33bd59585" address="unix:///run/containerd/s/eddf8ae52f554d18317d3a6961cf45d5355ee73f181518800ffb02e861459df2" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:11:16.511401 systemd[1]: Started cri-containerd-25ef0a926a76bae712c0af64ca04ad1f37b58c2a1ac4ec81945611c33bd59585.scope - libcontainer container 25ef0a926a76bae712c0af64ca04ad1f37b58c2a1ac4ec81945611c33bd59585.
Jul 7 06:11:16.541227 containerd[1591]: time="2025-07-07T06:11:16.541179886Z" level=info msg="StartContainer for \"4321378e5d2478d6a63ac23ad409a6616f53ccb693ff782bef67bc2384211aea\" returns successfully"
Jul 7 06:11:16.575798 containerd[1591]: time="2025-07-07T06:11:16.575738445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-z7mzm,Uid:524841e5-b872-43c0-bd4c-2a83590d790c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"25ef0a926a76bae712c0af64ca04ad1f37b58c2a1ac4ec81945611c33bd59585\""
Jul 7 06:11:16.577693 containerd[1591]: time="2025-07-07T06:11:16.577663220Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Jul 7 06:11:17.390947 kubelet[2730]: E0707 06:11:17.390834 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:17.401721 kubelet[2730]: I0707 06:11:17.401632 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-t5fzr" podStartSLOduration=2.401600062 podStartE2EDuration="2.401600062s" podCreationTimestamp="2025-07-07 06:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:11:17.401385681 +0000 UTC m=+7.138581938" watchObservedRunningTime="2025-07-07 06:11:17.401600062 +0000 UTC m=+7.138796309"
Jul 7 06:11:18.141931 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount108131478.mount: Deactivated successfully.
Jul 7 06:11:18.205445 kubelet[2730]: E0707 06:11:18.205373 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:18.393698 kubelet[2730]: E0707 06:11:18.393534 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:18.393698 kubelet[2730]: E0707 06:11:18.393664 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:18.742538 containerd[1591]: time="2025-07-07T06:11:18.742388857Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:18.743275 containerd[1591]: time="2025-07-07T06:11:18.743248847Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Jul 7 06:11:18.744554 containerd[1591]: time="2025-07-07T06:11:18.744483889Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:18.746870 containerd[1591]: time="2025-07-07T06:11:18.746826289Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:18.747549 containerd[1591]: time="2025-07-07T06:11:18.747514448Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.169814194s"
Jul 7 06:11:18.747585 containerd[1591]: time="2025-07-07T06:11:18.747549074Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Jul 7 06:11:18.749808 containerd[1591]: time="2025-07-07T06:11:18.749770748Z" level=info msg="CreateContainer within sandbox \"25ef0a926a76bae712c0af64ca04ad1f37b58c2a1ac4ec81945611c33bd59585\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Jul 7 06:11:18.758776 containerd[1591]: time="2025-07-07T06:11:18.758722656Z" level=info msg="Container d4d269be86dab431380e4e9686e076090792c373d347ce5b59a7b4eeb6614d9a: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:11:18.766220 containerd[1591]: time="2025-07-07T06:11:18.766166946Z" level=info msg="CreateContainer within sandbox \"25ef0a926a76bae712c0af64ca04ad1f37b58c2a1ac4ec81945611c33bd59585\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d4d269be86dab431380e4e9686e076090792c373d347ce5b59a7b4eeb6614d9a\""
Jul 7 06:11:18.766685 containerd[1591]: time="2025-07-07T06:11:18.766646170Z" level=info msg="StartContainer for \"d4d269be86dab431380e4e9686e076090792c373d347ce5b59a7b4eeb6614d9a\""
Jul 7 06:11:18.767616 containerd[1591]: time="2025-07-07T06:11:18.767574210Z" level=info msg="connecting to shim d4d269be86dab431380e4e9686e076090792c373d347ce5b59a7b4eeb6614d9a" address="unix:///run/containerd/s/eddf8ae52f554d18317d3a6961cf45d5355ee73f181518800ffb02e861459df2" protocol=ttrpc version=3
Jul 7 06:11:18.828450 systemd[1]: Started cri-containerd-d4d269be86dab431380e4e9686e076090792c373d347ce5b59a7b4eeb6614d9a.scope - libcontainer container d4d269be86dab431380e4e9686e076090792c373d347ce5b59a7b4eeb6614d9a.
Jul 7 06:11:18.866266 containerd[1591]: time="2025-07-07T06:11:18.866203996Z" level=info msg="StartContainer for \"d4d269be86dab431380e4e9686e076090792c373d347ce5b59a7b4eeb6614d9a\" returns successfully"
Jul 7 06:11:19.149247 kubelet[2730]: E0707 06:11:19.149173 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:19.396417 kubelet[2730]: E0707 06:11:19.396286 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:19.408388 kubelet[2730]: I0707 06:11:19.408062 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-z7mzm" podStartSLOduration=2.236860567 podStartE2EDuration="4.408043595s" podCreationTimestamp="2025-07-07 06:11:15 +0000 UTC" firstStartedPulling="2025-07-07 06:11:16.57718704 +0000 UTC m=+6.314383277" lastFinishedPulling="2025-07-07 06:11:18.748370068 +0000 UTC m=+8.485566305" observedRunningTime="2025-07-07 06:11:19.408008467 +0000 UTC m=+9.145204714" watchObservedRunningTime="2025-07-07 06:11:19.408043595 +0000 UTC m=+9.145239832"
Jul 7 06:11:20.399239 kubelet[2730]: E0707 06:11:20.399171 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:20.754027 kubelet[2730]: E0707 06:11:20.753847 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:21.400650 kubelet[2730]: E0707 06:11:21.400594 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:22.401857 kubelet[2730]: E0707 06:11:22.401794 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:24.741670 sudo[1820]: pam_unix(sudo:session): session closed for user root
Jul 7 06:11:24.744904 sshd[1819]: Connection closed by 10.0.0.1 port 44228
Jul 7 06:11:24.746421 sshd-session[1817]: pam_unix(sshd:session): session closed for user core
Jul 7 06:11:24.754564 systemd[1]: sshd@8-10.0.0.94:22-10.0.0.1:44228.service: Deactivated successfully.
Jul 7 06:11:24.760342 systemd[1]: session-9.scope: Deactivated successfully.
Jul 7 06:11:24.760576 systemd[1]: session-9.scope: Consumed 4.523s CPU time, 223.7M memory peak.
Jul 7 06:11:24.764410 systemd-logind[1529]: Session 9 logged out. Waiting for processes to exit.
Jul 7 06:11:24.766454 systemd-logind[1529]: Removed session 9.
Jul 7 06:11:27.809332 systemd[1]: Created slice kubepods-besteffort-pod5974c63c_6bec_47fc_9038_1fe97dd837d5.slice - libcontainer container kubepods-besteffort-pod5974c63c_6bec_47fc_9038_1fe97dd837d5.slice.
Jul 7 06:11:27.904226 kubelet[2730]: I0707 06:11:27.904156 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5974c63c-6bec-47fc-9038-1fe97dd837d5-tigera-ca-bundle\") pod \"calico-typha-77f4d6d947-pcmqk\" (UID: \"5974c63c-6bec-47fc-9038-1fe97dd837d5\") " pod="calico-system/calico-typha-77f4d6d947-pcmqk"
Jul 7 06:11:27.904226 kubelet[2730]: I0707 06:11:27.904214 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/5974c63c-6bec-47fc-9038-1fe97dd837d5-typha-certs\") pod \"calico-typha-77f4d6d947-pcmqk\" (UID: \"5974c63c-6bec-47fc-9038-1fe97dd837d5\") " pod="calico-system/calico-typha-77f4d6d947-pcmqk"
Jul 7 06:11:27.904226 kubelet[2730]: I0707 06:11:27.904239 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8tn8\" (UniqueName: \"kubernetes.io/projected/5974c63c-6bec-47fc-9038-1fe97dd837d5-kube-api-access-j8tn8\") pod \"calico-typha-77f4d6d947-pcmqk\" (UID: \"5974c63c-6bec-47fc-9038-1fe97dd837d5\") " pod="calico-system/calico-typha-77f4d6d947-pcmqk"
Jul 7 06:11:28.114878 kubelet[2730]: E0707 06:11:28.114245 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:28.116081 containerd[1591]: time="2025-07-07T06:11:28.115073508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f4d6d947-pcmqk,Uid:5974c63c-6bec-47fc-9038-1fe97dd837d5,Namespace:calico-system,Attempt:0,}"
Jul 7 06:11:28.324869 systemd[1]: Created slice kubepods-besteffort-pod5386294a_8bc0_463f_a966_3b39021a71c1.slice - libcontainer container kubepods-besteffort-pod5386294a_8bc0_463f_a966_3b39021a71c1.slice.
Jul 7 06:11:28.339619 containerd[1591]: time="2025-07-07T06:11:28.339300056Z" level=info msg="connecting to shim a76fcb1e2bc5ffeb54e79380d75f146e46ae6bf22350ffc4fd7529e8a6fb10d1" address="unix:///run/containerd/s/785b7532f8ecb4d62eab5326530d1666c8612f89e2b50042f922180dbb52f0e0" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:11:28.384626 systemd[1]: Started cri-containerd-a76fcb1e2bc5ffeb54e79380d75f146e46ae6bf22350ffc4fd7529e8a6fb10d1.scope - libcontainer container a76fcb1e2bc5ffeb54e79380d75f146e46ae6bf22350ffc4fd7529e8a6fb10d1.
Jul 7 06:11:28.407559 kubelet[2730]: I0707 06:11:28.407499 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/5386294a-8bc0-463f-a966-3b39021a71c1-policysync\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407559 kubelet[2730]: I0707 06:11:28.407544 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/5386294a-8bc0-463f-a966-3b39021a71c1-cni-log-dir\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407559 kubelet[2730]: I0707 06:11:28.407560 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5386294a-8bc0-463f-a966-3b39021a71c1-lib-modules\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407559 kubelet[2730]: I0707 06:11:28.407574 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/5386294a-8bc0-463f-a966-3b39021a71c1-cni-bin-dir\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407818 kubelet[2730]: I0707 06:11:28.407596 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsmnf\" (UniqueName: \"kubernetes.io/projected/5386294a-8bc0-463f-a966-3b39021a71c1-kube-api-access-gsmnf\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407818 kubelet[2730]: I0707 06:11:28.407614 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/5386294a-8bc0-463f-a966-3b39021a71c1-cni-net-dir\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407818 kubelet[2730]: I0707 06:11:28.407628 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/5386294a-8bc0-463f-a966-3b39021a71c1-var-lib-calico\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407818 kubelet[2730]: I0707 06:11:28.407647 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/5386294a-8bc0-463f-a966-3b39021a71c1-node-certs\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407818 kubelet[2730]: I0707 06:11:28.407662 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5386294a-8bc0-463f-a966-3b39021a71c1-tigera-ca-bundle\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407975 kubelet[2730]: I0707 06:11:28.407766 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/5386294a-8bc0-463f-a966-3b39021a71c1-flexvol-driver-host\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407975 kubelet[2730]: I0707 06:11:28.407809 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/5386294a-8bc0-463f-a966-3b39021a71c1-var-run-calico\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.407975 kubelet[2730]: I0707 06:11:28.407833 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5386294a-8bc0-463f-a966-3b39021a71c1-xtables-lock\") pod \"calico-node-zzhzb\" (UID: \"5386294a-8bc0-463f-a966-3b39021a71c1\") " pod="calico-system/calico-node-zzhzb"
Jul 7 06:11:28.445840 kubelet[2730]: E0707 06:11:28.445498 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnpf6" podUID="018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7"
Jul 7 06:11:28.458133 containerd[1591]: time="2025-07-07T06:11:28.458061656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-77f4d6d947-pcmqk,Uid:5974c63c-6bec-47fc-9038-1fe97dd837d5,Namespace:calico-system,Attempt:0,} returns sandbox id \"a76fcb1e2bc5ffeb54e79380d75f146e46ae6bf22350ffc4fd7529e8a6fb10d1\""
Jul 7 06:11:28.459811 kubelet[2730]: E0707 06:11:28.459780 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:28.462330 containerd[1591]: time="2025-07-07T06:11:28.461710960Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Jul 7 06:11:28.508214 kubelet[2730]: I0707 06:11:28.508162 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7-registration-dir\") pod \"csi-node-driver-mnpf6\" (UID: \"018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7\") " pod="calico-system/csi-node-driver-mnpf6"
Jul 7 06:11:28.508214 kubelet[2730]: I0707 06:11:28.508226 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7-varrun\") pod \"csi-node-driver-mnpf6\" (UID: \"018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7\") " pod="calico-system/csi-node-driver-mnpf6"
Jul 7 06:11:28.508426 kubelet[2730]: I0707 06:11:28.508244 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dnrg\" (UniqueName: \"kubernetes.io/projected/018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7-kube-api-access-7dnrg\") pod \"csi-node-driver-mnpf6\" (UID: \"018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7\") " pod="calico-system/csi-node-driver-mnpf6"
Jul 7 06:11:28.508426 kubelet[2730]: I0707 06:11:28.508344 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7-socket-dir\") pod \"csi-node-driver-mnpf6\" (UID: \"018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7\") " pod="calico-system/csi-node-driver-mnpf6"
Jul 7 06:11:28.508478 kubelet[2730]: I0707 06:11:28.508429 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7-kubelet-dir\") pod \"csi-node-driver-mnpf6\" (UID: \"018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7\") " pod="calico-system/csi-node-driver-mnpf6"
Jul 7 06:11:28.512140 kubelet[2730]: E0707 06:11:28.511831 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.512140 kubelet[2730]: W0707 06:11:28.511863 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.512826 kubelet[2730]: E0707 06:11:28.512353 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.516139 kubelet[2730]: E0707 06:11:28.514711 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.516139 kubelet[2730]: W0707 06:11:28.514745 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.516139 kubelet[2730]: E0707 06:11:28.514797 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.516490 kubelet[2730]: E0707 06:11:28.516446 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.516490 kubelet[2730]: W0707 06:11:28.516468 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.516661 kubelet[2730]: E0707 06:11:28.516631 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.517286 kubelet[2730]: E0707 06:11:28.517254 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.517286 kubelet[2730]: W0707 06:11:28.517273 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.517449 kubelet[2730]: E0707 06:11:28.517419 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.517786 kubelet[2730]: E0707 06:11:28.517753 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.517786 kubelet[2730]: W0707 06:11:28.517772 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.518200 kubelet[2730]: E0707 06:11:28.518044 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.518440 kubelet[2730]: E0707 06:11:28.518407 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.519339 kubelet[2730]: W0707 06:11:28.518428 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.519339 kubelet[2730]: E0707 06:11:28.519294 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.521141 kubelet[2730]: E0707 06:11:28.521076 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.521141 kubelet[2730]: W0707 06:11:28.521123 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.521141 kubelet[2730]: E0707 06:11:28.521137 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.521696 kubelet[2730]: E0707 06:11:28.521664 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.521696 kubelet[2730]: W0707 06:11:28.521687 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.521815 kubelet[2730]: E0707 06:11:28.521715 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.527671 kubelet[2730]: E0707 06:11:28.527635 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.527886 kubelet[2730]: W0707 06:11:28.527796 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.527886 kubelet[2730]: E0707 06:11:28.527833 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.610042 kubelet[2730]: E0707 06:11:28.609979 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.610042 kubelet[2730]: W0707 06:11:28.610005 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.610042 kubelet[2730]: E0707 06:11:28.610027 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.610356 kubelet[2730]: E0707 06:11:28.610255 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.610356 kubelet[2730]: W0707 06:11:28.610263 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.610356 kubelet[2730]: E0707 06:11:28.610278 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.610704 kubelet[2730]: E0707 06:11:28.610656 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.610704 kubelet[2730]: W0707 06:11:28.610692 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.610757 kubelet[2730]: E0707 06:11:28.610730 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.610991 kubelet[2730]: E0707 06:11:28.610962 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.610991 kubelet[2730]: W0707 06:11:28.610978 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.611056 kubelet[2730]: E0707 06:11:28.611000 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.611272 kubelet[2730]: E0707 06:11:28.611240 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.611272 kubelet[2730]: W0707 06:11:28.611257 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.611335 kubelet[2730]: E0707 06:11:28.611276 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.611661 kubelet[2730]: E0707 06:11:28.611630 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.611661 kubelet[2730]: W0707 06:11:28.611646 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.611737 kubelet[2730]: E0707 06:11:28.611679 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.611889 kubelet[2730]: E0707 06:11:28.611859 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.611889 kubelet[2730]: W0707 06:11:28.611875 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.611962 kubelet[2730]: E0707 06:11:28.611915 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.612164 kubelet[2730]: E0707 06:11:28.612133 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.612164 kubelet[2730]: W0707 06:11:28.612149 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.612236 kubelet[2730]: E0707 06:11:28.612169 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.612528 kubelet[2730]: E0707 06:11:28.612486 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.612528 kubelet[2730]: W0707 06:11:28.612516 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.612585 kubelet[2730]: E0707 06:11:28.612548 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.612835 kubelet[2730]: E0707 06:11:28.612809 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.612835 kubelet[2730]: W0707 06:11:28.612821 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.612835 kubelet[2730]: E0707 06:11:28.612836 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.613108 kubelet[2730]: E0707 06:11:28.613087 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.613147 kubelet[2730]: W0707 06:11:28.613122 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.613147 kubelet[2730]: E0707 06:11:28.613139 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.613356 kubelet[2730]: E0707 06:11:28.613334 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.613356 kubelet[2730]: W0707 06:11:28.613347 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.613413 kubelet[2730]: E0707 06:11:28.613361 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.613563 kubelet[2730]: E0707 06:11:28.613541 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.613563 kubelet[2730]: W0707 06:11:28.613552 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.613626 kubelet[2730]: E0707 06:11:28.613584 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.613767 kubelet[2730]: E0707 06:11:28.613745 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.613767 kubelet[2730]: W0707 06:11:28.613756 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.613828 kubelet[2730]: E0707 06:11:28.613787 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.613986 kubelet[2730]: E0707 06:11:28.613966 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.613986 kubelet[2730]: W0707 06:11:28.613976 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.614037 kubelet[2730]: E0707 06:11:28.614008 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.614198 kubelet[2730]: E0707 06:11:28.614172 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.614198 kubelet[2730]: W0707 06:11:28.614183 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.614198 kubelet[2730]: E0707 06:11:28.614197 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.614395 kubelet[2730]: E0707 06:11:28.614368 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.614395 kubelet[2730]: W0707 06:11:28.614379 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.614395 kubelet[2730]: E0707 06:11:28.614392 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.614635 kubelet[2730]: E0707 06:11:28.614624 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.614635 kubelet[2730]: W0707 06:11:28.614633 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.614713 kubelet[2730]: E0707 06:11:28.614647 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.614851 kubelet[2730]: E0707 06:11:28.614828 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.614851 kubelet[2730]: W0707 06:11:28.614838 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.614851 kubelet[2730]: E0707 06:11:28.614851 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.615191 kubelet[2730]: E0707 06:11:28.615166 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.615191 kubelet[2730]: W0707 06:11:28.615183 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.615262 kubelet[2730]: E0707 06:11:28.615203 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.615468 kubelet[2730]: E0707 06:11:28.615447 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.615468 kubelet[2730]: W0707 06:11:28.615461 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.615567 kubelet[2730]: E0707 06:11:28.615506 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.615708 kubelet[2730]: E0707 06:11:28.615676 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.615708 kubelet[2730]: W0707 06:11:28.615693 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.615860 kubelet[2730]: E0707 06:11:28.615737 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.616062 kubelet[2730]: E0707 06:11:28.616029 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.616062 kubelet[2730]: W0707 06:11:28.616059 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.616154 kubelet[2730]: E0707 06:11:28.616082 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.616434 kubelet[2730]: E0707 06:11:28.616413 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.616434 kubelet[2730]: W0707 06:11:28.616432 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.616495 kubelet[2730]: E0707 06:11:28.616446 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.616744 kubelet[2730]: E0707 06:11:28.616726 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.616744 kubelet[2730]: W0707 06:11:28.616740 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.616804 kubelet[2730]: E0707 06:11:28.616752 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.625610 kubelet[2730]: E0707 06:11:28.625576 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jul 7 06:11:28.625610 kubelet[2730]: W0707 06:11:28.625596 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jul 7 06:11:28.625610 kubelet[2730]: E0707 06:11:28.625611 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jul 7 06:11:28.632510 containerd[1591]: time="2025-07-07T06:11:28.632440646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zzhzb,Uid:5386294a-8bc0-463f-a966-3b39021a71c1,Namespace:calico-system,Attempt:0,}"
Jul 7 06:11:28.662748 containerd[1591]: time="2025-07-07T06:11:28.661851506Z" level=info msg="connecting to shim ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1" address="unix:///run/containerd/s/c204ccbc28e9467b0fb9d98495d687a45edadca2ab6faccab40ee0b770533d15" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:11:28.694291 systemd[1]: Started cri-containerd-ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1.scope - libcontainer container ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1.
Jul 7 06:11:28.729864 containerd[1591]: time="2025-07-07T06:11:28.729808052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zzhzb,Uid:5386294a-8bc0-463f-a966-3b39021a71c1,Namespace:calico-system,Attempt:0,} returns sandbox id \"ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1\""
Jul 7 06:11:30.365697 kubelet[2730]: E0707 06:11:30.365608 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnpf6" podUID="018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7"
Jul 7 06:11:30.444331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2987924698.mount: Deactivated successfully.
Jul 7 06:11:31.431557 containerd[1591]: time="2025-07-07T06:11:31.431491510Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:31.432354 containerd[1591]: time="2025-07-07T06:11:31.432319697Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Jul 7 06:11:31.433768 containerd[1591]: time="2025-07-07T06:11:31.433713869Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:31.436012 containerd[1591]: time="2025-07-07T06:11:31.435967271Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:31.436669 containerd[1591]: time="2025-07-07T06:11:31.436628621Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.974880415s"
Jul 7 06:11:31.436669 containerd[1591]: time="2025-07-07T06:11:31.436658460Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Jul 7 06:11:31.437629 containerd[1591]: time="2025-07-07T06:11:31.437602182Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Jul 7 06:11:31.450331 containerd[1591]: time="2025-07-07T06:11:31.450088140Z" level=info msg="CreateContainer within sandbox \"a76fcb1e2bc5ffeb54e79380d75f146e46ae6bf22350ffc4fd7529e8a6fb10d1\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jul 7 06:11:31.468211 containerd[1591]: time="2025-07-07T06:11:31.468095316Z" level=info msg="Container 3e4fc81e583f56ac3ab556876358665fdc467e8214ff32e0d585eb39c3984f11: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:11:31.477258 containerd[1591]: time="2025-07-07T06:11:31.477183725Z" level=info msg="CreateContainer within sandbox \"a76fcb1e2bc5ffeb54e79380d75f146e46ae6bf22350ffc4fd7529e8a6fb10d1\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3e4fc81e583f56ac3ab556876358665fdc467e8214ff32e0d585eb39c3984f11\""
Jul 7 06:11:31.477835 containerd[1591]: time="2025-07-07T06:11:31.477786085Z" level=info msg="StartContainer for \"3e4fc81e583f56ac3ab556876358665fdc467e8214ff32e0d585eb39c3984f11\""
Jul 7 06:11:31.479056 containerd[1591]: time="2025-07-07T06:11:31.479008680Z" level=info msg="connecting to shim 3e4fc81e583f56ac3ab556876358665fdc467e8214ff32e0d585eb39c3984f11" address="unix:///run/containerd/s/785b7532f8ecb4d62eab5326530d1666c8612f89e2b50042f922180dbb52f0e0" protocol=ttrpc version=3
Jul 7 06:11:31.506405 systemd[1]: Started cri-containerd-3e4fc81e583f56ac3ab556876358665fdc467e8214ff32e0d585eb39c3984f11.scope - libcontainer container 3e4fc81e583f56ac3ab556876358665fdc467e8214ff32e0d585eb39c3984f11.
Jul 7 06:11:31.881420 containerd[1591]: time="2025-07-07T06:11:31.881374351Z" level=info msg="StartContainer for \"3e4fc81e583f56ac3ab556876358665fdc467e8214ff32e0d585eb39c3984f11\" returns successfully" Jul 7 06:11:32.366052 kubelet[2730]: E0707 06:11:32.365923 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnpf6" podUID="018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7" Jul 7 06:11:32.430801 kubelet[2730]: E0707 06:11:32.430761 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:32.442760 kubelet[2730]: I0707 06:11:32.442692 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-77f4d6d947-pcmqk" podStartSLOduration=2.466322907 podStartE2EDuration="5.442674913s" podCreationTimestamp="2025-07-07 06:11:27 +0000 UTC" firstStartedPulling="2025-07-07 06:11:28.461043798 +0000 UTC m=+18.198240035" lastFinishedPulling="2025-07-07 06:11:31.437395804 +0000 UTC m=+21.174592041" observedRunningTime="2025-07-07 06:11:32.442090225 +0000 UTC m=+22.179286482" watchObservedRunningTime="2025-07-07 06:11:32.442674913 +0000 UTC m=+22.179871150" Jul 7 06:11:32.479128 kubelet[2730]: E0707 06:11:32.478076 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:32.479128 kubelet[2730]: W0707 06:11:32.478132 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:32.479128 kubelet[2730]: E0707 06:11:32.478154 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:32.479128 kubelet[2730]: E0707 06:11:32.478381 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:32.479128 kubelet[2730]: W0707 06:11:32.478391 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:32.479128 kubelet[2730]: E0707 06:11:32.478403 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jul 7 06:11:32.479128 kubelet[2730]: E0707 06:11:32.478589 2730 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jul 7 06:11:32.479128 kubelet[2730]: W0707 06:11:32.478599 2730 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jul 7 06:11:32.479128 kubelet[2730]: E0707 06:11:32.478609 2730 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jul 7 06:11:33.043449 containerd[1591]: time="2025-07-07T06:11:33.043374217Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:33.044379 containerd[1591]: time="2025-07-07T06:11:33.044323200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Jul 7 06:11:33.045676 containerd[1591]: time="2025-07-07T06:11:33.045625192Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:33.047787 containerd[1591]: time="2025-07-07T06:11:33.047738684Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:33.048471 containerd[1591]: time="2025-07-07T06:11:33.048426844Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 1.61078333s" Jul 7 06:11:33.048471 containerd[1591]: time="2025-07-07T06:11:33.048464209Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Jul 7 06:11:33.052715 containerd[1591]: time="2025-07-07T06:11:33.052672964Z" level=info msg="CreateContainer within sandbox \"ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jul 7 06:11:33.064278 containerd[1591]: time="2025-07-07T06:11:33.064210003Z" level=info msg="Container b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:33.074634 containerd[1591]: time="2025-07-07T06:11:33.074567484Z" level=info msg="CreateContainer within sandbox \"ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c\"" Jul 7 06:11:33.075397 containerd[1591]: time="2025-07-07T06:11:33.075268671Z" level=info msg="StartContainer for \"b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c\"" Jul 7 06:11:33.077364 containerd[1591]: time="2025-07-07T06:11:33.077326541Z" level=info msg="connecting to shim b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c" address="unix:///run/containerd/s/c204ccbc28e9467b0fb9d98495d687a45edadca2ab6faccab40ee0b770533d15" protocol=ttrpc version=3 Jul 7 06:11:33.106424 systemd[1]: Started cri-containerd-b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c.scope - libcontainer container b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c. 
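The pod_startup_latency_tracker record at 06:11:32.442 above is internally consistent: podStartE2EDuration = watchObservedRunningTime - podCreationTimestamp = 06:11:32.442674913 - 06:11:27 = 5.442674913s, and podStartSLOduration subtracts the image-pull window, 5.442674913 - (31.437395804 - 28.461043798) = 5.442674913 - 2.976352006 = 2.466322907s, matching the logged values.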
Jul 7 06:11:33.162589 containerd[1591]: time="2025-07-07T06:11:33.162551070Z" level=info msg="StartContainer for \"b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c\" returns successfully" Jul 7 06:11:33.174453 systemd[1]: cri-containerd-b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c.scope: Deactivated successfully. Jul 7 06:11:33.176328 containerd[1591]: time="2025-07-07T06:11:33.176233715Z" level=info msg="received exit event container_id:\"b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c\" id:\"b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c\" pid:3396 exited_at:{seconds:1751868693 nanos:175686126}" Jul 7 06:11:33.176328 containerd[1591]: time="2025-07-07T06:11:33.176317012Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c\" id:\"b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c\" pid:3396 exited_at:{seconds:1751868693 nanos:175686126}" Jul 7 06:11:33.208803 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b1c51312c223db097502cfca3b8a91086fa267e8965c696267f4fe4d976ce01c-rootfs.mount: Deactivated successfully. Jul 7 06:11:33.435124 kubelet[2730]: I0707 06:11:33.434898 2730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:11:33.436477 kubelet[2730]: E0707 06:11:33.436428 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:34.365913 kubelet[2730]: E0707 06:11:34.365832 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnpf6" podUID="018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7" Jul 7 06:11:34.439917 containerd[1591]: time="2025-07-07T06:11:34.439695054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Jul 7 06:11:36.367570 kubelet[2730]: E0707 06:11:36.366944 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnpf6" podUID="018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7" Jul 7 06:11:37.617202 containerd[1591]: time="2025-07-07T06:11:37.617132135Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:37.617935 containerd[1591]: time="2025-07-07T06:11:37.617870765Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Jul 7 06:11:37.619145 containerd[1591]: time="2025-07-07T06:11:37.619091288Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:37.621816 containerd[1591]: time="2025-07-07T06:11:37.621759582Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:37.622420 containerd[1591]: time="2025-07-07T06:11:37.622369498Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.182627569s" Jul 7 06:11:37.622420 containerd[1591]: time="2025-07-07T06:11:37.622414246Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Jul 7 06:11:37.625003 containerd[1591]: time="2025-07-07T06:11:37.624965899Z" level=info msg="CreateContainer within sandbox \"ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 7 06:11:37.730294 containerd[1591]: time="2025-07-07T06:11:37.730239677Z" level=info msg="Container 60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:37.883851 containerd[1591]: time="2025-07-07T06:11:37.883685034Z" level=info msg="CreateContainer within sandbox \"ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24\"" Jul 7 06:11:37.884697 containerd[1591]: time="2025-07-07T06:11:37.884647787Z" level=info msg="StartContainer for \"60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24\"" Jul 7 06:11:37.886416 containerd[1591]: time="2025-07-07T06:11:37.886383177Z" level=info msg="connecting to shim 60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24" address="unix:///run/containerd/s/c204ccbc28e9467b0fb9d98495d687a45edadca2ab6faccab40ee0b770533d15" protocol=ttrpc version=3 Jul 7 06:11:37.919296 systemd[1]: Started cri-containerd-60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24.scope - libcontainer container 60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24. Jul 7 06:11:37.972125 containerd[1591]: time="2025-07-07T06:11:37.972054118Z" level=info msg="StartContainer for \"60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24\" returns successfully" Jul 7 06:11:38.365868 kubelet[2730]: E0707 06:11:38.365791 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-mnpf6" podUID="018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7" Jul 7 06:11:39.332111 systemd[1]: cri-containerd-60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24.scope: Deactivated successfully. Jul 7 06:11:39.332610 systemd[1]: cri-containerd-60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24.scope: Consumed 646ms CPU time, 179.7M memory peak, 4.4M read from disk, 171.2M written to disk. 
Jul 7 06:11:39.333252 containerd[1591]: time="2025-07-07T06:11:39.333193860Z" level=info msg="received exit event container_id:\"60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24\" id:\"60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24\" pid:3457 exited_at:{seconds:1751868699 nanos:332898270}" Jul 7 06:11:39.333252 containerd[1591]: time="2025-07-07T06:11:39.333240872Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24\" id:\"60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24\" pid:3457 exited_at:{seconds:1751868699 nanos:332898270}" Jul 7 06:11:39.362033 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60e6512eb3ced410b19c2fe7ef587090931afc1a5f54bfe8ac32f00b28cc1a24-rootfs.mount: Deactivated successfully. Jul 7 06:11:39.384519 kubelet[2730]: I0707 06:11:39.384481 2730 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jul 7 06:11:39.623403 systemd[1]: Created slice kubepods-besteffort-poda5aaecf1_178e_4a17_aa11_1d10ccb44ba4.slice - libcontainer container kubepods-besteffort-poda5aaecf1_178e_4a17_aa11_1d10ccb44ba4.slice. Jul 7 06:11:39.631084 systemd[1]: Created slice kubepods-besteffort-podb9066ac6_a84d_4bfc_ba1c_386e6a24601f.slice - libcontainer container kubepods-besteffort-podb9066ac6_a84d_4bfc_ba1c_386e6a24601f.slice. Jul 7 06:11:39.636841 systemd[1]: Created slice kubepods-burstable-pod9c16e4b5_2630_4cbc_b777_1a667284a980.slice - libcontainer container kubepods-burstable-pod9c16e4b5_2630_4cbc_b777_1a667284a980.slice. Jul 7 06:11:39.691125 kubelet[2730]: I0707 06:11:39.690994 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/07c13200-9675-440b-9395-714f9a92d182-goldmane-key-pair\") pod \"goldmane-768f4c5c69-tcv2d\" (UID: \"07c13200-9675-440b-9395-714f9a92d182\") " pod="calico-system/goldmane-768f4c5c69-tcv2d" Jul 7 06:11:39.691125 kubelet[2730]: I0707 06:11:39.691077 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szw5m\" (UniqueName: \"kubernetes.io/projected/9c16e4b5-2630-4cbc-b777-1a667284a980-kube-api-access-szw5m\") pod \"coredns-668d6bf9bc-hklhb\" (UID: \"9c16e4b5-2630-4cbc-b777-1a667284a980\") " pod="kube-system/coredns-668d6bf9bc-hklhb" Jul 7 06:11:39.691465 kubelet[2730]: I0707 06:11:39.691161 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/a5aaecf1-178e-4a17-aa11-1d10ccb44ba4-calico-apiserver-certs\") pod \"calico-apiserver-c85fc6b4c-lmt2g\" (UID: \"a5aaecf1-178e-4a17-aa11-1d10ccb44ba4\") " pod="calico-apiserver/calico-apiserver-c85fc6b4c-lmt2g" Jul 7 06:11:39.691465 kubelet[2730]: I0707 06:11:39.691196 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/88ceba10-8017-4d8e-b438-c29c09deb831-tigera-ca-bundle\") pod \"calico-kube-controllers-5676987769-nmdmf\" (UID: \"88ceba10-8017-4d8e-b438-c29c09deb831\") " pod="calico-system/calico-kube-controllers-5676987769-nmdmf" Jul 7 06:11:39.691465 kubelet[2730]: I0707 06:11:39.691230 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-whisker-backend-key-pair\") pod \"whisker-5c9f8cc964-hhwk5\" (UID: \"b9066ac6-a84d-4bfc-ba1c-386e6a24601f\") " pod="calico-system/whisker-5c9f8cc964-hhwk5" Jul 7 06:11:39.691465 kubelet[2730]: I0707 06:11:39.691257 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-whisker-ca-bundle\") pod \"whisker-5c9f8cc964-hhwk5\" (UID: \"b9066ac6-a84d-4bfc-ba1c-386e6a24601f\") " pod="calico-system/whisker-5c9f8cc964-hhwk5" Jul 7 06:11:39.691465 kubelet[2730]: I0707 06:11:39.691288 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kds8r\" (UniqueName: \"kubernetes.io/projected/83e8eed0-31ef-4362-a10b-04ce1bca5c07-kube-api-access-kds8r\") pod \"coredns-668d6bf9bc-z9whx\" (UID: \"83e8eed0-31ef-4362-a10b-04ce1bca5c07\") " pod="kube-system/coredns-668d6bf9bc-z9whx" Jul 7 06:11:39.691626 kubelet[2730]: I0707 06:11:39.691378 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mg785\" (UniqueName: \"kubernetes.io/projected/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-kube-api-access-mg785\") pod \"whisker-5c9f8cc964-hhwk5\" (UID: \"b9066ac6-a84d-4bfc-ba1c-386e6a24601f\") " pod="calico-system/whisker-5c9f8cc964-hhwk5" Jul 7 06:11:39.691626 kubelet[2730]: I0707 06:11:39.691407 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9c16e4b5-2630-4cbc-b777-1a667284a980-config-volume\") pod \"coredns-668d6bf9bc-hklhb\" (UID: \"9c16e4b5-2630-4cbc-b777-1a667284a980\") " pod="kube-system/coredns-668d6bf9bc-hklhb" Jul 7 06:11:39.691626 kubelet[2730]: I0707 06:11:39.691432 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph99q\" (UniqueName: \"kubernetes.io/projected/a5aaecf1-178e-4a17-aa11-1d10ccb44ba4-kube-api-access-ph99q\") pod \"calico-apiserver-c85fc6b4c-lmt2g\" (UID: \"a5aaecf1-178e-4a17-aa11-1d10ccb44ba4\") " pod="calico-apiserver/calico-apiserver-c85fc6b4c-lmt2g" Jul 7 06:11:39.691626 kubelet[2730]: I0707 06:11:39.691471 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/83e8eed0-31ef-4362-a10b-04ce1bca5c07-config-volume\") pod \"coredns-668d6bf9bc-z9whx\" (UID: \"83e8eed0-31ef-4362-a10b-04ce1bca5c07\") " pod="kube-system/coredns-668d6bf9bc-z9whx" Jul 7 06:11:39.691626 kubelet[2730]: I0707 06:11:39.691492 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bdrdc\" (UniqueName: \"kubernetes.io/projected/88ceba10-8017-4d8e-b438-c29c09deb831-kube-api-access-bdrdc\") pod \"calico-kube-controllers-5676987769-nmdmf\" (UID: \"88ceba10-8017-4d8e-b438-c29c09deb831\") " pod="calico-system/calico-kube-controllers-5676987769-nmdmf" Jul 7 06:11:39.691778 kubelet[2730]: I0707 06:11:39.691523 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8kl44\" (UniqueName: \"kubernetes.io/projected/f8df9f4b-117d-4ed3-9a45-083bbe24b183-kube-api-access-8kl44\") pod \"calico-apiserver-c85fc6b4c-cwc8p\" (UID: \"f8df9f4b-117d-4ed3-9a45-083bbe24b183\") " pod="calico-apiserver/calico-apiserver-c85fc6b4c-cwc8p" 
Jul 7 06:11:39.691778 kubelet[2730]: I0707 06:11:39.691592 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f8df9f4b-117d-4ed3-9a45-083bbe24b183-calico-apiserver-certs\") pod \"calico-apiserver-c85fc6b4c-cwc8p\" (UID: \"f8df9f4b-117d-4ed3-9a45-083bbe24b183\") " pod="calico-apiserver/calico-apiserver-c85fc6b4c-cwc8p" Jul 7 06:11:39.691778 kubelet[2730]: I0707 06:11:39.691637 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07c13200-9675-440b-9395-714f9a92d182-config\") pod \"goldmane-768f4c5c69-tcv2d\" (UID: \"07c13200-9675-440b-9395-714f9a92d182\") " pod="calico-system/goldmane-768f4c5c69-tcv2d" Jul 7 06:11:39.691778 kubelet[2730]: I0707 06:11:39.691663 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/07c13200-9675-440b-9395-714f9a92d182-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-tcv2d\" (UID: \"07c13200-9675-440b-9395-714f9a92d182\") " pod="calico-system/goldmane-768f4c5c69-tcv2d" Jul 7 06:11:39.691778 kubelet[2730]: I0707 06:11:39.691677 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ktbs6\" (UniqueName: \"kubernetes.io/projected/07c13200-9675-440b-9395-714f9a92d182-kube-api-access-ktbs6\") pod \"goldmane-768f4c5c69-tcv2d\" (UID: \"07c13200-9675-440b-9395-714f9a92d182\") " pod="calico-system/goldmane-768f4c5c69-tcv2d" Jul 7 06:11:39.755368 systemd[1]: Created slice kubepods-burstable-pod83e8eed0_31ef_4362_a10b_04ce1bca5c07.slice - libcontainer container kubepods-burstable-pod83e8eed0_31ef_4362_a10b_04ce1bca5c07.slice. Jul 7 06:11:39.765752 systemd[1]: Created slice kubepods-besteffort-pod88ceba10_8017_4d8e_b438_c29c09deb831.slice - libcontainer container kubepods-besteffort-pod88ceba10_8017_4d8e_b438_c29c09deb831.slice. Jul 7 06:11:39.773545 systemd[1]: Created slice kubepods-besteffort-podf8df9f4b_117d_4ed3_9a45_083bbe24b183.slice - libcontainer container kubepods-besteffort-podf8df9f4b_117d_4ed3_9a45_083bbe24b183.slice. Jul 7 06:11:39.780938 systemd[1]: Created slice kubepods-besteffort-pod07c13200_9675_440b_9395_714f9a92d182.slice - libcontainer container kubepods-besteffort-pod07c13200_9675_440b_9395_714f9a92d182.slice. 
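The slice names systemd creates above follow the kubelet's cgroup layout: "kubepods-" plus the pod QoS class ("besteffort" or "burstable"), plus "pod" and the pod UID with dashes escaped to underscores. A small sketch reproducing the pattern (helper name hypothetical):

    // podSliceName rebuilds the slice names seen above, e.g.
    // podSliceName("burstable", "9c16e4b5-2630-4cbc-b777-1a667284a980")
    // == "kubepods-burstable-pod9c16e4b5_2630_4cbc_b777_1a667284a980.slice"
    package main

    import (
        "fmt"
        "strings"
    )

    func podSliceName(qos, uid string) string {
        return "kubepods-" + qos + "-pod" + strings.ReplaceAll(uid, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(podSliceName("besteffort", "07c13200-9675-440b-9395-714f9a92d182"))
    }

The RunPodSandbox failures that follow share one root cause, spelled out in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file calico-node writes only once it is running, so every sandbox add/delete fails until that container comes up.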
Jul 7 06:11:39.929133 containerd[1591]: time="2025-07-07T06:11:39.928911322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c85fc6b4c-lmt2g,Uid:a5aaecf1-178e-4a17-aa11-1d10ccb44ba4,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:11:39.934928 containerd[1591]: time="2025-07-07T06:11:39.934871831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9f8cc964-hhwk5,Uid:b9066ac6-a84d-4bfc-ba1c-386e6a24601f,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:40.049127 kubelet[2730]: E0707 06:11:40.048710 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:40.049882 containerd[1591]: time="2025-07-07T06:11:40.049837957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hklhb,Uid:9c16e4b5-2630-4cbc-b777-1a667284a980,Namespace:kube-system,Attempt:0,}" Jul 7 06:11:40.062588 kubelet[2730]: E0707 06:11:40.062266 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:40.064538 containerd[1591]: time="2025-07-07T06:11:40.064494065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9whx,Uid:83e8eed0-31ef-4362-a10b-04ce1bca5c07,Namespace:kube-system,Attempt:0,}" Jul 7 06:11:40.070901 containerd[1591]: time="2025-07-07T06:11:40.070830978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5676987769-nmdmf,Uid:88ceba10-8017-4d8e-b438-c29c09deb831,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:40.078663 containerd[1591]: time="2025-07-07T06:11:40.078584886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c85fc6b4c-cwc8p,Uid:f8df9f4b-117d-4ed3-9a45-083bbe24b183,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:11:40.087924 containerd[1591]: time="2025-07-07T06:11:40.087866968Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tcv2d,Uid:07c13200-9675-440b-9395-714f9a92d182,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:40.129405 containerd[1591]: time="2025-07-07T06:11:40.129333874Z" level=error msg="Failed to destroy network for sandbox \"f213fbe1ca95e3a5fdfc702b056baddb5e0368985ee8b3ca4885e1a6aad9e8a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.138181 containerd[1591]: time="2025-07-07T06:11:40.138126638Z" level=error msg="Failed to destroy network for sandbox \"bc57ae5ba83be82585f9fd1a42784c27a6720eb518903e84995d2f3277630a2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.151218 containerd[1591]: time="2025-07-07T06:11:40.151059200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c85fc6b4c-lmt2g,Uid:a5aaecf1-178e-4a17-aa11-1d10ccb44ba4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f213fbe1ca95e3a5fdfc702b056baddb5e0368985ee8b3ca4885e1a6aad9e8a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Jul 7 06:11:40.169975 containerd[1591]: time="2025-07-07T06:11:40.169377502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5c9f8cc964-hhwk5,Uid:b9066ac6-a84d-4bfc-ba1c-386e6a24601f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc57ae5ba83be82585f9fd1a42784c27a6720eb518903e84995d2f3277630a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.182734 containerd[1591]: time="2025-07-07T06:11:40.182590574Z" level=error msg="Failed to destroy network for sandbox \"73fc979a947e68495f76b4c648efc2db1b16a587b4ac516f7ee3ea920cd76371\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.184892 containerd[1591]: time="2025-07-07T06:11:40.184861472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9whx,Uid:83e8eed0-31ef-4362-a10b-04ce1bca5c07,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fc979a947e68495f76b4c648efc2db1b16a587b4ac516f7ee3ea920cd76371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.190185 containerd[1591]: time="2025-07-07T06:11:40.188793885Z" level=error msg="Failed to destroy network for sandbox \"a040ae460a1af4d77f2076fa4546c74cf551741cdbaf65203785930d9437f0e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.190283 kubelet[2730]: E0707 06:11:40.188791 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fc979a947e68495f76b4c648efc2db1b16a587b4ac516f7ee3ea920cd76371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.190283 kubelet[2730]: E0707 06:11:40.188889 2730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fc979a947e68495f76b4c648efc2db1b16a587b4ac516f7ee3ea920cd76371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z9whx" Jul 7 06:11:40.190283 kubelet[2730]: E0707 06:11:40.188918 2730 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73fc979a947e68495f76b4c648efc2db1b16a587b4ac516f7ee3ea920cd76371\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z9whx" Jul 7 06:11:40.190588 kubelet[2730]: E0707 06:11:40.188978 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-z9whx_kube-system(83e8eed0-31ef-4362-a10b-04ce1bca5c07)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z9whx_kube-system(83e8eed0-31ef-4362-a10b-04ce1bca5c07)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73fc979a947e68495f76b4c648efc2db1b16a587b4ac516f7ee3ea920cd76371\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z9whx" podUID="83e8eed0-31ef-4362-a10b-04ce1bca5c07" Jul 7 06:11:40.190588 kubelet[2730]: E0707 06:11:40.189962 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f213fbe1ca95e3a5fdfc702b056baddb5e0368985ee8b3ca4885e1a6aad9e8a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.190588 kubelet[2730]: E0707 06:11:40.189998 2730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f213fbe1ca95e3a5fdfc702b056baddb5e0368985ee8b3ca4885e1a6aad9e8a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c85fc6b4c-lmt2g" Jul 7 06:11:40.190720 containerd[1591]: time="2025-07-07T06:11:40.190262853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c85fc6b4c-cwc8p,Uid:f8df9f4b-117d-4ed3-9a45-083bbe24b183,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a040ae460a1af4d77f2076fa4546c74cf551741cdbaf65203785930d9437f0e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.190763 kubelet[2730]: E0707 06:11:40.190017 2730 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f213fbe1ca95e3a5fdfc702b056baddb5e0368985ee8b3ca4885e1a6aad9e8a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c85fc6b4c-lmt2g" Jul 7 06:11:40.190763 kubelet[2730]: E0707 06:11:40.190044 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c85fc6b4c-lmt2g_calico-apiserver(a5aaecf1-178e-4a17-aa11-1d10ccb44ba4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c85fc6b4c-lmt2g_calico-apiserver(a5aaecf1-178e-4a17-aa11-1d10ccb44ba4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f213fbe1ca95e3a5fdfc702b056baddb5e0368985ee8b3ca4885e1a6aad9e8a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c85fc6b4c-lmt2g" podUID="a5aaecf1-178e-4a17-aa11-1d10ccb44ba4" Jul 7 06:11:40.190763 kubelet[2730]: E0707 
06:11:40.190073 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc57ae5ba83be82585f9fd1a42784c27a6720eb518903e84995d2f3277630a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.190856 kubelet[2730]: E0707 06:11:40.190118 2730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc57ae5ba83be82585f9fd1a42784c27a6720eb518903e84995d2f3277630a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c9f8cc964-hhwk5" Jul 7 06:11:40.190856 kubelet[2730]: E0707 06:11:40.190131 2730 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc57ae5ba83be82585f9fd1a42784c27a6720eb518903e84995d2f3277630a2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-5c9f8cc964-hhwk5" Jul 7 06:11:40.190856 kubelet[2730]: E0707 06:11:40.190152 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-5c9f8cc964-hhwk5_calico-system(b9066ac6-a84d-4bfc-ba1c-386e6a24601f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-5c9f8cc964-hhwk5_calico-system(b9066ac6-a84d-4bfc-ba1c-386e6a24601f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc57ae5ba83be82585f9fd1a42784c27a6720eb518903e84995d2f3277630a2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-5c9f8cc964-hhwk5" podUID="b9066ac6-a84d-4bfc-ba1c-386e6a24601f" Jul 7 06:11:40.190940 kubelet[2730]: E0707 06:11:40.190475 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a040ae460a1af4d77f2076fa4546c74cf551741cdbaf65203785930d9437f0e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.190940 kubelet[2730]: E0707 06:11:40.190497 2730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a040ae460a1af4d77f2076fa4546c74cf551741cdbaf65203785930d9437f0e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c85fc6b4c-cwc8p" Jul 7 06:11:40.190940 kubelet[2730]: E0707 06:11:40.190510 2730 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a040ae460a1af4d77f2076fa4546c74cf551741cdbaf65203785930d9437f0e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-c85fc6b4c-cwc8p" Jul 7 06:11:40.191016 kubelet[2730]: E0707 06:11:40.190533 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-c85fc6b4c-cwc8p_calico-apiserver(f8df9f4b-117d-4ed3-9a45-083bbe24b183)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-c85fc6b4c-cwc8p_calico-apiserver(f8df9f4b-117d-4ed3-9a45-083bbe24b183)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a040ae460a1af4d77f2076fa4546c74cf551741cdbaf65203785930d9437f0e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-c85fc6b4c-cwc8p" podUID="f8df9f4b-117d-4ed3-9a45-083bbe24b183" Jul 7 06:11:40.208232 containerd[1591]: time="2025-07-07T06:11:40.208178176Z" level=error msg="Failed to destroy network for sandbox \"97b7778d4f6ea05abe574ec5c3399ddd4f39faf39b37e0ca0d3355b979d01fb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.208849 containerd[1591]: time="2025-07-07T06:11:40.208788571Z" level=error msg="Failed to destroy network for sandbox \"6192b34529c8be5585215a511cc460f96ccb607a09e016bde85a49afa9c53660\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.213592 containerd[1591]: time="2025-07-07T06:11:40.213226054Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5676987769-nmdmf,Uid:88ceba10-8017-4d8e-b438-c29c09deb831,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b7778d4f6ea05abe574ec5c3399ddd4f39faf39b37e0ca0d3355b979d01fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.213763 kubelet[2730]: E0707 06:11:40.213675 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b7778d4f6ea05abe574ec5c3399ddd4f39faf39b37e0ca0d3355b979d01fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.213763 kubelet[2730]: E0707 06:11:40.213735 2730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b7778d4f6ea05abe574ec5c3399ddd4f39faf39b37e0ca0d3355b979d01fb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5676987769-nmdmf" Jul 7 06:11:40.213848 kubelet[2730]: E0707 06:11:40.213756 2730 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"97b7778d4f6ea05abe574ec5c3399ddd4f39faf39b37e0ca0d3355b979d01fb1\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-5676987769-nmdmf" Jul 7 06:11:40.213848 kubelet[2730]: E0707 06:11:40.213811 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-5676987769-nmdmf_calico-system(88ceba10-8017-4d8e-b438-c29c09deb831)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-5676987769-nmdmf_calico-system(88ceba10-8017-4d8e-b438-c29c09deb831)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"97b7778d4f6ea05abe574ec5c3399ddd4f39faf39b37e0ca0d3355b979d01fb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-5676987769-nmdmf" podUID="88ceba10-8017-4d8e-b438-c29c09deb831" Jul 7 06:11:40.214546 containerd[1591]: time="2025-07-07T06:11:40.214511090Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tcv2d,Uid:07c13200-9675-440b-9395-714f9a92d182,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6192b34529c8be5585215a511cc460f96ccb607a09e016bde85a49afa9c53660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.214801 kubelet[2730]: E0707 06:11:40.214736 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6192b34529c8be5585215a511cc460f96ccb607a09e016bde85a49afa9c53660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.214801 kubelet[2730]: E0707 06:11:40.214802 2730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6192b34529c8be5585215a511cc460f96ccb607a09e016bde85a49afa9c53660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-tcv2d" Jul 7 06:11:40.215243 kubelet[2730]: E0707 06:11:40.214823 2730 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6192b34529c8be5585215a511cc460f96ccb607a09e016bde85a49afa9c53660\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-tcv2d" Jul 7 06:11:40.215243 kubelet[2730]: E0707 06:11:40.214878 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-tcv2d_calico-system(07c13200-9675-440b-9395-714f9a92d182)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-tcv2d_calico-system(07c13200-9675-440b-9395-714f9a92d182)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"6192b34529c8be5585215a511cc460f96ccb607a09e016bde85a49afa9c53660\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-tcv2d" podUID="07c13200-9675-440b-9395-714f9a92d182" Jul 7 06:11:40.216946 containerd[1591]: time="2025-07-07T06:11:40.216898788Z" level=error msg="Failed to destroy network for sandbox \"e7bb750057887d894e9630be0868b1f8faad698b9f22163e1090f19e02b9a19b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.218365 containerd[1591]: time="2025-07-07T06:11:40.218322967Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hklhb,Uid:9c16e4b5-2630-4cbc-b777-1a667284a980,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bb750057887d894e9630be0868b1f8faad698b9f22163e1090f19e02b9a19b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.218547 kubelet[2730]: E0707 06:11:40.218513 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bb750057887d894e9630be0868b1f8faad698b9f22163e1090f19e02b9a19b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.218592 kubelet[2730]: E0707 06:11:40.218555 2730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bb750057887d894e9630be0868b1f8faad698b9f22163e1090f19e02b9a19b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hklhb" Jul 7 06:11:40.218592 kubelet[2730]: E0707 06:11:40.218570 2730 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e7bb750057887d894e9630be0868b1f8faad698b9f22163e1090f19e02b9a19b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-hklhb" Jul 7 06:11:40.218646 kubelet[2730]: E0707 06:11:40.218611 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-hklhb_kube-system(9c16e4b5-2630-4cbc-b777-1a667284a980)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-hklhb_kube-system(9c16e4b5-2630-4cbc-b777-1a667284a980)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e7bb750057887d894e9630be0868b1f8faad698b9f22163e1090f19e02b9a19b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-hklhb" podUID="9c16e4b5-2630-4cbc-b777-1a667284a980" Jul 7 06:11:40.371844 
systemd[1]: Created slice kubepods-besteffort-pod018e5b2e_15b1_47a4_aa58_1ffe99e5a2b7.slice - libcontainer container kubepods-besteffort-pod018e5b2e_15b1_47a4_aa58_1ffe99e5a2b7.slice. Jul 7 06:11:40.374667 containerd[1591]: time="2025-07-07T06:11:40.374622810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnpf6,Uid:018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:40.435551 containerd[1591]: time="2025-07-07T06:11:40.435413938Z" level=error msg="Failed to destroy network for sandbox \"c3f4c96d2b06ec5cd81d127b739d221de6bd3ead55b50ade978633a7d6b7b329\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.438600 systemd[1]: run-netns-cni\x2deb4a46f3\x2dda26\x2debf1\x2df97e\x2dfe55929413f4.mount: Deactivated successfully. Jul 7 06:11:40.453512 containerd[1591]: time="2025-07-07T06:11:40.437150470Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnpf6,Uid:018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f4c96d2b06ec5cd81d127b739d221de6bd3ead55b50ade978633a7d6b7b329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.453645 kubelet[2730]: E0707 06:11:40.453607 2730 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f4c96d2b06ec5cd81d127b739d221de6bd3ead55b50ade978633a7d6b7b329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jul 7 06:11:40.454045 kubelet[2730]: E0707 06:11:40.453665 2730 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f4c96d2b06ec5cd81d127b739d221de6bd3ead55b50ade978633a7d6b7b329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mnpf6" Jul 7 06:11:40.454045 kubelet[2730]: E0707 06:11:40.453693 2730 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3f4c96d2b06ec5cd81d127b739d221de6bd3ead55b50ade978633a7d6b7b329\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-mnpf6" Jul 7 06:11:40.454045 kubelet[2730]: E0707 06:11:40.453751 2730 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-mnpf6_calico-system(018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-mnpf6_calico-system(018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3f4c96d2b06ec5cd81d127b739d221de6bd3ead55b50ade978633a7d6b7b329\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-mnpf6" podUID="018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7" Jul 7 06:11:40.457806 containerd[1591]: time="2025-07-07T06:11:40.457668009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Jul 7 06:11:46.907279 kubelet[2730]: I0707 06:11:46.907203 2730 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jul 7 06:11:46.908254 kubelet[2730]: E0707 06:11:46.907703 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:47.469347 kubelet[2730]: E0707 06:11:47.469307 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:49.370012 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769575430.mount: Deactivated successfully. Jul 7 06:11:50.701871 containerd[1591]: time="2025-07-07T06:11:50.701797848Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:50.702982 containerd[1591]: time="2025-07-07T06:11:50.702947153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Jul 7 06:11:50.704936 containerd[1591]: time="2025-07-07T06:11:50.704862970Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:50.707520 containerd[1591]: time="2025-07-07T06:11:50.707462820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:11:50.708026 containerd[1591]: time="2025-07-07T06:11:50.707998869Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 10.24984621s" Jul 7 06:11:50.708065 containerd[1591]: time="2025-07-07T06:11:50.708031200Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Jul 7 06:11:50.721736 containerd[1591]: time="2025-07-07T06:11:50.721668141Z" level=info msg="CreateContainer within sandbox \"ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jul 7 06:11:50.748883 containerd[1591]: time="2025-07-07T06:11:50.748806303Z" level=info msg="Container d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:50.759760 containerd[1591]: time="2025-07-07T06:11:50.759691904Z" level=info msg="CreateContainer within sandbox \"ad5008858a83129334040535eb01f2d8c0cc83e9343a7c629a36d0069c66a1a1\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506\"" Jul 7 06:11:50.760262 containerd[1591]: 
time="2025-07-07T06:11:50.760230868Z" level=info msg="StartContainer for \"d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506\"" Jul 7 06:11:50.761774 containerd[1591]: time="2025-07-07T06:11:50.761741527Z" level=info msg="connecting to shim d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506" address="unix:///run/containerd/s/c204ccbc28e9467b0fb9d98495d687a45edadca2ab6faccab40ee0b770533d15" protocol=ttrpc version=3 Jul 7 06:11:50.812259 systemd[1]: Started cri-containerd-d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506.scope - libcontainer container d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506. Jul 7 06:11:50.872026 containerd[1591]: time="2025-07-07T06:11:50.871975331Z" level=info msg="StartContainer for \"d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506\" returns successfully" Jul 7 06:11:50.958721 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jul 7 06:11:50.959507 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jul 7 06:11:51.178316 kubelet[2730]: I0707 06:11:51.178220 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-whisker-backend-key-pair\") pod \"b9066ac6-a84d-4bfc-ba1c-386e6a24601f\" (UID: \"b9066ac6-a84d-4bfc-ba1c-386e6a24601f\") " Jul 7 06:11:51.178316 kubelet[2730]: I0707 06:11:51.178296 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-whisker-ca-bundle\") pod \"b9066ac6-a84d-4bfc-ba1c-386e6a24601f\" (UID: \"b9066ac6-a84d-4bfc-ba1c-386e6a24601f\") " Jul 7 06:11:51.178316 kubelet[2730]: I0707 06:11:51.178337 2730 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg785\" (UniqueName: \"kubernetes.io/projected/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-kube-api-access-mg785\") pod \"b9066ac6-a84d-4bfc-ba1c-386e6a24601f\" (UID: \"b9066ac6-a84d-4bfc-ba1c-386e6a24601f\") " Jul 7 06:11:51.178994 kubelet[2730]: I0707 06:11:51.178926 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "b9066ac6-a84d-4bfc-ba1c-386e6a24601f" (UID: "b9066ac6-a84d-4bfc-ba1c-386e6a24601f"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jul 7 06:11:51.182741 kubelet[2730]: I0707 06:11:51.182702 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-kube-api-access-mg785" (OuterVolumeSpecName: "kube-api-access-mg785") pod "b9066ac6-a84d-4bfc-ba1c-386e6a24601f" (UID: "b9066ac6-a84d-4bfc-ba1c-386e6a24601f"). InnerVolumeSpecName "kube-api-access-mg785". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jul 7 06:11:51.182837 kubelet[2730]: I0707 06:11:51.182811 2730 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "b9066ac6-a84d-4bfc-ba1c-386e6a24601f" (UID: "b9066ac6-a84d-4bfc-ba1c-386e6a24601f"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jul 7 06:11:51.279389 kubelet[2730]: I0707 06:11:51.279275 2730 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mg785\" (UniqueName: \"kubernetes.io/projected/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-kube-api-access-mg785\") on node \"localhost\" DevicePath \"\"" Jul 7 06:11:51.279389 kubelet[2730]: I0707 06:11:51.279310 2730 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-whisker-backend-key-pair\") on node \"localhost\" DevicePath \"\"" Jul 7 06:11:51.279389 kubelet[2730]: I0707 06:11:51.279319 2730 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b9066ac6-a84d-4bfc-ba1c-386e6a24601f-whisker-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jul 7 06:11:51.365852 kubelet[2730]: E0707 06:11:51.365773 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:51.366237 containerd[1591]: time="2025-07-07T06:11:51.366202456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9whx,Uid:83e8eed0-31ef-4362-a10b-04ce1bca5c07,Namespace:kube-system,Attempt:0,}" Jul 7 06:11:51.366425 containerd[1591]: time="2025-07-07T06:11:51.366349718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tcv2d,Uid:07c13200-9675-440b-9395-714f9a92d182,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:51.502600 systemd[1]: Removed slice kubepods-besteffort-podb9066ac6_a84d_4bfc_ba1c_386e6a24601f.slice - libcontainer container kubepods-besteffort-podb9066ac6_a84d_4bfc_ba1c_386e6a24601f.slice. 
Jul 7 06:11:51.562661 systemd-networkd[1508]: cali882bf728df0: Link UP Jul 7 06:11:51.562910 systemd-networkd[1508]: cali882bf728df0: Gained carrier Jul 7 06:11:51.577361 kubelet[2730]: I0707 06:11:51.577278 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zzhzb" podStartSLOduration=1.5993029380000001 podStartE2EDuration="23.577245798s" podCreationTimestamp="2025-07-07 06:11:28 +0000 UTC" firstStartedPulling="2025-07-07 06:11:28.731198962 +0000 UTC m=+18.468395199" lastFinishedPulling="2025-07-07 06:11:50.709141822 +0000 UTC m=+40.446338059" observedRunningTime="2025-07-07 06:11:51.559577969 +0000 UTC m=+41.296774216" watchObservedRunningTime="2025-07-07 06:11:51.577245798 +0000 UTC m=+41.314442036" Jul 7 06:11:51.586523 containerd[1591]: 2025-07-07 06:11:51.398 [INFO][3838] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:11:51.586523 containerd[1591]: 2025-07-07 06:11:51.418 [INFO][3838] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--z9whx-eth0 coredns-668d6bf9bc- kube-system 83e8eed0-31ef-4362-a10b-04ce1bca5c07 869 0 2025-07-07 06:11:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-z9whx eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali882bf728df0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9whx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9whx-" Jul 7 06:11:51.586523 containerd[1591]: 2025-07-07 06:11:51.418 [INFO][3838] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9whx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" Jul 7 06:11:51.586523 containerd[1591]: 2025-07-07 06:11:51.501 [INFO][3866] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" HandleID="k8s-pod-network.e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Workload="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.502 [INFO][3866] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" HandleID="k8s-pod-network.e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Workload="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00034f830), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-z9whx", "timestamp":"2025-07-07 06:11:51.50155777 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.502 [INFO][3866] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.502 [INFO][3866] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.503 [INFO][3866] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.515 [INFO][3866] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" host="localhost" Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.521 [INFO][3866] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.525 [INFO][3866] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.526 [INFO][3866] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.528 [INFO][3866] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:51.586792 containerd[1591]: 2025-07-07 06:11:51.529 [INFO][3866] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" host="localhost" Jul 7 06:11:51.587248 containerd[1591]: 2025-07-07 06:11:51.530 [INFO][3866] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a Jul 7 06:11:51.587248 containerd[1591]: 2025-07-07 06:11:51.536 [INFO][3866] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" host="localhost" Jul 7 06:11:51.587248 containerd[1591]: 2025-07-07 06:11:51.545 [INFO][3866] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" host="localhost" Jul 7 06:11:51.587248 containerd[1591]: 2025-07-07 06:11:51.545 [INFO][3866] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" host="localhost" Jul 7 06:11:51.587248 containerd[1591]: 2025-07-07 06:11:51.545 [INFO][3866] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:11:51.587248 containerd[1591]: 2025-07-07 06:11:51.545 [INFO][3866] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" HandleID="k8s-pod-network.e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Workload="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" Jul 7 06:11:51.587410 containerd[1591]: 2025-07-07 06:11:51.549 [INFO][3838] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9whx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z9whx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"83e8eed0-31ef-4362-a10b-04ce1bca5c07", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-z9whx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali882bf728df0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:51.587494 containerd[1591]: 2025-07-07 06:11:51.550 [INFO][3838] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9whx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" Jul 7 06:11:51.587494 containerd[1591]: 2025-07-07 06:11:51.550 [INFO][3838] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali882bf728df0 ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9whx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" Jul 7 06:11:51.587494 containerd[1591]: 2025-07-07 06:11:51.563 [INFO][3838] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9whx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" Jul 7 06:11:51.587584 containerd[1591]: 
2025-07-07 06:11:51.564 [INFO][3838] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9whx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z9whx-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"83e8eed0-31ef-4362-a10b-04ce1bca5c07", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a", Pod:"coredns-668d6bf9bc-z9whx", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali882bf728df0", MAC:"fe:bf:65:c6:66:f6", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:51.587584 containerd[1591]: 2025-07-07 06:11:51.580 [INFO][3838] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" Namespace="kube-system" Pod="coredns-668d6bf9bc-z9whx" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z9whx-eth0" Jul 7 06:11:51.646181 systemd[1]: Created slice kubepods-besteffort-pode5062fb3_ccb0_4bf6_b753_4997f3a0c4de.slice - libcontainer container kubepods-besteffort-pode5062fb3_ccb0_4bf6_b753_4997f3a0c4de.slice. Jul 7 06:11:51.695049 systemd-networkd[1508]: cali4c2a5b6e49f: Link UP Jul 7 06:11:51.695456 systemd-networkd[1508]: cali4c2a5b6e49f: Gained carrier Jul 7 06:11:51.720006 systemd[1]: var-lib-kubelet-pods-b9066ac6\x2da84d\x2d4bfc\x2dba1c\x2d386e6a24601f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmg785.mount: Deactivated successfully. Jul 7 06:11:51.720181 systemd[1]: var-lib-kubelet-pods-b9066ac6\x2da84d\x2d4bfc\x2dba1c\x2d386e6a24601f-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.397 [INFO][3844] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.419 [INFO][3844] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0 goldmane-768f4c5c69- calico-system 07c13200-9675-440b-9395-714f9a92d182 873 0 2025-07-07 06:11:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:768f4c5c69 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s localhost goldmane-768f4c5c69-tcv2d eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali4c2a5b6e49f [] [] }} ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Namespace="calico-system" Pod="goldmane-768f4c5c69-tcv2d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tcv2d-" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.419 [INFO][3844] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Namespace="calico-system" Pod="goldmane-768f4c5c69-tcv2d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.504 [INFO][3868] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" HandleID="k8s-pod-network.3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Workload="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.505 [INFO][3868] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" HandleID="k8s-pod-network.3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Workload="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024e150), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"goldmane-768f4c5c69-tcv2d", "timestamp":"2025-07-07 06:11:51.504469881 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.505 [INFO][3868] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.545 [INFO][3868] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.545 [INFO][3868] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.615 [INFO][3868] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" host="localhost" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.627 [INFO][3868] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.647 [INFO][3868] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.656 [INFO][3868] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.661 [INFO][3868] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.661 [INFO][3868] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" host="localhost" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.667 [INFO][3868] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57 Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.673 [INFO][3868] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" host="localhost" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.681 [INFO][3868] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" host="localhost" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.681 [INFO][3868] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" host="localhost" Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.681 [INFO][3868] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:11:51.722338 containerd[1591]: 2025-07-07 06:11:51.681 [INFO][3868] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" HandleID="k8s-pod-network.3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Workload="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" Jul 7 06:11:51.723338 containerd[1591]: 2025-07-07 06:11:51.686 [INFO][3844] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Namespace="calico-system" Pod="goldmane-768f4c5c69-tcv2d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"07c13200-9675-440b-9395-714f9a92d182", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"goldmane-768f4c5c69-tcv2d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c2a5b6e49f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:51.723338 containerd[1591]: 2025-07-07 06:11:51.686 [INFO][3844] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.130/32] ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Namespace="calico-system" Pod="goldmane-768f4c5c69-tcv2d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" Jul 7 06:11:51.723338 containerd[1591]: 2025-07-07 06:11:51.686 [INFO][3844] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4c2a5b6e49f ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Namespace="calico-system" Pod="goldmane-768f4c5c69-tcv2d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" Jul 7 06:11:51.723338 containerd[1591]: 2025-07-07 06:11:51.694 [INFO][3844] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Namespace="calico-system" Pod="goldmane-768f4c5c69-tcv2d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" Jul 7 06:11:51.723338 containerd[1591]: 2025-07-07 06:11:51.696 [INFO][3844] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Namespace="calico-system" Pod="goldmane-768f4c5c69-tcv2d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0", GenerateName:"goldmane-768f4c5c69-", Namespace:"calico-system", SelfLink:"", UID:"07c13200-9675-440b-9395-714f9a92d182", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"768f4c5c69", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57", Pod:"goldmane-768f4c5c69-tcv2d", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali4c2a5b6e49f", MAC:"7e:18:6b:9b:66:84", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:51.723338 containerd[1591]: 2025-07-07 06:11:51.713 [INFO][3844] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" Namespace="calico-system" Pod="goldmane-768f4c5c69-tcv2d" WorkloadEndpoint="localhost-k8s-goldmane--768f4c5c69--tcv2d-eth0" Jul 7 06:11:51.730528 containerd[1591]: time="2025-07-07T06:11:51.730437965Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506\" id:\"b8381c1b063520ede2b2b53418f8b27fef1fd1851c662e2b0e16c61e3678347b\" pid:3898 exit_status:1 exited_at:{seconds:1751868711 nanos:730046925}" Jul 7 06:11:51.743682 containerd[1591]: time="2025-07-07T06:11:51.743622952Z" level=info msg="connecting to shim e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a" address="unix:///run/containerd/s/12ec3070bb455cc8a56b420582b166247ccb976179e279a51854fed8e6fcd1a5" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:51.756578 containerd[1591]: time="2025-07-07T06:11:51.756421349Z" level=info msg="connecting to shim 3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57" address="unix:///run/containerd/s/32d1e3f184b16f1c829031967347cf21010a9e06821d17ebd218a1c3d9c66285" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:51.777415 systemd[1]: Started cri-containerd-e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a.scope - libcontainer container e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a. 
Jul 7 06:11:51.784013 kubelet[2730]: I0707 06:11:51.783917 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e5062fb3-ccb0-4bf6-b753-4997f3a0c4de-whisker-ca-bundle\") pod \"whisker-7b9b87c9b-vf8dt\" (UID: \"e5062fb3-ccb0-4bf6-b753-4997f3a0c4de\") " pod="calico-system/whisker-7b9b87c9b-vf8dt" Jul 7 06:11:51.784013 kubelet[2730]: I0707 06:11:51.783955 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e5062fb3-ccb0-4bf6-b753-4997f3a0c4de-whisker-backend-key-pair\") pod \"whisker-7b9b87c9b-vf8dt\" (UID: \"e5062fb3-ccb0-4bf6-b753-4997f3a0c4de\") " pod="calico-system/whisker-7b9b87c9b-vf8dt" Jul 7 06:11:51.784291 kubelet[2730]: I0707 06:11:51.783980 2730 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z4qk\" (UniqueName: \"kubernetes.io/projected/e5062fb3-ccb0-4bf6-b753-4997f3a0c4de-kube-api-access-7z4qk\") pod \"whisker-7b9b87c9b-vf8dt\" (UID: \"e5062fb3-ccb0-4bf6-b753-4997f3a0c4de\") " pod="calico-system/whisker-7b9b87c9b-vf8dt" Jul 7 06:11:51.801247 systemd[1]: Started cri-containerd-3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57.scope - libcontainer container 3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57. Jul 7 06:11:51.806896 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:51.821597 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:51.914089 containerd[1591]: time="2025-07-07T06:11:51.914028096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z9whx,Uid:83e8eed0-31ef-4362-a10b-04ce1bca5c07,Namespace:kube-system,Attempt:0,} returns sandbox id \"e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a\"" Jul 7 06:11:51.939666 containerd[1591]: time="2025-07-07T06:11:51.939615541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-tcv2d,Uid:07c13200-9675-440b-9395-714f9a92d182,Namespace:calico-system,Attempt:0,} returns sandbox id \"3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57\"" Jul 7 06:11:51.941123 containerd[1591]: time="2025-07-07T06:11:51.941013329Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\"" Jul 7 06:11:51.955091 containerd[1591]: time="2025-07-07T06:11:51.955029490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9b87c9b-vf8dt,Uid:e5062fb3-ccb0-4bf6-b753-4997f3a0c4de,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:51.958500 kubelet[2730]: E0707 06:11:51.958471 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:51.959979 containerd[1591]: time="2025-07-07T06:11:51.959948798Z" level=info msg="CreateContainer within sandbox \"e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:11:52.173936 containerd[1591]: time="2025-07-07T06:11:52.173870732Z" level=info msg="Container 147e216186d2a97d27cd38d072a84d596e3932e4a84539153fd15147bdd0ab12: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:52.182958 containerd[1591]: 
time="2025-07-07T06:11:52.182884513Z" level=info msg="CreateContainer within sandbox \"e719a8a3f4304479ce24dc295221150dbc2dec2f6db7fb5b0f49d0016c90fc1a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"147e216186d2a97d27cd38d072a84d596e3932e4a84539153fd15147bdd0ab12\"" Jul 7 06:11:52.183815 containerd[1591]: time="2025-07-07T06:11:52.183766992Z" level=info msg="StartContainer for \"147e216186d2a97d27cd38d072a84d596e3932e4a84539153fd15147bdd0ab12\"" Jul 7 06:11:52.185344 containerd[1591]: time="2025-07-07T06:11:52.185309454Z" level=info msg="connecting to shim 147e216186d2a97d27cd38d072a84d596e3932e4a84539153fd15147bdd0ab12" address="unix:///run/containerd/s/12ec3070bb455cc8a56b420582b166247ccb976179e279a51854fed8e6fcd1a5" protocol=ttrpc version=3 Jul 7 06:11:52.211432 systemd[1]: Started cri-containerd-147e216186d2a97d27cd38d072a84d596e3932e4a84539153fd15147bdd0ab12.scope - libcontainer container 147e216186d2a97d27cd38d072a84d596e3932e4a84539153fd15147bdd0ab12. Jul 7 06:11:52.352311 containerd[1591]: time="2025-07-07T06:11:52.352250130Z" level=info msg="StartContainer for \"147e216186d2a97d27cd38d072a84d596e3932e4a84539153fd15147bdd0ab12\" returns successfully" Jul 7 06:11:52.368585 kubelet[2730]: E0707 06:11:52.368534 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:52.372608 containerd[1591]: time="2025-07-07T06:11:52.371192072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c85fc6b4c-lmt2g,Uid:a5aaecf1-178e-4a17-aa11-1d10ccb44ba4,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:11:52.372608 containerd[1591]: time="2025-07-07T06:11:52.371732697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnpf6,Uid:018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:52.372608 containerd[1591]: time="2025-07-07T06:11:52.371851403Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hklhb,Uid:9c16e4b5-2630-4cbc-b777-1a667284a980,Namespace:kube-system,Attempt:0,}" Jul 7 06:11:52.372289 systemd-networkd[1508]: calic9923db7c9b: Link UP Jul 7 06:11:52.377086 kubelet[2730]: I0707 06:11:52.375363 2730 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9066ac6-a84d-4bfc-ba1c-386e6a24601f" path="/var/lib/kubelet/pods/b9066ac6-a84d-4bfc-ba1c-386e6a24601f/volumes" Jul 7 06:11:52.377007 systemd-networkd[1508]: calic9923db7c9b: Gained carrier Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.190 [INFO][4015] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.203 [INFO][4015] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0 whisker-7b9b87c9b- calico-system e5062fb3-ccb0-4bf6-b753-4997f3a0c4de 958 0 2025-07-07 06:11:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7b9b87c9b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s localhost whisker-7b9b87c9b-vf8dt eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calic9923db7c9b [] [] }} ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Namespace="calico-system" Pod="whisker-7b9b87c9b-vf8dt" 
WorkloadEndpoint="localhost-k8s-whisker--7b9b87c9b--vf8dt-" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.203 [INFO][4015] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Namespace="calico-system" Pod="whisker-7b9b87c9b-vf8dt" WorkloadEndpoint="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.237 [INFO][4041] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" HandleID="k8s-pod-network.f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Workload="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.237 [INFO][4041] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" HandleID="k8s-pod-network.f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Workload="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004e770), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"whisker-7b9b87c9b-vf8dt", "timestamp":"2025-07-07 06:11:52.237088871 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.237 [INFO][4041] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.237 [INFO][4041] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.237 [INFO][4041] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.247 [INFO][4041] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" host="localhost" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.256 [INFO][4041] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.261 [INFO][4041] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.263 [INFO][4041] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.266 [INFO][4041] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.266 [INFO][4041] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" host="localhost" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.268 [INFO][4041] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.299 [INFO][4041] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" host="localhost" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.351 [INFO][4041] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" host="localhost" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.351 [INFO][4041] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" host="localhost" Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.351 [INFO][4041] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:11:52.448114 containerd[1591]: 2025-07-07 06:11:52.352 [INFO][4041] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" HandleID="k8s-pod-network.f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Workload="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" Jul 7 06:11:52.448965 containerd[1591]: 2025-07-07 06:11:52.366 [INFO][4015] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Namespace="calico-system" Pod="whisker-7b9b87c9b-vf8dt" WorkloadEndpoint="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0", GenerateName:"whisker-7b9b87c9b-", Namespace:"calico-system", SelfLink:"", UID:"e5062fb3-ccb0-4bf6-b753-4997f3a0c4de", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b9b87c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"whisker-7b9b87c9b-vf8dt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic9923db7c9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:52.448965 containerd[1591]: 2025-07-07 06:11:52.366 [INFO][4015] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.131/32] ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Namespace="calico-system" Pod="whisker-7b9b87c9b-vf8dt" WorkloadEndpoint="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" Jul 7 06:11:52.448965 containerd[1591]: 2025-07-07 06:11:52.366 [INFO][4015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic9923db7c9b ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Namespace="calico-system" Pod="whisker-7b9b87c9b-vf8dt" WorkloadEndpoint="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" Jul 7 06:11:52.448965 containerd[1591]: 2025-07-07 06:11:52.376 [INFO][4015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Namespace="calico-system" Pod="whisker-7b9b87c9b-vf8dt" WorkloadEndpoint="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" Jul 7 06:11:52.448965 containerd[1591]: 2025-07-07 06:11:52.376 [INFO][4015] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Namespace="calico-system" Pod="whisker-7b9b87c9b-vf8dt" WorkloadEndpoint="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0", GenerateName:"whisker-7b9b87c9b-", Namespace:"calico-system", SelfLink:"", UID:"e5062fb3-ccb0-4bf6-b753-4997f3a0c4de", ResourceVersion:"958", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7b9b87c9b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc", Pod:"whisker-7b9b87c9b-vf8dt", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calic9923db7c9b", MAC:"3a:c2:44:1e:3e:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:52.448965 containerd[1591]: 2025-07-07 06:11:52.434 [INFO][4015] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" Namespace="calico-system" Pod="whisker-7b9b87c9b-vf8dt" WorkloadEndpoint="localhost-k8s-whisker--7b9b87c9b--vf8dt-eth0" Jul 7 06:11:52.511979 kubelet[2730]: E0707 06:11:52.511931 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:52.545151 kubelet[2730]: I0707 06:11:52.543169 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z9whx" podStartSLOduration=37.543151161 podStartE2EDuration="37.543151161s" podCreationTimestamp="2025-07-07 06:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:11:52.542462524 +0000 UTC m=+42.279658761" watchObservedRunningTime="2025-07-07 06:11:52.543151161 +0000 UTC m=+42.280347398" Jul 7 06:11:52.581142 containerd[1591]: time="2025-07-07T06:11:52.579205964Z" level=info msg="connecting to shim f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc" address="unix:///run/containerd/s/a8b54d0d093bb24bf51544b09a3e1789b566ca5d68cf28d72bb8e9a813d604b8" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:52.718885 systemd[1]: Started cri-containerd-f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc.scope - libcontainer container f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc. 
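The entries above trace Calico's per-pod IPAM cycle for the whisker pod: acquire the host-wide lock, confirm this host's affinity for block 192.168.88.128/26, claim 192.168.88.131 from it, write the block back, release the lock. As a quick sanity check on the arithmetic, a minimal Go sketch (standard library only, not Calico code) confirms the claimed address sits inside the affine /26 and that such a block holds 64 addresses:

```go
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block with confirmed host affinity, and the address Calico claimed
	// from it, as reported in the log entries above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := netip.MustParseAddr("192.168.88.131")

	fmt.Println(block.Contains(claimed)) // true

	// A /26 spans 2^(32-26) = 64 addresses; count them by walking from
	// the base address until we step outside the prefix.
	n := 0
	for a := block.Addr(); block.Contains(a); a = a.Next() {
		n++
	}
	fmt.Println(n) // 64
}
```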
Jul 7 06:11:52.788447 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:52.842546 containerd[1591]: time="2025-07-07T06:11:52.842490666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7b9b87c9b-vf8dt,Uid:e5062fb3-ccb0-4bf6-b753-4997f3a0c4de,Namespace:calico-system,Attempt:0,} returns sandbox id \"f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc\"" Jul 7 06:11:52.871264 systemd-networkd[1508]: cali7c32b3b6340: Link UP Jul 7 06:11:52.873497 systemd-networkd[1508]: cali7c32b3b6340: Gained carrier Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.618 [INFO][4188] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.659 [INFO][4188] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--hklhb-eth0 coredns-668d6bf9bc- kube-system 9c16e4b5-2630-4cbc-b777-1a667284a980 872 0 2025-07-07 06:11:15 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-hklhb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7c32b3b6340 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Namespace="kube-system" Pod="coredns-668d6bf9bc-hklhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hklhb-" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.659 [INFO][4188] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Namespace="kube-system" Pod="coredns-668d6bf9bc-hklhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.764 [INFO][4278] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" HandleID="k8s-pod-network.b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Workload="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.764 [INFO][4278] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" HandleID="k8s-pod-network.b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Workload="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f5f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-hklhb", "timestamp":"2025-07-07 06:11:52.764376498 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.764 [INFO][4278] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.765 [INFO][4278] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.765 [INFO][4278] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.776 [INFO][4278] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" host="localhost" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.784 [INFO][4278] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.794 [INFO][4278] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.798 [INFO][4278] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.802 [INFO][4278] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.804 [INFO][4278] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" host="localhost" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.806 [INFO][4278] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410 Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.841 [INFO][4278] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" host="localhost" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.853 [INFO][4278] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" host="localhost" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.853 [INFO][4278] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" host="localhost" Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.853 [INFO][4278] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
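The coredns request [4278] walks the same path and comes out with 192.168.88.132/26, the next ordinal after the whisker pod's .131; .128–.130 were presumably consumed earlier in the block. A toy first-free-bit allocator reproduces the sequence. This is only an illustrative model: the real block also records handles and attributes and is persisted via the "Writing block in order to claim IPs" step logged above.

```go
package main

import (
	"fmt"
	"net/netip"
)

// toyBlock is a minimal stand-in for an IPAM block: a base prefix plus a
// bitmap of allocated ordinals.
type toyBlock struct {
	prefix netip.Prefix
	used   [64]bool // one bit per address in a /26
}

// claim returns the first unallocated address in the block.
func (b *toyBlock) claim() (netip.Addr, bool) {
	a := b.prefix.Addr()
	for i := 0; i < len(b.used); i, a = i+1, a.Next() {
		if !b.used[i] {
			b.used[i] = true
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &toyBlock{prefix: netip.MustParsePrefix("192.168.88.128/26")}
	// Pretend .128-.130 were taken by earlier workloads; the next claims
	// then come out as .131 and .132, matching the sequence in the log.
	for i := 0; i < 3; i++ {
		b.claim()
	}
	for i := 0; i < 2; i++ {
		addr, _ := b.claim()
		fmt.Println(addr)
	}
}
```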
Jul 7 06:11:52.897195 containerd[1591]: 2025-07-07 06:11:52.853 [INFO][4278] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" HandleID="k8s-pod-network.b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Workload="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" Jul 7 06:11:52.898059 containerd[1591]: 2025-07-07 06:11:52.864 [INFO][4188] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Namespace="kube-system" Pod="coredns-668d6bf9bc-hklhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hklhb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c16e4b5-2630-4cbc-b777-1a667284a980", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-hklhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c32b3b6340", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:52.898059 containerd[1591]: 2025-07-07 06:11:52.865 [INFO][4188] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.132/32] ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Namespace="kube-system" Pod="coredns-668d6bf9bc-hklhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" Jul 7 06:11:52.898059 containerd[1591]: 2025-07-07 06:11:52.865 [INFO][4188] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7c32b3b6340 ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Namespace="kube-system" Pod="coredns-668d6bf9bc-hklhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" Jul 7 06:11:52.898059 containerd[1591]: 2025-07-07 06:11:52.875 [INFO][4188] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Namespace="kube-system" Pod="coredns-668d6bf9bc-hklhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" Jul 7 06:11:52.898059 containerd[1591]: 
2025-07-07 06:11:52.875 [INFO][4188] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Namespace="kube-system" Pod="coredns-668d6bf9bc-hklhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--hklhb-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"9c16e4b5-2630-4cbc-b777-1a667284a980", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410", Pod:"coredns-668d6bf9bc-hklhb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7c32b3b6340", MAC:"f6:dc:57:a4:75:f5", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:52.898059 containerd[1591]: 2025-07-07 06:11:52.891 [INFO][4188] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" Namespace="kube-system" Pod="coredns-668d6bf9bc-hklhb" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--hklhb-eth0" Jul 7 06:11:52.903430 systemd-networkd[1508]: cali882bf728df0: Gained IPv6LL Jul 7 06:11:52.909672 containerd[1591]: time="2025-07-07T06:11:52.909621532Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506\" id:\"8d3780280393b7ee5534666746e7bb580968ba0c47b2b8142f498b9905aa9401\" pid:4258 exit_status:1 exited_at:{seconds:1751868712 nanos:909282654}" Jul 7 06:11:52.931560 systemd-networkd[1508]: cali6dfd6a8e3f5: Link UP Jul 7 06:11:52.931889 systemd-networkd[1508]: cali6dfd6a8e3f5: Gained carrier Jul 7 06:11:52.940215 containerd[1591]: time="2025-07-07T06:11:52.940135501Z" level=info msg="connecting to shim b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410" address="unix:///run/containerd/s/cbdaacb9a812662103b5d7242703bfe9f12a25b940b23d1a6d4eda23116677ec" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.594 [INFO][4176] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 
06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.622 [INFO][4176] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--mnpf6-eth0 csi-node-driver- calico-system 018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7 757 0 2025-07-07 06:11:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-mnpf6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali6dfd6a8e3f5 [] [] }} ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Namespace="calico-system" Pod="csi-node-driver-mnpf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnpf6-" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.622 [INFO][4176] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Namespace="calico-system" Pod="csi-node-driver-mnpf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnpf6-eth0" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.763 [INFO][4266] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" HandleID="k8s-pod-network.6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Workload="localhost-k8s-csi--node--driver--mnpf6-eth0" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.766 [INFO][4266] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" HandleID="k8s-pod-network.6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Workload="localhost-k8s-csi--node--driver--mnpf6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000364120), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-mnpf6", "timestamp":"2025-07-07 06:11:52.762993953 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.766 [INFO][4266] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.853 [INFO][4266] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.853 [INFO][4266] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.879 [INFO][4266] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" host="localhost" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.892 [INFO][4266] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.900 [INFO][4266] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.902 [INFO][4266] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.906 [INFO][4266] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.906 [INFO][4266] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" host="localhost" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.908 [INFO][4266] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08 Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.913 [INFO][4266] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" host="localhost" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.920 [INFO][4266] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" host="localhost" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.920 [INFO][4266] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" host="localhost" Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.920 [INFO][4266] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
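Note the interleaving: the csi-node-driver request [4266] logs "About to acquire host-wide IPAM lock" at 06:11:52.766 but "Acquired" only at 06:11:52.853, the same instant [4278] logs its release. Concurrent CNI ADDs on one host are serialized behind a single host-wide lock; a mutex is the obvious model for that behavior. The sketch below is illustrative, not Calico's implementation:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	// Two concurrent "CNI ADD" requests contending for one host-wide
	// lock, as requests [4278] and [4266] do in the entries above.
	var hostWide sync.Mutex
	var wg sync.WaitGroup
	for _, id := range []string{"4278", "4266"} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			fmt.Println(id, "about to acquire host-wide IPAM lock")
			hostWide.Lock()
			fmt.Println(id, "acquired host-wide IPAM lock")
			time.Sleep(50 * time.Millisecond) // assign an IP, write the block
			hostWide.Unlock()
			fmt.Println(id, "released host-wide IPAM lock")
		}(id)
	}
	wg.Wait()
}
```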
Jul 7 06:11:52.962214 containerd[1591]: 2025-07-07 06:11:52.920 [INFO][4266] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" HandleID="k8s-pod-network.6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Workload="localhost-k8s-csi--node--driver--mnpf6-eth0" Jul 7 06:11:52.962983 containerd[1591]: 2025-07-07 06:11:52.926 [INFO][4176] cni-plugin/k8s.go 418: Populated endpoint ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Namespace="calico-system" Pod="csi-node-driver-mnpf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnpf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mnpf6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-mnpf6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6dfd6a8e3f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:52.962983 containerd[1591]: 2025-07-07 06:11:52.927 [INFO][4176] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.133/32] ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Namespace="calico-system" Pod="csi-node-driver-mnpf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnpf6-eth0" Jul 7 06:11:52.962983 containerd[1591]: 2025-07-07 06:11:52.927 [INFO][4176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6dfd6a8e3f5 ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Namespace="calico-system" Pod="csi-node-driver-mnpf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnpf6-eth0" Jul 7 06:11:52.962983 containerd[1591]: 2025-07-07 06:11:52.935 [INFO][4176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Namespace="calico-system" Pod="csi-node-driver-mnpf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnpf6-eth0" Jul 7 06:11:52.962983 containerd[1591]: 2025-07-07 06:11:52.938 [INFO][4176] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Namespace="calico-system" Pod="csi-node-driver-mnpf6" 
WorkloadEndpoint="localhost-k8s-csi--node--driver--mnpf6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--mnpf6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08", Pod:"csi-node-driver-mnpf6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali6dfd6a8e3f5", MAC:"f6:25:8b:af:0d:d3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:52.962983 containerd[1591]: 2025-07-07 06:11:52.955 [INFO][4176] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" Namespace="calico-system" Pod="csi-node-driver-mnpf6" WorkloadEndpoint="localhost-k8s-csi--node--driver--mnpf6-eth0" Jul 7 06:11:52.975566 systemd[1]: Started cri-containerd-b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410.scope - libcontainer container b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410. 
Jul 7 06:11:53.000368 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:53.198211 systemd-networkd[1508]: vxlan.calico: Link UP Jul 7 06:11:53.198223 systemd-networkd[1508]: vxlan.calico: Gained carrier Jul 7 06:11:53.237118 containerd[1591]: time="2025-07-07T06:11:53.237046350Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-hklhb,Uid:9c16e4b5-2630-4cbc-b777-1a667284a980,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410\"" Jul 7 06:11:53.237981 kubelet[2730]: E0707 06:11:53.237947 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:53.239781 containerd[1591]: time="2025-07-07T06:11:53.239748344Z" level=info msg="CreateContainer within sandbox \"b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 7 06:11:53.254297 systemd-networkd[1508]: cali0683c0a9345: Link UP Jul 7 06:11:53.255233 systemd-networkd[1508]: cali0683c0a9345: Gained carrier Jul 7 06:11:53.300158 containerd[1591]: time="2025-07-07T06:11:53.300056179Z" level=info msg="connecting to shim 6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08" address="unix:///run/containerd/s/5dd090ca248103db187e337726371e71255ffa000c7718d469248ac35a09df16" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:53.342401 systemd[1]: Started cri-containerd-6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08.scope - libcontainer container 6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08. 
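The recurring kubelet error "Nameserver limits exceeded" records resolv.conf truncation: the C resolver conventionally honors at most three nameservers, so kubelet applies the first three (here 1.1.1.1 1.0.0.1 8.8.8.8) and warns that the rest were dropped. A sketch of that truncation, using a made-up resolv.conf for illustration:

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// Hypothetical resolv.conf with one nameserver too many; the node's
	// actual file is not shown in the log.
	resolvConf := `nameserver 1.1.1.1
nameserver 1.0.0.1
nameserver 8.8.8.8
nameserver 8.8.4.4`

	var ns []string
	for _, l := range strings.Split(resolvConf, "\n") {
		if f := strings.Fields(l); len(f) == 2 && f[0] == "nameserver" {
			ns = append(ns, f[1])
		}
	}
	if len(ns) > 3 {
		ns = ns[:3] // keep the first three, as the warning describes
	}
	fmt.Println("applied nameserver line:", strings.Join(ns, " "))
}
```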
Jul 7 06:11:53.362131 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:53.415317 systemd-networkd[1508]: cali4c2a5b6e49f: Gained IPv6LL Jul 7 06:11:53.519213 kubelet[2730]: E0707 06:11:53.519162 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.629 [INFO][4169] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.658 [INFO][4169] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0 calico-apiserver-c85fc6b4c- calico-apiserver a5aaecf1-178e-4a17-aa11-1d10ccb44ba4 863 0 2025-07-07 06:11:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c85fc6b4c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c85fc6b4c-lmt2g eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali0683c0a9345 [] [] }} ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-lmt2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.659 [INFO][4169] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-lmt2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.781 [INFO][4280] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" HandleID="k8s-pod-network.372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Workload="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.781 [INFO][4280] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" HandleID="k8s-pod-network.372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Workload="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0006840d0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c85fc6b4c-lmt2g", "timestamp":"2025-07-07 06:11:52.781340535 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.781 [INFO][4280] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.921 [INFO][4280] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.921 [INFO][4280] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.981 [INFO][4280] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" host="localhost" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:52.990 [INFO][4280] ipam/ipam.go 394: Looking up existing affinities for host host="localhost" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.001 [INFO][4280] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.004 [INFO][4280] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.007 [INFO][4280] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.007 [INFO][4280] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" host="localhost" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.009 [INFO][4280] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8 Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.032 [INFO][4280] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" host="localhost" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.245 [INFO][4280] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" host="localhost" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.246 [INFO][4280] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" host="localhost" Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.246 [INFO][4280] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jul 7 06:11:53.561553 containerd[1591]: 2025-07-07 06:11:53.246 [INFO][4280] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" HandleID="k8s-pod-network.372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Workload="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" Jul 7 06:11:53.562184 containerd[1591]: 2025-07-07 06:11:53.250 [INFO][4169] cni-plugin/k8s.go 418: Populated endpoint ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-lmt2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0", GenerateName:"calico-apiserver-c85fc6b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5aaecf1-178e-4a17-aa11-1d10ccb44ba4", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c85fc6b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c85fc6b4c-lmt2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0683c0a9345", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:53.562184 containerd[1591]: 2025-07-07 06:11:53.250 [INFO][4169] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.134/32] ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-lmt2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" Jul 7 06:11:53.562184 containerd[1591]: 2025-07-07 06:11:53.250 [INFO][4169] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali0683c0a9345 ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-lmt2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" Jul 7 06:11:53.562184 containerd[1591]: 2025-07-07 06:11:53.255 [INFO][4169] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-lmt2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" Jul 7 06:11:53.562184 containerd[1591]: 2025-07-07 06:11:53.255 [INFO][4169] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-lmt2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0", GenerateName:"calico-apiserver-c85fc6b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"a5aaecf1-178e-4a17-aa11-1d10ccb44ba4", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c85fc6b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8", Pod:"calico-apiserver-c85fc6b4c-lmt2g", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali0683c0a9345", MAC:"4a:52:85:4f:4c:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jul 7 06:11:53.562184 containerd[1591]: 2025-07-07 06:11:53.555 [INFO][4169] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-lmt2g" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--lmt2g-eth0" Jul 7 06:11:53.821236 containerd[1591]: time="2025-07-07T06:11:53.821042784Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-mnpf6,Uid:018e5b2e-15b1-47a4-aa58-1ffe99e5a2b7,Namespace:calico-system,Attempt:0,} returns sandbox id \"6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08\"" Jul 7 06:11:53.844877 containerd[1591]: time="2025-07-07T06:11:53.844798742Z" level=info msg="Container cc8d2b81e9ffa2ab8ee0051c810a36f891f1f6d624145a9119669be2634449c0: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:11:53.850882 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3757896266.mount: Deactivated successfully. 
Jul 7 06:11:53.864812 containerd[1591]: time="2025-07-07T06:11:53.864752241Z" level=info msg="CreateContainer within sandbox \"b5b1727d983339d96cee8342cabd008f651192c8a82bce9bc76cd90f7ee5c410\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"cc8d2b81e9ffa2ab8ee0051c810a36f891f1f6d624145a9119669be2634449c0\"" Jul 7 06:11:53.866152 containerd[1591]: time="2025-07-07T06:11:53.865941454Z" level=info msg="StartContainer for \"cc8d2b81e9ffa2ab8ee0051c810a36f891f1f6d624145a9119669be2634449c0\"" Jul 7 06:11:53.867836 containerd[1591]: time="2025-07-07T06:11:53.867746676Z" level=info msg="connecting to shim cc8d2b81e9ffa2ab8ee0051c810a36f891f1f6d624145a9119669be2634449c0" address="unix:///run/containerd/s/cbdaacb9a812662103b5d7242703bfe9f12a25b940b23d1a6d4eda23116677ec" protocol=ttrpc version=3 Jul 7 06:11:53.912581 containerd[1591]: time="2025-07-07T06:11:53.912490289Z" level=info msg="connecting to shim 372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8" address="unix:///run/containerd/s/ebf391fdcaab93f7618bcb02935cdb288536c0fa8428f1ab4b9393c4aff5c876" namespace=k8s.io protocol=ttrpc version=3 Jul 7 06:11:53.913511 systemd[1]: Started cri-containerd-cc8d2b81e9ffa2ab8ee0051c810a36f891f1f6d624145a9119669be2634449c0.scope - libcontainer container cc8d2b81e9ffa2ab8ee0051c810a36f891f1f6d624145a9119669be2634449c0. Jul 7 06:11:53.950460 systemd[1]: Started cri-containerd-372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8.scope - libcontainer container 372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8. Jul 7 06:11:53.967581 containerd[1591]: time="2025-07-07T06:11:53.967519333Z" level=info msg="StartContainer for \"cc8d2b81e9ffa2ab8ee0051c810a36f891f1f6d624145a9119669be2634449c0\" returns successfully" Jul 7 06:11:53.976204 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jul 7 06:11:54.021509 containerd[1591]: time="2025-07-07T06:11:54.021451735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c85fc6b4c-lmt2g,Uid:a5aaecf1-178e-4a17-aa11-1d10ccb44ba4,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8\"" Jul 7 06:11:54.216043 systemd[1]: Started sshd@9-10.0.0.94:22-10.0.0.1:34436.service - OpenSSH per-connection server daemon (10.0.0.1:34436). 
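Each "connecting to shim" entry carries a unix:// address under /run/containerd/s/ over which containerd speaks ttrpc to the per-sandbox shim. Without pulling in a ttrpc client, a plain unix-socket dial is enough to check that such a socket is alive; the path below is the one reported in the log and would need adjusting on another host:

```go
package main

import (
	"fmt"
	"net"
	"strings"
	"time"
)

func main() {
	// Shim address copied from the "connecting to shim" entry above.
	addr := "unix:///run/containerd/s/cbdaacb9a812662103b5d7242703bfe9f12a25b940b23d1a6d4eda23116677ec"
	path := strings.TrimPrefix(addr, "unix://")

	c, err := net.DialTimeout("unix", path, 2*time.Second)
	if err != nil {
		fmt.Println("shim socket not reachable:", err)
		return
	}
	defer c.Close()
	fmt.Println("shim socket open:", path)
}
```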
Jul 7 06:11:54.375440 systemd-networkd[1508]: calic9923db7c9b: Gained IPv6LL Jul 7 06:11:54.521721 kubelet[2730]: E0707 06:11:54.521645 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:54.523953 kubelet[2730]: E0707 06:11:54.523911 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:54.623123 kubelet[2730]: I0707 06:11:54.622997 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-hklhb" podStartSLOduration=39.62297794 podStartE2EDuration="39.62297794s" podCreationTimestamp="2025-07-07 06:11:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-07-07 06:11:54.622813125 +0000 UTC m=+44.360009392" watchObservedRunningTime="2025-07-07 06:11:54.62297794 +0000 UTC m=+44.360174177" Jul 7 06:11:54.625122 sshd[4641]: Accepted publickey for core from 10.0.0.1 port 34436 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:11:54.628170 sshd-session[4641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:11:54.631318 systemd-networkd[1508]: cali6dfd6a8e3f5: Gained IPv6LL Jul 7 06:11:54.649729 systemd-logind[1529]: New session 10 of user core. Jul 7 06:11:54.660264 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 7 06:11:54.887381 systemd-networkd[1508]: cali7c32b3b6340: Gained IPv6LL Jul 7 06:11:55.016480 systemd-networkd[1508]: vxlan.calico: Gained IPv6LL Jul 7 06:11:55.198204 sshd[4645]: Connection closed by 10.0.0.1 port 34436 Jul 7 06:11:55.198505 sshd-session[4641]: pam_unix(sshd:session): session closed for user core Jul 7 06:11:55.203732 systemd[1]: sshd@9-10.0.0.94:22-10.0.0.1:34436.service: Deactivated successfully. Jul 7 06:11:55.206167 systemd[1]: session-10.scope: Deactivated successfully. Jul 7 06:11:55.208348 systemd-networkd[1508]: cali0683c0a9345: Gained IPv6LL Jul 7 06:11:55.208390 systemd-logind[1529]: Session 10 logged out. Waiting for processes to exit. Jul 7 06:11:55.210502 systemd-logind[1529]: Removed session 10. 
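The pod_startup_latency_tracker line for coredns-668d6bf9bc-hklhb reports podStartSLOduration=39.62s with zero-valued pull timestamps, consistent with the duration being observedRunningTime minus podCreationTimestamp when no image pull was needed. Recomputing it from the logged wall-clock values (monotonic "m=" suffixes stripped) lands within a millisecond of the reported figure:

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps as printed by kubelet above, without the monotonic part.
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, _ := time.Parse(layout, "2025-07-07 06:11:15 +0000 UTC")
	running, _ := time.Parse(layout, "2025-07-07 06:11:54.622813125 +0000 UTC")

	fmt.Println(running.Sub(created)) // 39.622813125s ≈ the reported 39.62297794s
}
```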
Jul 7 06:11:55.366063 containerd[1591]: time="2025-07-07T06:11:55.366006768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c85fc6b4c-cwc8p,Uid:f8df9f4b-117d-4ed3-9a45-083bbe24b183,Namespace:calico-apiserver,Attempt:0,}" Jul 7 06:11:55.370473 containerd[1591]: time="2025-07-07T06:11:55.370324071Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5676987769-nmdmf,Uid:88ceba10-8017-4d8e-b438-c29c09deb831,Namespace:calico-system,Attempt:0,}" Jul 7 06:11:55.527155 kubelet[2730]: E0707 06:11:55.526224 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:11:55.592979 systemd-networkd[1508]: calib9f7f9851a5: Link UP Jul 7 06:11:55.595337 systemd-networkd[1508]: calib9f7f9851a5: Gained carrier Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.481 [INFO][4668] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0 calico-apiserver-c85fc6b4c- calico-apiserver f8df9f4b-117d-4ed3-9a45-083bbe24b183 871 0 2025-07-07 06:11:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:c85fc6b4c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-c85fc6b4c-cwc8p eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calib9f7f9851a5 [] [] }} ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-cwc8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-" Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.481 [INFO][4668] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-cwc8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0" Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.536 [INFO][4699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" HandleID="k8s-pod-network.50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Workload="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0" Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.537 [INFO][4699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" HandleID="k8s-pod-network.50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Workload="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0001a4e70), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-c85fc6b4c-cwc8p", "timestamp":"2025-07-07 06:11:55.536753701 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.537 [INFO][4699] ipam/ipam_plugin.go 
353: About to acquire host-wide IPAM lock.
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.537 [INFO][4699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.537 [INFO][4699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.548 [INFO][4699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" host="localhost"
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.554 [INFO][4699] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.563 [INFO][4699] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.565 [INFO][4699] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.568 [INFO][4699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.568 [INFO][4699] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" host="localhost"
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.569 [INFO][4699] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.574 [INFO][4699] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" host="localhost"
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.582 [INFO][4699] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.135/26] block=192.168.88.128/26 handle="k8s-pod-network.50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" host="localhost"
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.582 [INFO][4699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.135/26] handle="k8s-pod-network.50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" host="localhost"
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.582 [INFO][4699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:11:55.613333 containerd[1591]: 2025-07-07 06:11:55.582 [INFO][4699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.135/26] IPv6=[] ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" HandleID="k8s-pod-network.50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Workload="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0"
Jul 7 06:11:55.613950 containerd[1591]: 2025-07-07 06:11:55.586 [INFO][4668] cni-plugin/k8s.go 418: Populated endpoint ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-cwc8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0", GenerateName:"calico-apiserver-c85fc6b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8df9f4b-117d-4ed3-9a45-083bbe24b183", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c85fc6b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-c85fc6b4c-cwc8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9f7f9851a5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:55.613950 containerd[1591]: 2025-07-07 06:11:55.586 [INFO][4668] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.135/32] ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-cwc8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0"
Jul 7 06:11:55.613950 containerd[1591]: 2025-07-07 06:11:55.586 [INFO][4668] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9f7f9851a5 ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-cwc8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0"
Jul 7 06:11:55.613950 containerd[1591]: 2025-07-07 06:11:55.596 [INFO][4668] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-cwc8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0"
Jul 7 06:11:55.613950 containerd[1591]: 2025-07-07 06:11:55.596 [INFO][4668] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-cwc8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0", GenerateName:"calico-apiserver-c85fc6b4c-", Namespace:"calico-apiserver", SelfLink:"", UID:"f8df9f4b-117d-4ed3-9a45-083bbe24b183", ResourceVersion:"871", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 25, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"c85fc6b4c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62", Pod:"calico-apiserver-c85fc6b4c-cwc8p", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calib9f7f9851a5", MAC:"4a:6a:60:e9:3c:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:55.613950 containerd[1591]: 2025-07-07 06:11:55.605 [INFO][4668] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" Namespace="calico-apiserver" Pod="calico-apiserver-c85fc6b4c-cwc8p" WorkloadEndpoint="localhost-k8s-calico--apiserver--c85fc6b4c--cwc8p-eth0"
Jul 7 06:11:55.716851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1471045361.mount: Deactivated successfully.
Jul 7 06:11:55.729606 containerd[1591]: time="2025-07-07T06:11:55.729529336Z" level=info msg="connecting to shim 50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62" address="unix:///run/containerd/s/fc110d9cf54df66c3bf568f1faed7e6eb2bf66dab9a1f94f82a721ee43d015b6" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:11:55.782518 systemd-networkd[1508]: calib9f7f9851a5: Link UP
Jul 7 06:11:55.783903 systemd-networkd[1508]: calife143f75fd9: Gained carrier
Jul 7 06:11:55.785351 systemd[1]: Started cri-containerd-50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62.scope - libcontainer container 50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62.
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.490 [INFO][4679] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0 calico-kube-controllers-5676987769- calico-system 88ceba10-8017-4d8e-b438-c29c09deb831 870 0 2025-07-07 06:11:28 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:5676987769 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-5676987769-nmdmf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calife143f75fd9 [] [] }} ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Namespace="calico-system" Pod="calico-kube-controllers-5676987769-nmdmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.491 [INFO][4679] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Namespace="calico-system" Pod="calico-kube-controllers-5676987769-nmdmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.536 [INFO][4705] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" HandleID="k8s-pod-network.dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Workload="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.536 [INFO][4705] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" HandleID="k8s-pod-network.dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Workload="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000139650), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-5676987769-nmdmf", "timestamp":"2025-07-07 06:11:55.536743352 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.537 [INFO][4705] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.582 [INFO][4705] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.583 [INFO][4705] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost'
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.665 [INFO][4705] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" host="localhost"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.675 [INFO][4705] ipam/ipam.go 394: Looking up existing affinities for host host="localhost"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.680 [INFO][4705] ipam/ipam.go 511: Trying affinity for 192.168.88.128/26 host="localhost"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.682 [INFO][4705] ipam/ipam.go 158: Attempting to load block cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.685 [INFO][4705] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.685 [INFO][4705] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" host="localhost"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.688 [INFO][4705] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.698 [INFO][4705] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" host="localhost"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.727 [INFO][4705] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.88.136/26] block=192.168.88.128/26 handle="k8s-pod-network.dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" host="localhost"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.727 [INFO][4705] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.88.136/26] handle="k8s-pod-network.dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" host="localhost"
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.727 [INFO][4705] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Jul 7 06:11:55.809808 containerd[1591]: 2025-07-07 06:11:55.727 [INFO][4705] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.136/26] IPv6=[] ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" HandleID="k8s-pod-network.dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Workload="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0"
Jul 7 06:11:55.810386 containerd[1591]: 2025-07-07 06:11:55.746 [INFO][4679] cni-plugin/k8s.go 418: Populated endpoint ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Namespace="calico-system" Pod="calico-kube-controllers-5676987769-nmdmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0", GenerateName:"calico-kube-controllers-5676987769-", Namespace:"calico-system", SelfLink:"", UID:"88ceba10-8017-4d8e-b438-c29c09deb831", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5676987769", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-5676987769-nmdmf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife143f75fd9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:55.810386 containerd[1591]: 2025-07-07 06:11:55.746 [INFO][4679] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.88.136/32] ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Namespace="calico-system" Pod="calico-kube-controllers-5676987769-nmdmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0"
Jul 7 06:11:55.810386 containerd[1591]: 2025-07-07 06:11:55.746 [INFO][4679] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calife143f75fd9 ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Namespace="calico-system" Pod="calico-kube-controllers-5676987769-nmdmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0"
Jul 7 06:11:55.810386 containerd[1591]: 2025-07-07 06:11:55.783 [INFO][4679] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Namespace="calico-system" Pod="calico-kube-controllers-5676987769-nmdmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0"
Jul 7 06:11:55.810386 containerd[1591]: 2025-07-07 06:11:55.788 [INFO][4679] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Namespace="calico-system" Pod="calico-kube-controllers-5676987769-nmdmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0", GenerateName:"calico-kube-controllers-5676987769-", Namespace:"calico-system", SelfLink:"", UID:"88ceba10-8017-4d8e-b438-c29c09deb831", ResourceVersion:"870", Generation:0, CreationTimestamp:time.Date(2025, time.July, 7, 6, 11, 28, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"5676987769", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4", Pod:"calico-kube-controllers-5676987769-nmdmf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calife143f75fd9", MAC:"66:3b:fe:39:f6:87", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jul 7 06:11:55.810386 containerd[1591]: 2025-07-07 06:11:55.800 [INFO][4679] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" Namespace="calico-system" Pod="calico-kube-controllers-5676987769-nmdmf" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--5676987769--nmdmf-eth0"
Jul 7 06:11:55.823921 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:11:55.849346 containerd[1591]: time="2025-07-07T06:11:55.849277302Z" level=info msg="connecting to shim dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4" address="unix:///run/containerd/s/c66b3cf6a4ad1edef360d58b695f0e35151da60d72b644cf0aa168403643fde6" namespace=k8s.io protocol=ttrpc version=3
Jul 7 06:11:55.899819 systemd[1]: Started cri-containerd-dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4.scope - libcontainer container dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4.
Jul 7 06:11:55.915012 containerd[1591]: time="2025-07-07T06:11:55.914955143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-c85fc6b4c-cwc8p,Uid:f8df9f4b-117d-4ed3-9a45-083bbe24b183,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62\""
Jul 7 06:11:55.925158 systemd-resolved[1437]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address
Jul 7 06:11:55.976591 containerd[1591]: time="2025-07-07T06:11:55.976523337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-5676987769-nmdmf,Uid:88ceba10-8017-4d8e-b438-c29c09deb831,Namespace:calico-system,Attempt:0,} returns sandbox id \"dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4\""
Jul 7 06:11:56.508263 containerd[1591]: time="2025-07-07T06:11:56.508211762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:56.517214 containerd[1591]: time="2025-07-07T06:11:56.517189144Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.2: active requests=0, bytes read=66352308"
Jul 7 06:11:56.518409 containerd[1591]: time="2025-07-07T06:11:56.518385904Z" level=info msg="ImageCreate event name:\"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:56.520527 containerd[1591]: time="2025-07-07T06:11:56.520484251Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:56.521276 containerd[1591]: time="2025-07-07T06:11:56.521249359Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" with image id \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\", repo tag \"ghcr.io/flatcar/calico/goldmane:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/goldmane@sha256:a2b761fd93d824431ad93e59e8e670cdf00b478f4b532145297e1e67f2768305\", size \"66352154\" in 4.580203828s"
Jul 7 06:11:56.521347 containerd[1591]: time="2025-07-07T06:11:56.521279156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.2\" returns image reference \"sha256:dc4ea8b409b85d2f118bb4677ad3d34b57e7b01d488c9f019f7073bb58b2162b\""
Jul 7 06:11:56.524132 containerd[1591]: time="2025-07-07T06:11:56.524066105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\""
Jul 7 06:11:56.526542 containerd[1591]: time="2025-07-07T06:11:56.526473101Z" level=info msg="CreateContainer within sandbox \"3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57\" for container &ContainerMetadata{Name:goldmane,Attempt:0,}"
Jul 7 06:11:56.530788 kubelet[2730]: E0707 06:11:56.530757 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:11:56.537953 containerd[1591]: time="2025-07-07T06:11:56.537888227Z" level=info msg="Container f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:11:56.552951 containerd[1591]: time="2025-07-07T06:11:56.552885128Z" level=info msg="CreateContainer within sandbox \"3c45ff7f4db51ac91643293c1a0ff411dd9abb2c314935191b33215e0001ce57\" for &ContainerMetadata{Name:goldmane,Attempt:0,} returns container id \"f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe\""
Jul 7 06:11:56.559772 containerd[1591]: time="2025-07-07T06:11:56.559714257Z" level=info msg="StartContainer for \"f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe\""
Jul 7 06:11:56.560992 containerd[1591]: time="2025-07-07T06:11:56.560956253Z" level=info msg="connecting to shim f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe" address="unix:///run/containerd/s/32d1e3f184b16f1c829031967347cf21010a9e06821d17ebd218a1c3d9c66285" protocol=ttrpc version=3
Jul 7 06:11:56.594318 systemd[1]: Started cri-containerd-f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe.scope - libcontainer container f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe.
Jul 7 06:11:56.856447 containerd[1591]: time="2025-07-07T06:11:56.856405717Z" level=info msg="StartContainer for \"f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe\" returns successfully"
Jul 7 06:11:57.064274 systemd-networkd[1508]: calife143f75fd9: Gained IPv6LL
Jul 7 06:11:57.191789 systemd-networkd[1508]: calib9f7f9851a5: Gained IPv6LL
Jul 7 06:11:57.624247 containerd[1591]: time="2025-07-07T06:11:57.624178065Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe\" id:\"89981e83afb885675f98fcf5d888678eb59d40f1484dcd3de765abdb816aeadd\" pid:4877 exit_status:1 exited_at:{seconds:1751868717 nanos:623670549}"
Jul 7 06:11:58.299702 containerd[1591]: time="2025-07-07T06:11:58.299639254Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:58.300509 containerd[1591]: time="2025-07-07T06:11:58.300469243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.2: active requests=0, bytes read=4661207"
Jul 7 06:11:58.301657 containerd[1591]: time="2025-07-07T06:11:58.301612967Z" level=info msg="ImageCreate event name:\"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:58.304019 containerd[1591]: time="2025-07-07T06:11:58.303986941Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:58.304603 containerd[1591]: time="2025-07-07T06:11:58.304552666Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker:v3.30.2\" with image id \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\", repo tag \"ghcr.io/flatcar/calico/whisker:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker@sha256:31346d4524252a3b0d2a1d289c4985b8402b498b5ce82a12e682096ab7446678\", size \"6153902\" in 1.780455782s"
Jul 7 06:11:58.304650 containerd[1591]: time="2025-07-07T06:11:58.304603604Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.2\" returns image reference \"sha256:eb8f512acf9402730da120a7b0d47d3d9d451b56e6e5eb8bad53ab24f926f954\""
Jul 7 06:11:58.305638 containerd[1591]: time="2025-07-07T06:11:58.305605519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\""
Jul 7 06:11:58.306588 containerd[1591]: time="2025-07-07T06:11:58.306532491Z" level=info msg="CreateContainer within sandbox \"f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc\" for container &ContainerMetadata{Name:whisker,Attempt:0,}"
Jul 7 06:11:58.315332 containerd[1591]: time="2025-07-07T06:11:58.315255899Z" level=info msg="Container 0f256dfe9945089232d82b9f877ae0aee5e242ac632eaca0fb990c4195560782: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:11:58.323760 containerd[1591]: time="2025-07-07T06:11:58.323713922Z" level=info msg="CreateContainer within sandbox \"f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc\" for &ContainerMetadata{Name:whisker,Attempt:0,} returns container id \"0f256dfe9945089232d82b9f877ae0aee5e242ac632eaca0fb990c4195560782\""
Jul 7 06:11:58.324650 containerd[1591]: time="2025-07-07T06:11:58.324600998Z" level=info msg="StartContainer for \"0f256dfe9945089232d82b9f877ae0aee5e242ac632eaca0fb990c4195560782\""
Jul 7 06:11:58.326181 containerd[1591]: time="2025-07-07T06:11:58.326147078Z" level=info msg="connecting to shim 0f256dfe9945089232d82b9f877ae0aee5e242ac632eaca0fb990c4195560782" address="unix:///run/containerd/s/a8b54d0d093bb24bf51544b09a3e1789b566ca5d68cf28d72bb8e9a813d604b8" protocol=ttrpc version=3
Jul 7 06:11:58.354616 systemd[1]: Started cri-containerd-0f256dfe9945089232d82b9f877ae0aee5e242ac632eaca0fb990c4195560782.scope - libcontainer container 0f256dfe9945089232d82b9f877ae0aee5e242ac632eaca0fb990c4195560782.
Jul 7 06:11:58.419835 containerd[1591]: time="2025-07-07T06:11:58.419788876Z" level=info msg="StartContainer for \"0f256dfe9945089232d82b9f877ae0aee5e242ac632eaca0fb990c4195560782\" returns successfully"
Jul 7 06:11:58.634865 containerd[1591]: time="2025-07-07T06:11:58.634693983Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe\" id:\"73fd1fc93898641486bfe572f63e8ab10f7a8c09e1e5cfaed8297d59126828c5\" pid:4940 exit_status:1 exited_at:{seconds:1751868718 nanos:634358906}"
Jul 7 06:11:59.733301 containerd[1591]: time="2025-07-07T06:11:59.733219619Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190"
Jul 7 06:11:59.733870 containerd[1591]: time="2025-07-07T06:11:59.733745809Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:59.734502 containerd[1591]: time="2025-07-07T06:11:59.734468401Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:59.736566 containerd[1591]: time="2025-07-07T06:11:59.736522253Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:11:59.737211 containerd[1591]: time="2025-07-07T06:11:59.737173932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.431537123s"
Jul 7 06:11:59.737268 containerd[1591]: time="2025-07-07T06:11:59.737210221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\""
Jul 7 06:11:59.738693 containerd[1591]: time="2025-07-07T06:11:59.738350266Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 06:11:59.740009 containerd[1591]: time="2025-07-07T06:11:59.739967939Z" level=info msg="CreateContainer within sandbox \"6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}"
Jul 7 06:11:59.757345 containerd[1591]: time="2025-07-07T06:11:59.757292062Z" level=info msg="Container a0f83b1772d504acb2b7719c7e7f0f499e363637dc0be7489ad2e25ce4e42aae: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:11:59.769939 containerd[1591]: time="2025-07-07T06:11:59.769885037Z" level=info msg="CreateContainer within sandbox \"6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"a0f83b1772d504acb2b7719c7e7f0f499e363637dc0be7489ad2e25ce4e42aae\""
Jul 7 06:11:59.770806 containerd[1591]: time="2025-07-07T06:11:59.770728760Z" level=info msg="StartContainer for \"a0f83b1772d504acb2b7719c7e7f0f499e363637dc0be7489ad2e25ce4e42aae\""
Jul 7 06:11:59.772789 containerd[1591]: time="2025-07-07T06:11:59.772764959Z" level=info msg="connecting to shim a0f83b1772d504acb2b7719c7e7f0f499e363637dc0be7489ad2e25ce4e42aae" address="unix:///run/containerd/s/5dd090ca248103db187e337726371e71255ffa000c7718d469248ac35a09df16" protocol=ttrpc version=3
Jul 7 06:11:59.797388 systemd[1]: Started cri-containerd-a0f83b1772d504acb2b7719c7e7f0f499e363637dc0be7489ad2e25ce4e42aae.scope - libcontainer container a0f83b1772d504acb2b7719c7e7f0f499e363637dc0be7489ad2e25ce4e42aae.
Jul 7 06:11:59.956885 containerd[1591]: time="2025-07-07T06:11:59.956835013Z" level=info msg="StartContainer for \"a0f83b1772d504acb2b7719c7e7f0f499e363637dc0be7489ad2e25ce4e42aae\" returns successfully"
Jul 7 06:12:00.214343 systemd[1]: Started sshd@10-10.0.0.94:22-10.0.0.1:43162.service - OpenSSH per-connection server daemon (10.0.0.1:43162).
Jul 7 06:12:00.285884 sshd[4994]: Accepted publickey for core from 10.0.0.1 port 43162 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:00.294746 sshd-session[4994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:00.352226 systemd-logind[1529]: New session 11 of user core.
Jul 7 06:12:00.364245 systemd[1]: Started session-11.scope - Session 11 of User core.
Jul 7 06:12:00.510923 sshd[4996]: Connection closed by 10.0.0.1 port 43162
Jul 7 06:12:00.511349 sshd-session[4994]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:00.516220 systemd[1]: sshd@10-10.0.0.94:22-10.0.0.1:43162.service: Deactivated successfully.
Jul 7 06:12:00.519094 systemd[1]: session-11.scope: Deactivated successfully.
Jul 7 06:12:00.523084 systemd-logind[1529]: Session 11 logged out. Waiting for processes to exit.
Jul 7 06:12:00.524302 systemd-logind[1529]: Removed session 11.
Jul 7 06:12:02.489180 containerd[1591]: time="2025-07-07T06:12:02.489114429Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:02.490001 containerd[1591]: time="2025-07-07T06:12:02.489967958Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=47317977"
Jul 7 06:12:02.491622 containerd[1591]: time="2025-07-07T06:12:02.491579001Z" level=info msg="ImageCreate event name:\"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:02.494215 containerd[1591]: time="2025-07-07T06:12:02.494158871Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:02.494605 containerd[1591]: time="2025-07-07T06:12:02.494567486Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 2.756175952s"
Jul 7 06:12:02.494605 containerd[1591]: time="2025-07-07T06:12:02.494596511Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 06:12:02.497902 containerd[1591]: time="2025-07-07T06:12:02.497873973Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\""
Jul 7 06:12:02.500508 containerd[1591]: time="2025-07-07T06:12:02.500478691Z" level=info msg="CreateContainer within sandbox \"372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:12:02.510342 containerd[1591]: time="2025-07-07T06:12:02.509644805Z" level=info msg="Container 41ab6465483bd170c8173c6660e67d612d8632166f24b7c60d1b753c29f51444: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:12:02.668784 containerd[1591]: time="2025-07-07T06:12:02.668701612Z" level=info msg="CreateContainer within sandbox \"372ac081e5f1d02f9e6e74b2b76a5fc2effc02ad8466bd60116aec53ee64fbf8\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"41ab6465483bd170c8173c6660e67d612d8632166f24b7c60d1b753c29f51444\""
Jul 7 06:12:02.669501 containerd[1591]: time="2025-07-07T06:12:02.669426467Z" level=info msg="StartContainer for \"41ab6465483bd170c8173c6660e67d612d8632166f24b7c60d1b753c29f51444\""
Jul 7 06:12:02.670828 containerd[1591]: time="2025-07-07T06:12:02.670762970Z" level=info msg="connecting to shim 41ab6465483bd170c8173c6660e67d612d8632166f24b7c60d1b753c29f51444" address="unix:///run/containerd/s/ebf391fdcaab93f7618bcb02935cdb288536c0fa8428f1ab4b9393c4aff5c876" protocol=ttrpc version=3
Jul 7 06:12:02.708417 systemd[1]: Started cri-containerd-41ab6465483bd170c8173c6660e67d612d8632166f24b7c60d1b753c29f51444.scope - libcontainer container 41ab6465483bd170c8173c6660e67d612d8632166f24b7c60d1b753c29f51444.
Jul 7 06:12:02.768371 containerd[1591]: time="2025-07-07T06:12:02.768331715Z" level=info msg="StartContainer for \"41ab6465483bd170c8173c6660e67d612d8632166f24b7c60d1b753c29f51444\" returns successfully"
Jul 7 06:12:03.126215 containerd[1591]: time="2025-07-07T06:12:03.125634568Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:03.127585 containerd[1591]: time="2025-07-07T06:12:03.127513757Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.2: active requests=0, bytes read=77"
Jul 7 06:12:03.129294 containerd[1591]: time="2025-07-07T06:12:03.129228005Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" with image id \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:ec6b10660962e7caad70c47755049fad68f9fc2f7064e8bc7cb862583e02cc2b\", size \"48810696\" in 631.322632ms"
Jul 7 06:12:03.129294 containerd[1591]: time="2025-07-07T06:12:03.129263681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.2\" returns image reference \"sha256:5509118eed617ef04ca00f5a095bfd0a4cd1cf69edcfcf9bedf0edb641be51dd\""
Jul 7 06:12:03.131842 containerd[1591]: time="2025-07-07T06:12:03.131599156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Jul 7 06:12:03.132443 containerd[1591]: time="2025-07-07T06:12:03.132350289Z" level=info msg="CreateContainer within sandbox \"50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}"
Jul 7 06:12:03.148039 containerd[1591]: time="2025-07-07T06:12:03.146458323Z" level=info msg="Container 05f400404eee68d3e4d37ea03128d093983bdcc5b86200ff1148aa7daf41eb11: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:12:03.164818 containerd[1591]: time="2025-07-07T06:12:03.164750033Z" level=info msg="CreateContainer within sandbox \"50ff28dba6fdac001831822b8073c79e8fdf0adde41ae3d24a2c133bffda3b62\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"05f400404eee68d3e4d37ea03128d093983bdcc5b86200ff1148aa7daf41eb11\""
Jul 7 06:12:03.165932 containerd[1591]: time="2025-07-07T06:12:03.165873951Z" level=info msg="StartContainer for \"05f400404eee68d3e4d37ea03128d093983bdcc5b86200ff1148aa7daf41eb11\""
Jul 7 06:12:03.171707 containerd[1591]: time="2025-07-07T06:12:03.171641027Z" level=info msg="connecting to shim 05f400404eee68d3e4d37ea03128d093983bdcc5b86200ff1148aa7daf41eb11" address="unix:///run/containerd/s/fc110d9cf54df66c3bf568f1faed7e6eb2bf66dab9a1f94f82a721ee43d015b6" protocol=ttrpc version=3
Jul 7 06:12:03.209959 systemd[1]: Started cri-containerd-05f400404eee68d3e4d37ea03128d093983bdcc5b86200ff1148aa7daf41eb11.scope - libcontainer container 05f400404eee68d3e4d37ea03128d093983bdcc5b86200ff1148aa7daf41eb11.
Jul 7 06:12:03.272458 containerd[1591]: time="2025-07-07T06:12:03.272392427Z" level=info msg="StartContainer for \"05f400404eee68d3e4d37ea03128d093983bdcc5b86200ff1148aa7daf41eb11\" returns successfully"
Jul 7 06:12:03.706895 kubelet[2730]: I0707 06:12:03.706726 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/goldmane-768f4c5c69-tcv2d" podStartSLOduration=32.123574787 podStartE2EDuration="36.706704883s" podCreationTimestamp="2025-07-07 06:11:27 +0000 UTC" firstStartedPulling="2025-07-07 06:11:51.940745085 +0000 UTC m=+41.677941312" lastFinishedPulling="2025-07-07 06:11:56.523875171 +0000 UTC m=+46.261071408" observedRunningTime="2025-07-07 06:11:57.549821498 +0000 UTC m=+47.287017756" watchObservedRunningTime="2025-07-07 06:12:03.706704883 +0000 UTC m=+53.443901110"
Jul 7 06:12:03.708519 kubelet[2730]: I0707 06:12:03.706922 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c85fc6b4c-lmt2g" podStartSLOduration=30.232002243 podStartE2EDuration="38.706915382s" podCreationTimestamp="2025-07-07 06:11:25 +0000 UTC" firstStartedPulling="2025-07-07 06:11:54.02282669 +0000 UTC m=+43.760022927" lastFinishedPulling="2025-07-07 06:12:02.497739829 +0000 UTC m=+52.234936066" observedRunningTime="2025-07-07 06:12:03.706697999 +0000 UTC m=+53.443894256" watchObservedRunningTime="2025-07-07 06:12:03.706915382 +0000 UTC m=+53.444111619"
Jul 7 06:12:03.724667 kubelet[2730]: I0707 06:12:03.723137 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-c85fc6b4c-cwc8p" podStartSLOduration=31.510328624 podStartE2EDuration="38.723116892s" podCreationTimestamp="2025-07-07 06:11:25 +0000 UTC" firstStartedPulling="2025-07-07 06:11:55.917420945 +0000 UTC m=+45.654617182" lastFinishedPulling="2025-07-07 06:12:03.130209213 +0000 UTC m=+52.867405450" observedRunningTime="2025-07-07 06:12:03.722993177 +0000 UTC m=+53.460189434" watchObservedRunningTime="2025-07-07 06:12:03.723116892 +0000 UTC m=+53.460313129"
Jul 7 06:12:05.526275 systemd[1]: Started sshd@11-10.0.0.94:22-10.0.0.1:43174.service - OpenSSH per-connection server daemon (10.0.0.1:43174).
Jul 7 06:12:05.627490 sshd[5109]: Accepted publickey for core from 10.0.0.1 port 43174 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:05.631669 sshd-session[5109]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:05.638015 systemd-logind[1529]: New session 12 of user core.
Jul 7 06:12:05.645275 systemd[1]: Started session-12.scope - Session 12 of User core.
Jul 7 06:12:05.841008 sshd[5111]: Connection closed by 10.0.0.1 port 43174
Jul 7 06:12:05.842910 sshd-session[5109]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:05.847688 systemd[1]: sshd@11-10.0.0.94:22-10.0.0.1:43174.service: Deactivated successfully.
Jul 7 06:12:05.851304 systemd[1]: session-12.scope: Deactivated successfully.
Jul 7 06:12:05.855811 systemd-logind[1529]: Session 12 logged out. Waiting for processes to exit.
Jul 7 06:12:05.858418 systemd-logind[1529]: Removed session 12.
Jul 7 06:12:06.629669 containerd[1591]: time="2025-07-07T06:12:06.629587196Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:06.641401 containerd[1591]: time="2025-07-07T06:12:06.641318283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688"
Jul 7 06:12:06.700478 containerd[1591]: time="2025-07-07T06:12:06.700366428Z" level=info msg="ImageCreate event name:\"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:06.729037 containerd[1591]: time="2025-07-07T06:12:06.728958039Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:06.729788 containerd[1591]: time="2025-07-07T06:12:06.729731768Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" with image id \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:5d3ecdec3cbbe8f7009077102e35e8a2141161b59c548cf3f97829177677cbce\", size \"52769359\" in 3.59809391s"
Jul 7 06:12:06.729788 containerd[1591]: time="2025-07-07T06:12:06.729783947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" returns image reference \"sha256:761b294e26556b58aabc85094a3d465389e6b141b7400aee732bd13400a6124a\""
Jul 7 06:12:06.731282 containerd[1591]: time="2025-07-07T06:12:06.731035924Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\""
Jul 7 06:12:06.745208 containerd[1591]: time="2025-07-07T06:12:06.745158932Z" level=info msg="CreateContainer within sandbox \"dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}"
Jul 7 06:12:06.880784 containerd[1591]: time="2025-07-07T06:12:06.880638136Z" level=info msg="Container 055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:12:06.893471 containerd[1591]: time="2025-07-07T06:12:06.893410260Z" level=info msg="CreateContainer within sandbox \"dc1c62a20d84a339bb836183cd216c7313314354a9029d5a908b78ab90b3b0e4\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8\""
Jul 7 06:12:06.893984 containerd[1591]: time="2025-07-07T06:12:06.893953501Z" level=info msg="StartContainer for \"055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8\""
Jul 7 06:12:06.895297 containerd[1591]: time="2025-07-07T06:12:06.895268688Z" level=info msg="connecting to shim 055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8" address="unix:///run/containerd/s/c66b3cf6a4ad1edef360d58b695f0e35151da60d72b644cf0aa168403643fde6" protocol=ttrpc version=3
Jul 7 06:12:06.919461 systemd[1]: Started cri-containerd-055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8.scope - libcontainer container 055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8.
Jul 7 06:12:06.980051 containerd[1591]: time="2025-07-07T06:12:06.980008359Z" level=info msg="StartContainer for \"055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8\" returns successfully"
Jul 7 06:12:07.659345 kubelet[2730]: I0707 06:12:07.659224 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-5676987769-nmdmf" podStartSLOduration=28.906823942 podStartE2EDuration="39.659155071s" podCreationTimestamp="2025-07-07 06:11:28 +0000 UTC" firstStartedPulling="2025-07-07 06:11:55.978483705 +0000 UTC m=+45.715679942" lastFinishedPulling="2025-07-07 06:12:06.730814824 +0000 UTC m=+56.468011071" observedRunningTime="2025-07-07 06:12:07.65625792 +0000 UTC m=+57.393454167" watchObservedRunningTime="2025-07-07 06:12:07.659155071 +0000 UTC m=+57.396351318"
Jul 7 06:12:07.720249 containerd[1591]: time="2025-07-07T06:12:07.720187686Z" level=info msg="TaskExit event in podsandbox handler container_id:\"055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8\" id:\"3fe94934f281d5f765f8c4d985ddcf45dc0b927fcb95a9ffd9589acd0ff6aa01\" pid:5187 exited_at:{seconds:1751868727 nanos:719897723}"
Jul 7 06:12:08.770020 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount826484634.mount: Deactivated successfully.
Jul 7 06:12:09.695447 containerd[1591]: time="2025-07-07T06:12:09.695211240Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:09.724542 containerd[1591]: time="2025-07-07T06:12:09.724441227Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.2: active requests=0, bytes read=33083477"
Jul 7 06:12:09.729210 containerd[1591]: time="2025-07-07T06:12:09.729068390Z" level=info msg="ImageCreate event name:\"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:09.733452 containerd[1591]: time="2025-07-07T06:12:09.733316608Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jul 7 06:12:09.735041 containerd[1591]: time="2025-07-07T06:12:09.734782237Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" with image id \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\", repo tag \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/whisker-backend@sha256:fbf7f21f5aba95930803ad7e7dea8b083220854eae72c2a7c51681c09c5614b5\", size \"33083307\" in 3.003707039s"
Jul 7 06:12:09.735041 containerd[1591]: time="2025-07-07T06:12:09.734837504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.2\" returns image reference \"sha256:6ba7e39edcd8be6d32dfccbfdb65533a727b14a19173515e91607d4259f8ee7f\""
Jul 7 06:12:09.737311 containerd[1591]: time="2025-07-07T06:12:09.737217194Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\""
Jul 7 06:12:09.740150 containerd[1591]: time="2025-07-07T06:12:09.739523781Z" level=info msg="CreateContainer within sandbox \"f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc\" for container &ContainerMetadata{Name:whisker-backend,Attempt:0,}"
Jul 7 06:12:09.758268 containerd[1591]: time="2025-07-07T06:12:09.757092001Z" level=info msg="Container 754ff33524c79ea1d54b5a11206350995341937e92a914736cdd8642012ec1be: CDI devices from CRI Config.CDIDevices: []"
Jul 7 06:12:09.771415 containerd[1591]: time="2025-07-07T06:12:09.771329878Z" level=info msg="CreateContainer within sandbox \"f467d994047674035755d475ccf4da25db165ac56ba77184cd0a140c006316dc\" for &ContainerMetadata{Name:whisker-backend,Attempt:0,} returns container id \"754ff33524c79ea1d54b5a11206350995341937e92a914736cdd8642012ec1be\""
Jul 7 06:12:09.772010 containerd[1591]: time="2025-07-07T06:12:09.771964838Z" level=info msg="StartContainer for \"754ff33524c79ea1d54b5a11206350995341937e92a914736cdd8642012ec1be\""
Jul 7 06:12:09.773425 containerd[1591]: time="2025-07-07T06:12:09.773389648Z" level=info msg="connecting to shim 754ff33524c79ea1d54b5a11206350995341937e92a914736cdd8642012ec1be" address="unix:///run/containerd/s/a8b54d0d093bb24bf51544b09a3e1789b566ca5d68cf28d72bb8e9a813d604b8" protocol=ttrpc version=3
Jul 7 06:12:09.806396 systemd[1]: Started cri-containerd-754ff33524c79ea1d54b5a11206350995341937e92a914736cdd8642012ec1be.scope - libcontainer container 754ff33524c79ea1d54b5a11206350995341937e92a914736cdd8642012ec1be.
Jul 7 06:12:09.869720 containerd[1591]: time="2025-07-07T06:12:09.869474810Z" level=info msg="StartContainer for \"754ff33524c79ea1d54b5a11206350995341937e92a914736cdd8642012ec1be\" returns successfully"
Jul 7 06:12:10.662873 kubelet[2730]: I0707 06:12:10.662780 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/whisker-7b9b87c9b-vf8dt" podStartSLOduration=2.770159455 podStartE2EDuration="19.662758259s" podCreationTimestamp="2025-07-07 06:11:51 +0000 UTC" firstStartedPulling="2025-07-07 06:11:52.844089706 +0000 UTC m=+42.581285943" lastFinishedPulling="2025-07-07 06:12:09.7366885 +0000 UTC m=+59.473884747" observedRunningTime="2025-07-07 06:12:10.662076751 +0000 UTC m=+60.399273008" watchObservedRunningTime="2025-07-07 06:12:10.662758259 +0000 UTC m=+60.399954516"
Jul 7 06:12:10.854984 systemd[1]: Started sshd@12-10.0.0.94:22-10.0.0.1:58580.service - OpenSSH per-connection server daemon (10.0.0.1:58580).
Jul 7 06:12:10.931972 sshd[5245]: Accepted publickey for core from 10.0.0.1 port 58580 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:10.933949 sshd-session[5245]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:10.939320 systemd-logind[1529]: New session 13 of user core.
Jul 7 06:12:10.948390 systemd[1]: Started session-13.scope - Session 13 of User core.
Jul 7 06:12:11.176772 sshd[5247]: Connection closed by 10.0.0.1 port 58580
Jul 7 06:12:11.177137 sshd-session[5245]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:11.181803 systemd[1]: sshd@12-10.0.0.94:22-10.0.0.1:58580.service: Deactivated successfully.
Jul 7 06:12:11.184422 systemd[1]: session-13.scope: Deactivated successfully.
Jul 7 06:12:11.185377 systemd-logind[1529]: Session 13 logged out. Waiting for processes to exit.
Jul 7 06:12:11.187233 systemd-logind[1529]: Removed session 13.
Jul 7 06:12:11.991728 containerd[1591]: time="2025-07-07T06:12:11.991655796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:11.992799 containerd[1591]: time="2025-07-07T06:12:11.992759872Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Jul 7 06:12:11.994880 containerd[1591]: time="2025-07-07T06:12:11.994806679Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:11.997374 containerd[1591]: time="2025-07-07T06:12:11.997334677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 7 06:12:11.997970 containerd[1591]: time="2025-07-07T06:12:11.997921832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 2.260648531s" Jul 7 06:12:11.998036 containerd[1591]: time="2025-07-07T06:12:11.997974695Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Jul 7 06:12:12.000305 containerd[1591]: time="2025-07-07T06:12:12.000273199Z" level=info msg="CreateContainer within sandbox \"6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jul 7 06:12:12.014740 containerd[1591]: time="2025-07-07T06:12:12.013241005Z" level=info msg="Container b9f12c2d7513c7ef1b37cc1c74aeb9586b86afd2c45a14239e842aaf20dbd9b9: CDI devices from CRI Config.CDIDevices: []" Jul 7 06:12:12.028476 containerd[1591]: time="2025-07-07T06:12:12.028418608Z" level=info msg="CreateContainer within sandbox \"6818ccae37567ad92475bc527d150d86d68e183bfdba17b754decab26074bb08\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"b9f12c2d7513c7ef1b37cc1c74aeb9586b86afd2c45a14239e842aaf20dbd9b9\"" Jul 7 06:12:12.029149 containerd[1591]: time="2025-07-07T06:12:12.029044347Z" level=info msg="StartContainer for \"b9f12c2d7513c7ef1b37cc1c74aeb9586b86afd2c45a14239e842aaf20dbd9b9\"" Jul 7 06:12:12.030599 containerd[1591]: time="2025-07-07T06:12:12.030552722Z" level=info msg="connecting to shim b9f12c2d7513c7ef1b37cc1c74aeb9586b86afd2c45a14239e842aaf20dbd9b9" address="unix:///run/containerd/s/5dd090ca248103db187e337726371e71255ffa000c7718d469248ac35a09df16" protocol=ttrpc version=3 Jul 7 06:12:12.060304 systemd[1]: Started cri-containerd-b9f12c2d7513c7ef1b37cc1c74aeb9586b86afd2c45a14239e842aaf20dbd9b9.scope - libcontainer container b9f12c2d7513c7ef1b37cc1c74aeb9586b86afd2c45a14239e842aaf20dbd9b9. 
Jul 7 06:12:12.112295 containerd[1591]: time="2025-07-07T06:12:12.112246435Z" level=info msg="StartContainer for \"b9f12c2d7513c7ef1b37cc1c74aeb9586b86afd2c45a14239e842aaf20dbd9b9\" returns successfully" Jul 7 06:12:12.442135 kubelet[2730]: I0707 06:12:12.442073 2730 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jul 7 06:12:12.442135 kubelet[2730]: I0707 06:12:12.442146 2730 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jul 7 06:12:12.675130 kubelet[2730]: I0707 06:12:12.673914 2730 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-mnpf6" podStartSLOduration=26.498926662 podStartE2EDuration="44.673894938s" podCreationTimestamp="2025-07-07 06:11:28 +0000 UTC" firstStartedPulling="2025-07-07 06:11:53.82386039 +0000 UTC m=+43.561056627" lastFinishedPulling="2025-07-07 06:12:11.998828666 +0000 UTC m=+61.736024903" observedRunningTime="2025-07-07 06:12:12.671666062 +0000 UTC m=+62.408862299" watchObservedRunningTime="2025-07-07 06:12:12.673894938 +0000 UTC m=+62.411091175" Jul 7 06:12:16.202992 systemd[1]: Started sshd@13-10.0.0.94:22-10.0.0.1:58592.service - OpenSSH per-connection server daemon (10.0.0.1:58592). Jul 7 06:12:16.276130 sshd[5305]: Accepted publickey for core from 10.0.0.1 port 58592 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:12:16.278027 sshd-session[5305]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:16.284249 systemd-logind[1529]: New session 14 of user core. Jul 7 06:12:16.294292 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 7 06:12:16.551074 sshd[5308]: Connection closed by 10.0.0.1 port 58592 Jul 7 06:12:16.551517 sshd-session[5305]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:16.565726 systemd[1]: sshd@13-10.0.0.94:22-10.0.0.1:58592.service: Deactivated successfully. Jul 7 06:12:16.568311 systemd[1]: session-14.scope: Deactivated successfully. Jul 7 06:12:16.569459 systemd-logind[1529]: Session 14 logged out. Waiting for processes to exit. Jul 7 06:12:16.572779 systemd[1]: Started sshd@14-10.0.0.94:22-10.0.0.1:58594.service - OpenSSH per-connection server daemon (10.0.0.1:58594). Jul 7 06:12:16.573627 systemd-logind[1529]: Removed session 14. Jul 7 06:12:16.630903 sshd[5323]: Accepted publickey for core from 10.0.0.1 port 58594 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:12:16.633202 sshd-session[5323]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:16.639350 systemd-logind[1529]: New session 15 of user core. Jul 7 06:12:16.648465 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 7 06:12:16.928701 sshd[5325]: Connection closed by 10.0.0.1 port 58594 Jul 7 06:12:16.929250 sshd-session[5323]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:16.940917 systemd[1]: sshd@14-10.0.0.94:22-10.0.0.1:58594.service: Deactivated successfully. Jul 7 06:12:16.943752 systemd[1]: session-15.scope: Deactivated successfully. Jul 7 06:12:16.944768 systemd-logind[1529]: Session 15 logged out. Waiting for processes to exit. Jul 7 06:12:16.948713 systemd[1]: Started sshd@15-10.0.0.94:22-10.0.0.1:58610.service - OpenSSH per-connection server daemon (10.0.0.1:58610). 
Jul 7 06:12:16.950371 systemd-logind[1529]: Removed session 15. Jul 7 06:12:17.009752 sshd[5338]: Accepted publickey for core from 10.0.0.1 port 58610 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:12:17.011791 sshd-session[5338]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:17.017060 systemd-logind[1529]: New session 16 of user core. Jul 7 06:12:17.024369 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 7 06:12:17.174137 sshd[5340]: Connection closed by 10.0.0.1 port 58610 Jul 7 06:12:17.174560 sshd-session[5338]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:17.180890 systemd[1]: sshd@15-10.0.0.94:22-10.0.0.1:58610.service: Deactivated successfully. Jul 7 06:12:17.184778 systemd[1]: session-16.scope: Deactivated successfully. Jul 7 06:12:17.186293 systemd-logind[1529]: Session 16 logged out. Waiting for processes to exit. Jul 7 06:12:17.188140 systemd-logind[1529]: Removed session 16. Jul 7 06:12:22.188285 systemd[1]: Started sshd@16-10.0.0.94:22-10.0.0.1:59082.service - OpenSSH per-connection server daemon (10.0.0.1:59082). Jul 7 06:12:22.246137 sshd[5357]: Accepted publickey for core from 10.0.0.1 port 59082 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:12:22.248008 sshd-session[5357]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:22.253559 systemd-logind[1529]: New session 17 of user core. Jul 7 06:12:22.264263 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 7 06:12:22.366078 kubelet[2730]: E0707 06:12:22.366015 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:12:22.395054 sshd[5359]: Connection closed by 10.0.0.1 port 59082 Jul 7 06:12:22.395411 sshd-session[5357]: pam_unix(sshd:session): session closed for user core Jul 7 06:12:22.401172 systemd[1]: sshd@16-10.0.0.94:22-10.0.0.1:59082.service: Deactivated successfully. Jul 7 06:12:22.403675 systemd[1]: session-17.scope: Deactivated successfully. Jul 7 06:12:22.404679 systemd-logind[1529]: Session 17 logged out. Waiting for processes to exit. Jul 7 06:12:22.406623 systemd-logind[1529]: Removed session 17. Jul 7 06:12:22.630139 containerd[1591]: time="2025-07-07T06:12:22.630053348Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506\" id:\"6eb42deab991768a748c6e00fe10565b76176c0fcb34321e3efeae8e99bd63ff\" pid:5384 exit_status:1 exited_at:{seconds:1751868742 nanos:629663299}" Jul 7 06:12:26.366260 kubelet[2730]: E0707 06:12:26.366208 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Jul 7 06:12:27.412809 systemd[1]: Started sshd@17-10.0.0.94:22-10.0.0.1:59084.service - OpenSSH per-connection server daemon (10.0.0.1:59084). Jul 7 06:12:27.569748 sshd[5399]: Accepted publickey for core from 10.0.0.1 port 59084 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0 Jul 7 06:12:27.571903 sshd-session[5399]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jul 7 06:12:27.578724 systemd-logind[1529]: New session 18 of user core. Jul 7 06:12:27.584290 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jul 7 06:12:27.711641 sshd[5401]: Connection closed by 10.0.0.1 port 59084
Jul 7 06:12:27.712037 sshd-session[5399]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:27.718135 systemd[1]: sshd@17-10.0.0.94:22-10.0.0.1:59084.service: Deactivated successfully.
Jul 7 06:12:27.720671 systemd[1]: session-18.scope: Deactivated successfully.
Jul 7 06:12:27.721712 systemd-logind[1529]: Session 18 logged out. Waiting for processes to exit.
Jul 7 06:12:27.723181 systemd-logind[1529]: Removed session 18.
Jul 7 06:12:28.655892 containerd[1591]: time="2025-07-07T06:12:28.655841642Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe\" id:\"8ea35c803e52bfa94cca463a5bdc524a38352d2dbf5b550b18aa09ed1fcd65f8\" pid:5426 exited_at:{seconds:1751868748 nanos:655378256}"
Jul 7 06:12:28.705307 containerd[1591]: time="2025-07-07T06:12:28.705220841Z" level=info msg="TaskExit event in podsandbox handler container_id:\"055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8\" id:\"433d0e78ab2c90da586a939c54f05701e2eb92923532594ba38dde7954ae348e\" pid:5448 exited_at:{seconds:1751868748 nanos:704914837}"
Jul 7 06:12:32.005987 containerd[1591]: time="2025-07-07T06:12:32.005939639Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe\" id:\"e3c90b08a2891846628b9cbf00eca35573249f300ee4bdbc8182a3deba76f618\" pid:5472 exited_at:{seconds:1751868752 nanos:5585403}"
Jul 7 06:12:32.730399 systemd[1]: Started sshd@18-10.0.0.94:22-10.0.0.1:46758.service - OpenSSH per-connection server daemon (10.0.0.1:46758).
Jul 7 06:12:32.813266 sshd[5485]: Accepted publickey for core from 10.0.0.1 port 46758 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:32.815233 sshd-session[5485]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:32.820324 systemd-logind[1529]: New session 19 of user core.
Jul 7 06:12:32.827284 systemd[1]: Started session-19.scope - Session 19 of User core.
Jul 7 06:12:33.054565 sshd[5487]: Connection closed by 10.0.0.1 port 46758
Jul 7 06:12:33.054912 sshd-session[5485]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:33.061202 systemd[1]: sshd@18-10.0.0.94:22-10.0.0.1:46758.service: Deactivated successfully.
Jul 7 06:12:33.064042 systemd[1]: session-19.scope: Deactivated successfully.
Jul 7 06:12:33.065250 systemd-logind[1529]: Session 19 logged out. Waiting for processes to exit.
Jul 7 06:12:33.068875 systemd-logind[1529]: Removed session 19.
Jul 7 06:12:37.692382 containerd[1591]: time="2025-07-07T06:12:37.692329569Z" level=info msg="TaskExit event in podsandbox handler container_id:\"055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8\" id:\"9190b53bbcd9394a27407871a3b1cafac39b6a5340ef4c1dec4338655ad33b40\" pid:5520 exited_at:{seconds:1751868757 nanos:691845096}"
Jul 7 06:12:38.082129 systemd[1]: Started sshd@19-10.0.0.94:22-10.0.0.1:46764.service - OpenSSH per-connection server daemon (10.0.0.1:46764).
Jul 7 06:12:38.135903 sshd[5531]: Accepted publickey for core from 10.0.0.1 port 46764 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:38.137943 sshd-session[5531]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:38.143010 systemd-logind[1529]: New session 20 of user core.
Jul 7 06:12:38.149331 systemd[1]: Started session-20.scope - Session 20 of User core.
Jul 7 06:12:38.323125 sshd[5533]: Connection closed by 10.0.0.1 port 46764
Jul 7 06:12:38.323554 sshd-session[5531]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:38.330472 systemd[1]: sshd@19-10.0.0.94:22-10.0.0.1:46764.service: Deactivated successfully.
Jul 7 06:12:38.333064 systemd[1]: session-20.scope: Deactivated successfully.
Jul 7 06:12:38.334144 systemd-logind[1529]: Session 20 logged out. Waiting for processes to exit.
Jul 7 06:12:38.336052 systemd-logind[1529]: Removed session 20.
Jul 7 06:12:43.338022 systemd[1]: Started sshd@20-10.0.0.94:22-10.0.0.1:49498.service - OpenSSH per-connection server daemon (10.0.0.1:49498).
Jul 7 06:12:43.366513 kubelet[2730]: E0707 06:12:43.366477 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:12:43.391953 sshd[5546]: Accepted publickey for core from 10.0.0.1 port 49498 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:43.394161 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:43.402223 systemd-logind[1529]: New session 21 of user core.
Jul 7 06:12:43.409427 systemd[1]: Started session-21.scope - Session 21 of User core.
Jul 7 06:12:43.585447 sshd[5548]: Connection closed by 10.0.0.1 port 49498
Jul 7 06:12:43.585920 sshd-session[5546]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:43.595509 systemd[1]: sshd@20-10.0.0.94:22-10.0.0.1:49498.service: Deactivated successfully.
Jul 7 06:12:43.597805 systemd[1]: session-21.scope: Deactivated successfully.
Jul 7 06:12:43.598817 systemd-logind[1529]: Session 21 logged out. Waiting for processes to exit.
Jul 7 06:12:43.602358 systemd[1]: Started sshd@21-10.0.0.94:22-10.0.0.1:49500.service - OpenSSH per-connection server daemon (10.0.0.1:49500).
Jul 7 06:12:43.603735 systemd-logind[1529]: Removed session 21.
Jul 7 06:12:43.655138 sshd[5561]: Accepted publickey for core from 10.0.0.1 port 49500 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:43.656813 sshd-session[5561]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:43.662423 systemd-logind[1529]: New session 22 of user core.
Jul 7 06:12:43.673301 systemd[1]: Started session-22.scope - Session 22 of User core.
Jul 7 06:12:44.439842 sshd[5563]: Connection closed by 10.0.0.1 port 49500
Jul 7 06:12:44.440507 sshd-session[5561]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:44.454528 systemd[1]: sshd@21-10.0.0.94:22-10.0.0.1:49500.service: Deactivated successfully.
Jul 7 06:12:44.457554 systemd[1]: session-22.scope: Deactivated successfully.
Jul 7 06:12:44.459143 systemd-logind[1529]: Session 22 logged out. Waiting for processes to exit.
Jul 7 06:12:44.465814 systemd[1]: Started sshd@22-10.0.0.94:22-10.0.0.1:49506.service - OpenSSH per-connection server daemon (10.0.0.1:49506).
Jul 7 06:12:44.467381 systemd-logind[1529]: Removed session 22.
Jul 7 06:12:44.535748 sshd[5574]: Accepted publickey for core from 10.0.0.1 port 49506 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:44.537802 sshd-session[5574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:44.542863 systemd-logind[1529]: New session 23 of user core.
Jul 7 06:12:44.556298 systemd[1]: Started session-23.scope - Session 23 of User core.
Jul 7 06:12:45.368658 kubelet[2730]: E0707 06:12:45.368194 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:12:45.524463 sshd[5576]: Connection closed by 10.0.0.1 port 49506
Jul 7 06:12:45.524788 sshd-session[5574]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:45.537395 systemd[1]: sshd@22-10.0.0.94:22-10.0.0.1:49506.service: Deactivated successfully.
Jul 7 06:12:45.539846 systemd[1]: session-23.scope: Deactivated successfully.
Jul 7 06:12:45.540828 systemd-logind[1529]: Session 23 logged out. Waiting for processes to exit.
Jul 7 06:12:45.545596 systemd[1]: Started sshd@23-10.0.0.94:22-10.0.0.1:49514.service - OpenSSH per-connection server daemon (10.0.0.1:49514).
Jul 7 06:12:45.546879 systemd-logind[1529]: Removed session 23.
Jul 7 06:12:45.600587 sshd[5599]: Accepted publickey for core from 10.0.0.1 port 49514 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:45.603045 sshd-session[5599]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:45.609751 systemd-logind[1529]: New session 24 of user core.
Jul 7 06:12:45.619328 systemd[1]: Started session-24.scope - Session 24 of User core.
Jul 7 06:12:46.216383 sshd[5601]: Connection closed by 10.0.0.1 port 49514
Jul 7 06:12:46.216818 sshd-session[5599]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:46.231750 systemd[1]: sshd@23-10.0.0.94:22-10.0.0.1:49514.service: Deactivated successfully.
Jul 7 06:12:46.236000 systemd[1]: session-24.scope: Deactivated successfully.
Jul 7 06:12:46.238434 systemd-logind[1529]: Session 24 logged out. Waiting for processes to exit.
Jul 7 06:12:46.244971 systemd[1]: Started sshd@24-10.0.0.94:22-10.0.0.1:49518.service - OpenSSH per-connection server daemon (10.0.0.1:49518).
Jul 7 06:12:46.245983 systemd-logind[1529]: Removed session 24.
Jul 7 06:12:46.304442 sshd[5613]: Accepted publickey for core from 10.0.0.1 port 49518 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:46.306479 sshd-session[5613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:46.312972 systemd-logind[1529]: New session 25 of user core.
Jul 7 06:12:46.323549 systemd[1]: Started session-25.scope - Session 25 of User core.
Jul 7 06:12:46.456612 sshd[5615]: Connection closed by 10.0.0.1 port 49518
Jul 7 06:12:46.457003 sshd-session[5613]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:46.461975 systemd[1]: sshd@24-10.0.0.94:22-10.0.0.1:49518.service: Deactivated successfully.
Jul 7 06:12:46.464189 systemd[1]: session-25.scope: Deactivated successfully.
Jul 7 06:12:46.465283 systemd-logind[1529]: Session 25 logged out. Waiting for processes to exit.
Jul 7 06:12:46.466873 systemd-logind[1529]: Removed session 25.
Jul 7 06:12:51.469840 systemd[1]: Started sshd@25-10.0.0.94:22-10.0.0.1:59046.service - OpenSSH per-connection server daemon (10.0.0.1:59046).
Jul 7 06:12:51.519693 sshd[5630]: Accepted publickey for core from 10.0.0.1 port 59046 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:51.521274 sshd-session[5630]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:51.526214 systemd-logind[1529]: New session 26 of user core.
Jul 7 06:12:51.539358 systemd[1]: Started session-26.scope - Session 26 of User core.
Jul 7 06:12:51.652287 sshd[5632]: Connection closed by 10.0.0.1 port 59046
Jul 7 06:12:51.652618 sshd-session[5630]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:51.657959 systemd[1]: sshd@25-10.0.0.94:22-10.0.0.1:59046.service: Deactivated successfully.
Jul 7 06:12:51.660655 systemd[1]: session-26.scope: Deactivated successfully.
Jul 7 06:12:51.661799 systemd-logind[1529]: Session 26 logged out. Waiting for processes to exit.
Jul 7 06:12:51.663301 systemd-logind[1529]: Removed session 26.
Jul 7 06:12:52.646851 containerd[1591]: time="2025-07-07T06:12:52.646791742Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d2cc9c16259578df8f2a5a14ce0b2a8eac94cf2fa408527a81926f580f4ae506\" id:\"71040c870e483afae9477fccf7ad87a16f5505e10ac11003b4550a718b5528c4\" pid:5656 exited_at:{seconds:1751868772 nanos:646383638}"
Jul 7 06:12:56.666258 systemd[1]: Started sshd@26-10.0.0.94:22-10.0.0.1:59056.service - OpenSSH per-connection server daemon (10.0.0.1:59056).
Jul 7 06:12:56.718165 sshd[5671]: Accepted publickey for core from 10.0.0.1 port 59056 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:12:56.720432 sshd-session[5671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:12:56.725824 systemd-logind[1529]: New session 27 of user core.
Jul 7 06:12:56.737271 systemd[1]: Started session-27.scope - Session 27 of User core.
Jul 7 06:12:56.860626 sshd[5673]: Connection closed by 10.0.0.1 port 59056
Jul 7 06:12:56.860936 sshd-session[5671]: pam_unix(sshd:session): session closed for user core
Jul 7 06:12:56.867714 systemd[1]: sshd@26-10.0.0.94:22-10.0.0.1:59056.service: Deactivated successfully.
Jul 7 06:12:56.870633 systemd[1]: session-27.scope: Deactivated successfully.
Jul 7 06:12:56.871854 systemd-logind[1529]: Session 27 logged out. Waiting for processes to exit.
Jul 7 06:12:56.873757 systemd-logind[1529]: Removed session 27.
Jul 7 06:12:58.670771 containerd[1591]: time="2025-07-07T06:12:58.670698277Z" level=info msg="TaskExit event in podsandbox handler container_id:\"f6a493223d8643b88d14c582eb45fa96fdab0c0f43ca694b7be578b227841ffe\" id:\"9677bea317cf7f195b70be842dea35b52dcd7b75e44129bc4f461d20e0c765cb\" pid:5699 exited_at:{seconds:1751868778 nanos:670182280}"
Jul 7 06:13:00.367025 kubelet[2730]: E0707 06:13:00.366964 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:13:01.883861 systemd[1]: Started sshd@27-10.0.0.94:22-10.0.0.1:60368.service - OpenSSH per-connection server daemon (10.0.0.1:60368).
Jul 7 06:13:01.970344 sshd[5713]: Accepted publickey for core from 10.0.0.1 port 60368 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:13:01.972146 sshd-session[5713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:13:01.977822 systemd-logind[1529]: New session 28 of user core.
Jul 7 06:13:01.986285 systemd[1]: Started session-28.scope - Session 28 of User core.
Jul 7 06:13:02.255173 sshd[5715]: Connection closed by 10.0.0.1 port 60368
Jul 7 06:13:02.255791 sshd-session[5713]: pam_unix(sshd:session): session closed for user core
Jul 7 06:13:02.260918 systemd[1]: sshd@27-10.0.0.94:22-10.0.0.1:60368.service: Deactivated successfully.
Jul 7 06:13:02.263553 systemd[1]: session-28.scope: Deactivated successfully.
Jul 7 06:13:02.264750 systemd-logind[1529]: Session 28 logged out. Waiting for processes to exit.
Jul 7 06:13:02.266935 systemd-logind[1529]: Removed session 28.
Jul 7 06:13:05.366183 kubelet[2730]: E0707 06:13:05.365934 2730 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Jul 7 06:13:07.271701 systemd[1]: Started sshd@28-10.0.0.94:22-10.0.0.1:60382.service - OpenSSH per-connection server daemon (10.0.0.1:60382).
Jul 7 06:13:07.339255 sshd[5729]: Accepted publickey for core from 10.0.0.1 port 60382 ssh2: RSA SHA256:f18dB8zRu6tlNxBqmR8LZaZDJCd15iHz/95DxGwb5s0
Jul 7 06:13:07.341403 sshd-session[5729]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jul 7 06:13:07.347765 systemd-logind[1529]: New session 29 of user core.
Jul 7 06:13:07.352507 systemd[1]: Started session-29.scope - Session 29 of User core.
Jul 7 06:13:07.577457 sshd[5732]: Connection closed by 10.0.0.1 port 60382
Jul 7 06:13:07.577717 sshd-session[5729]: pam_unix(sshd:session): session closed for user core
Jul 7 06:13:07.582697 systemd[1]: sshd@28-10.0.0.94:22-10.0.0.1:60382.service: Deactivated successfully.
Jul 7 06:13:07.585586 systemd[1]: session-29.scope: Deactivated successfully.
Jul 7 06:13:07.588120 systemd-logind[1529]: Session 29 logged out. Waiting for processes to exit.
Jul 7 06:13:07.589951 systemd-logind[1529]: Removed session 29.
Jul 7 06:13:07.693438 containerd[1591]: time="2025-07-07T06:13:07.693355616Z" level=info msg="TaskExit event in podsandbox handler container_id:\"055ef1532d50ebcaca96ed2881b174f755f950129b37896000faf720df72b5b8\" id:\"e623ce1539551d1c263e2a8bd0d3ffa6ea5993a5932dbdc782e6901d821ab2c8\" pid:5756 exited_at:{seconds:1751868787 nanos:692950289}"